Article

NDT Method for Line Laser Welding Based on Deep Learning and One-Dimensional Time-Series Data

Department of Computer and Information Technology, Liaoning Normal University, Dalian 116000, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(15), 7837; https://doi.org/10.3390/app12157837
Submission received: 13 July 2022 / Revised: 29 July 2022 / Accepted: 1 August 2022 / Published: 4 August 2022

Abstract

Welding testing is particularly important in industrial systems, but current mainstream welding non-destructive testing technologies still show deficiencies in testing performance, anti-noise capability and defect identification. Developments in structured-light non-destructive testing, deep learning, signal processing and related fields have made it possible to propose new approaches to welding non-destructive testing. This study used a laser sensor to propose a non-destructive method for testing welding defects in seam contours. To address the low sampling rates and poor recognition accuracy of traditional welding defect detection methods, the proposed method introduces image encoding into laser sensing and applies deep-learning algorithms to the classification and detection of weld defect images, providing a framework for the detection and classification of pre-encoded laser weld seam images. Treating the center trajectory data extracted from the original weld image as a one-dimensional sequence, we encoded the one-dimensional laser data into corresponding two-dimensional time-series images and, by applying a deep neural network, realized welding defect classification and detection. Experiments verified that the proposed method achieves higher accuracy than traditional methods that classify and detect defects directly from two-dimensional welding images.

1. Introduction

Welding inspection, i.e., inspecting the quality of welded products to ensure the integrity, reliability, safety and usability of the welded structure, is widely used in the aerospace, aviation, automotive, machinery, shipbuilding and other industries.
Although welding technology has developed to a considerably mature stage, welding defects may occur as a result of improper manual operation, unstable environments and welding equipment problems. As shown in Figure 1, the defects can include porosity, concavity, cracks, etc. A good welding defect detection technology will improve the productivity of the manufacturing industry, speed up the production cycle for high-quality products and cut the costs of labor and materials [1].
The most traditional welding inspection method, carried out by experienced professionals with the naked eye and specialized tools, is not only inefficient but also demands a large number of professionals. Moreover, detection accuracy cannot be guaranteed because of visual fatigue and other problems caused by long working hours. Other methods can be categorized as either destructive or non-destructive depending on how the detection is conducted [2]. Non-destructive testing methods can achieve detection without causing any damage to the tested object and have therefore been widely used and studied.
Non-destructive testing methods include magnetic particle testing, eddy current testing, magneto-optical imaging testing, ultrasonic testing, infrared testing, penetrant testing and phased array ultrasonic testing, each of which has its own limitations [3,4,5,6,7]. The first three methods place requirements on the type of material to be tested. Specifically, magnetic particle testing requires the test piece to be made of ferromagnetic material; eddy current testing is only applicable to the detection of surface and near-surface defects in conductive materials, and many interference factors are involved in the detection process; and magneto-optical imaging inspection produces images in which the object or background is unclear, requiring a series of further image processing algorithms to improve the contrast of the magneto-optical image and highlight the welding characteristic information. Ultrasonic testing and infrared testing, on the other hand, place high requirements on the surface of the object to be tested. Ultrasonic testing does not work well on rough surfaces, because roughness interferes with the ultrasonic projection and thus affects the accuracy of the results. Infrared inspection cannot assess the shape, size or location of welding defects. The performance of penetrant inspection is significantly affected by the imaging agent and the testing environment. Phased array ultrasonic testing is a newer technology in the field of non-destructive testing, with high sensitivity, high resolution and the ability to detect complex workpieces and deep defects. However, it also has shortcomings: defects are not displayed intuitively, making them difficult to characterize; real-time detection is not possible; it is only suitable for detecting internal defects larger than 5 mm with regular shapes; and it places high demands on operators.
Another method of non-destructive testing, the structured light-based non-destructive testing method, though not sensitive to the internal defects of the weld, has been widely used as an excellent non-destructive testing method for the surface defects of welds. This method is generally implemented to generate original images and data by laser scanning the weldment with either laser points or beams [8]. Structured-light inspection features high accuracy, compact hardware and a high sampling rate, and line structured-light inspection has turned out to have better stability, efficiency and performance. Most of the existing structured-light non-destructive testing methods are based on structured-light images, which cannot give full play to the advantages of high-precision data and do not have good anti-noise properties. With the rise of fields such as deep learning and signal processing, improving the performance of structured-light non-destructive testing with the help of high-performance models of dimensional expansion and convolutional neural networks would be of great value to research. The expansion of the scale and dimensions of defect features through the encoding of high-precision welding data and the recognition of two-dimensional encoded images by making full use of the advantages of mature convolutional neural network models are the focus of this paper.
Accordingly, starting from the original grayscale image of the weld contour collected with a laser sensor, this study treated the corresponding welding data as one-dimensional time-series data carrying all the defect characteristics of the weld and adopted the Gramian angular field and Markov transition field methods to encode the data into two-dimensional images. LeNet, AlexNet, ResNet, VGG and other deep-learning algorithms were then applied to classify and recognize the four types of welding detection results: no defects, holes, burrs and depressions. The flow chart for this method is shown in Figure 2. Experiments verified that this method is 4–6% more accurate in identifying and classifying the four types of welding detection results than detection methods that process the welding images directly.
In recent years, non-destructive testing has been widely used in welding manufacturing, additive manufacturing (AM), textile reinforced concrete (TRC), building performance diagnostic inspections and other popular fields. Although NDT technology has been developed over decades and applied in various manufacturing inspection scenarios, it still has shortcomings [9,10,11]. NDT techniques such as visual inspection, magnetic particle inspection, penetrant inspection, ultrasonic inspection, radiographic inspection, acoustic emission and eddy current inspection are mostly manual and heavily dependent upon inspectors’ knowledge and experience, leaving room for errors [12].
The optimization methods for welding defect detection can be divided into three branches: traditional algorithms, machine-learning methods and deep-learning methods. Some scholars have proposed traditional welding defect detection methods based on the morphological and geometric characteristics of welding images. For example, Masoumeh Aminzadeh et al. proposed a background subtraction technique for welding defect detection based on grayscale morphology. In this method, which uses optical-inspection non-destructive quality monitoring, the low computational load of the morphological operations makes it more computationally efficient than background subtraction techniques such as spline approximation and surface fitting. The performance of this technique was tested by applying it to the detection of defects in welds with non-uniform intensity distributions, where the defects were precisely segmented [13]. Zeng et al. proposed a welding seam recognition method using two directional lights. First, directional lights were projected onto the edges of the seams to generate unique artificial light and shadow (LS) features. Then, image processing algorithms based on thresholding and edge extraction were used to calculate the accurate edges of the seams to achieve weld recognition. Finally, a welding seam inspection platform was built, and welding seam identification and deviation correction experiments were carried out. The experimental results showed that the proposed detection method could effectively identify the edge of the seam [14]. Common weld detection algorithms are likely to be disturbed by noise from spatter and arcing during the welding process. A weld seam recognition algorithm based on structured-light vision has been proposed to overcome this challenge. The algorithm can be divided into three steps: initial laser center line recognition, online laser center line detection and weld feature extraction. A Laplacian of Gaussian filter is used for the recognition of the laser center line. Then, an algorithm based on the NURBS-snake model detects the laser center line online in a dynamic region of interest (ROI). Using the line obtained from the previous step, feature points are determined through segmentation and straight-line fitting, and the position of the weld seam can be calculated from the feature points. The accuracy, efficiency and robustness of the recognition algorithm have been verified experimentally [15]. However, the above-mentioned welding detection methods have turned out to have low detection accuracy, and their hand-crafted structural features can hardly be applied in other contexts.
With the rapid development of machine-learning technology, a variety of machine learning-based NDT methods have emerged in the field of additive industrial manufacturing [16]. Machine learning (ML) has been applied to various aspects of additive manufacturing (AM) to improve the whole design and manufacturing workflow, especially in the era of Industry 4.0. Goh et al. discussed the application of various types of machine-learning techniques to different aspects of additive manufacturing [17]. Various machine-learning methods have also been proposed in the field of welding defect detection. Hongquan Jiang et al. proposed a new method for weld defect classification based on the analytic hierarchy process (AHP) and Dempster–Shafer (DS) evidence theory. The DS-based method was presented to improve classification accuracy and included the calculation of standard feature values based on frequency histogram analysis and an improved Dempster's combination rule based on WF; a case study on the classification of steam turbine weld defects was provided to illustrate and evaluate the proposed techniques [18]. Sudhagar et al. proposed a system for the detection and classification of defective welds using weld surface images. Weld surfaces produced under different welding conditions were captured by a digital camera and processed to extract features. The features were extracted from the weld surface images with a maximally stable extremal region algorithm and used as the input for classification of the weld joint, with a support vector machine algorithm used to classify the welds [19]. Lee et al. proposed a system for monitoring the welding of galvanized steel sheets with a spectrometer; the Fisher criterion was used to rank defect features, and the k-nearest neighbor algorithm was applied to the ranked features to classify defects [20]. García-Moreno et al. proposed a method based on machine learning and the random forest (RF) algorithm to classify porosity and achieved high accuracy. The proposed method was divided into three stages: a preprocessing stage (image denoising, smoothing and deblurring to highlight the areas with pores), a feature extraction stage (segmentation of the pores and of the morphological/geometrical features that describe the porosity) and an intelligent classifier stage (definition, training, testing and validation of the random forest classifier) [21]. Compared to traditional defect detection methods, these machine-learning methods have delivered better classification and detection results. However, with the rapid development of deep learning over the past few years, deep-learning methods have surpassed machine-learning methods in classification and recognition accuracy.
At present, deep-learning technology is gradually maturing. When trained on large numbers of data samples, deep learning has shown better performance in welding image classification than traditional machine learning, and a variety of welding defect detection methods adopting deep neural networks have been proposed. For example, Je-Kang Park et al. proposed a method based on a convolutional neural network (CNN) that uses a single RGB camera to inspect welding defects on the transmission surface of an engine. The proposed method consists of two steps, beginning with extraction of the welding area to be inspected from the captured image; in this first step, a CNN-based approach is used to detect the center of the engine transmission in the image. In the second stage, the extracted area is identified by another CNN as defective or non-defective [22]. Zhifen Zhang et al. designed an 11-layer CNN classification model based on weld images to identify weld penetration defects. The CNN model made full use of arc light by combining it in various ways to form complementary features, and test results showed that the model performed better than their previous work [23]. Industrial X-ray weld image defect detection is an important research field for non-destructive testing (NDT) [24]. Dong et al. proposed a multitask deep one-class CNN for defect classification. They built a stacked encoder–decoder autoencoder to learn feature representations from normal images, with the encoder used as a feature extractor based on a hard parameter-sharing scheme for multitask learning. For defect detection, their approach achieved results almost as good as supervised methods, even without any annotated data [25]. Chen et al. focused on establishing an end-to-end automatic detection model for X-ray welding defects to improve the accuracy and efficiency of detection based on a deep-learning algorithm. Considering the feature information of welding defects, their study built on the Faster R-CNN method and used the deep residual network Res2Net to improve the feature extraction ability [26].
In the above-mentioned methods, most of the welding datasets were composed of X-ray images or welding images collected by a CCD camera, and these types of welding images have certain limitations. When X-ray images or CCD camera images are used for welding defect classification, noise is introduced and the classification accuracy is affected. Researchers have proposed several noise reduction methods for processing image noise, including skeleton extraction, morphological opening and closing operations, filtering and other approaches, but the results are not good enough and residual noise remains. Furthermore, X-ray images and images collected by a CCD camera are not sensitive to welding surface defects such as depressions and burrs, and the radiation produced during X-ray image acquisition is a problem that needs to be considered. To address these downsides, the method proposed in this paper does not work directly on the welding image but processes the corresponding data from the original weld image collected by the laser acquisition device.
Deep learning has achieved considerable advances in the field of computer vision, but, for one-dimensional time series, direct application of general predictive models does not work well. The problem can be attributed to the difficulty of training neural networks, the scarcity of large-scale labeled datasets and insufficient research on 1D-CNNs compared to 2D-CNNs. If we convert one-dimensional time series into corresponding two-dimensional time-series images, we can achieve better recognition results, as in the image field. In the field of time-series classification, many researchers have also proposed related methods. For example, Hatami et al. used recurrence plots (RPs) to transform time series into 2D texture images and then took advantage of the deep CNN classifier [27]. Li et al. first adopted the slide relative position to convert the time series data into 2D images during preprocessing and then employed a CNN to classify these images. This made the best use of the advantages of CNN in image recognition [28].
In welding defect detection and recognition, the results of structured-light sampling of welds are similar to a one-dimensional time series. Therefore, this study extracted the center trajectory of the weld structured-light stripe, regarded it as one-dimensional time-series sampling at a fixed time interval and treated the entire dataset as one-dimensional time-series data. By encoding the one-dimensional time-series data into two-dimensional time-series images, the detection and classification of weld defects can be realized. Compared with other welding inspection methods, the method presented in this study has three advantages: first, it is easier to highlight defects by expanding the dimensions and scales of the weld features; second, treating the data points as a one-dimensional time series makes the description of the contour features of the weld more accurate; third, a one-dimensional time series can be encoded as two-dimensional images that achieve higher classification accuracy with existing deep-learning models, and the generalization ability of the algorithm is stronger and more stable.

2. Method of This Study

Using a laser profile sensor, we sampled and inspected the welding seam of a steel plate, realizing welding surface classification of hole, burr, depression and non-defect samples in the steel plate welding process. A burr defect is defined herein as any small, visible protrusion in the weld area; a hole defect is defined as any visible void area; and a depression defect is defined as any weld area below the level of the steel plate. The study was carried out in three steps. First, we obtained one-dimensional weld data by denoising the grayscale image of the original weld and extracting the center trajectory of the weld structured-light stripe. Then, we treated the weld data as a one-dimensional time series and encoded the sequence into two-dimensional time-series images. Finally, we used deep neural networks to classify the weld defects and verify the advanced nature of the method proposed in this paper. The general process of the study is shown in Figure 2.

2.1. Image Denoising Processing

The original structured-light image had to be smoothed and denoised, since it contained obvious noise interference. We conducted a comparative analysis of commonly used denoising algorithms, including the median filter, mean filter, Gaussian filter and adaptive median filter. Considering that the laser light line is narrow, the filter had to preserve the continuity of the laser line while removing the large-scale reflection halo noise produced during laser sampling. The adaptive median filter was therefore used to denoise and smooth the original structured-light image. The processed image is shown in Figure 3b.
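As a rough illustration of this step, the sketch below implements a classic adaptive median filter in Python with NumPy. It is a minimal reference implementation of the general algorithm, not the authors' exact code; the maximum window size and the grayscale data layout are assumptions.

```python
import numpy as np

def adaptive_median_filter(img, max_window=7):
    """Classic adaptive median filter: the window around each pixel grows
    until the local median is no longer an impulse; the pixel is replaced
    only if it is itself an impulse. This preserves thin structures such
    as a narrow laser stripe better than a fixed-size median filter.
    max_window is an assumed parameter, not a value from the paper."""
    offset = max_window // 2
    padded = np.pad(img, offset, mode="edge")
    out = img.astype(np.float64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for k in range(1, offset + 1):
                win = padded[y + offset - k:y + offset + k + 1,
                             x + offset - k:x + offset + k + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:            # median is not an impulse
                    if not (zmin < img[y, x] < zmax):
                        out[y, x] = zmed          # replace impulse pixel
                    break                         # otherwise keep original
            else:                                 # window reached max size
                out[y, x] = zmed
    return out
```

The pure-Python loops are written for clarity only; a vectorized or C-accelerated variant would be needed for real-time use.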

2.2. Extraction of the Trajectory of the Weld Center

After the image was denoised, the center trajectory of the light strip of the original structured-light image was extracted using the Steger algorithm. Common light strip center trajectory-extraction algorithms used in welding inspection include the geometric center method, extreme value method, gray barycentric method, direction template method and Hessian matrix method, all of which, however, have certain limitations. Both the geometric center method and the extreme value method have fast extraction speeds, but they are easily affected by image noise. The gray barycentric method is not sensitive to the translation of the light stripe section, though it can reduce the error caused by the asymmetry of gray distribution. The directional template method has high accuracy and good algorithm robustness, but the positioning accuracy is only at the pixel level, and it also involves a large amount of calculations and slow processing speed. The Hessian matrix method has high accuracy but requires multiple, large-scale two-dimensional Gaussian convolutions, and the calculation speed is slow. In comparison to the algorithms mentioned above, the Steger algorithm, based on the Hessian matrix, can achieve sub-pixel-precision positioning of the center of the light strip by finding the Taylor expansion in the normal direction of the light strip with strong robustness and fast processing speeds.
The Steger algorithm calculates the eigenvalues and eigenvectors of the Hessian matrix to obtain the normal direction of the light-strip center of the structured-light image and then performs a Taylor expansion in the normal direction to obtain the corresponding normal extreme point, which is the required sub-pixel light-strip center trajectory [29]. For any point $(i, j)$ on the structured-light line, the Hessian matrix is expressed as:

$$H(i,j) = \begin{bmatrix} \gamma_{ii} & \gamma_{ij} \\ \gamma_{ji} & \gamma_{jj} \end{bmatrix} \tag{1}$$

where $\gamma_{ii}$, $\gamma_{ij}$, $\gamma_{ji}$ and $\gamma_{jj}$ represent the second-order partial derivatives of the image and $H(i,j)$ represents the Hessian matrix.
The eigenvector of the maximum eigenvalue of the Hessian matrix corresponds to the normal direction of the light strip, denoted $(n_i, n_j)$. With the point $(i_0, j_0)$ as the reference point, the sub-pixel coordinates of the center of the light strip are expressed as:

$$(p_i, p_j) = (i_0 + t n_i,\; j_0 + t n_j) \tag{2}$$
where $\gamma_i$ and $\gamma_j$ are the first-order partial derivatives of the image and $t$ is expressed as:

$$t = -\frac{n_i \gamma_i + n_j \gamma_j}{n_i^2 \gamma_{ii} + 2 n_i n_j \gamma_{ij} + n_j^2 \gamma_{jj}} \tag{3}$$

If $(t n_i, t n_j) \in [-0.5, 0.5] \times [-0.5, 0.5]$, that is, if the point where the first derivative is zero lies within the current pixel, then $(i_0, j_0)$ is a center point of the light strip and $(p_i, p_j)$ gives its sub-pixel coordinates.
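To make Equations (1)–(3) concrete, here is a minimal sketch of Steger-style sub-pixel centerline extraction with NumPy and SciPy. The Gaussian scale sigma and the ridge-strength threshold are assumed parameters, and the per-pixel loop is written for clarity rather than speed; this is not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def steger_centerline(img, sigma=2.0, threshold=1.0):
    """Sub-pixel light-stripe centers: Gaussian derivatives build the
    Hessian (Eq. (1)), its dominant eigenvector gives the stripe normal,
    and a Taylor expansion along that normal locates the extremum
    (Eqs. (2)-(3)). sigma and threshold are assumed values."""
    img = img.astype(np.float64)
    # First- and second-order Gaussian derivatives (i = column, j = row).
    gi  = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    gj  = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    gii = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    gjj = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    gij = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    centers = []
    h, w = img.shape
    for j in range(h):
        for i in range(w):
            H = np.array([[gii[j, i], gij[j, i]],
                          [gij[j, i], gjj[j, i]]])
            vals, vecs = np.linalg.eigh(H)
            k = int(np.argmax(np.abs(vals)))
            if np.abs(vals[k]) < threshold:
                continue                     # ridge response too weak
            ni, nj = vecs[:, k]              # stripe normal direction
            denom = (ni * ni * gii[j, i] + 2 * ni * nj * gij[j, i]
                     + nj * nj * gjj[j, i])
            if denom == 0:
                continue
            t = -(ni * gi[j, i] + nj * gj[j, i]) / denom   # Eq. (3)
            # Extremum must fall inside the current pixel.
            if abs(t * ni) <= 0.5 and abs(t * nj) <= 0.5:
                centers.append((i + t * ni, j + t * nj))   # Eq. (2)
    return np.array(centers)
```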
In the experiment, the text data for the weld center trajectory were obtained using the Steger algorithm, and the distances between points in the horizontal direction were found to be equal. If this equal horizontal spacing is regarded as a fixed time interval, the data can be treated as a special kind of one-dimensional time series. Taking the extension direction of the line laser as the X axis and the height information returned by the line laser sensor as the Y axis, we can construct the image shown in Figure 4.

2.3. Coding One-Dimensional (1D) Data into Two-Dimensional (2D) Time-Series Images (GAF)

When a one-dimensional sequence is converted into a corresponding two-dimensional image, better recognition performance can be achieved using contemporary machine vision. After analysis, the center trajectory text data obtained in this experiment could be treated as special one-dimensional time-series data. We therefore adopted one-dimensional time-series processing methods for the welding data in the experiment and converted them into corresponding two-dimensional time-series images.
In the field of welding defect detection, converting one-dimensional sequence data into two-dimensional images and analyzing them with deep-learning models is a method worth testing. In our experiments, the center trajectory text data were encoded as two-dimensional color time-series images for the neural network learning process. The encoding process is shown in Figure 5.
This article introduces two frameworks for encoding one-dimensional time series as two-dimensional time-series images. The first is the Gramian angular field (GAF) method, which encodes a time series in a polar-coordinate-based representation rather than Cartesian coordinates and applies trigonometric transformations to the resulting angles: using the cosine of the sum of two angles gives the GASF, while using the sine of the difference of two angles gives the GADF. The second framework is the Markov transition field (MTF) method, which uses the Markov transitions in the time series. An image produced with the MTF method represents the first-order Markov transition probabilities along one axis and the time dependency along the other [30].
The principle of the GAF method is to convert the one-dimensional time-series data from the Cartesian coordinate system to the polar coordinate system and to identify the temporal dependency between different time points by considering the angles and angular differences between them. In the GAF method, an image corresponds to a Gram matrix in which each element is a trigonometric function of the angles of a pair of time points. There are two implementation methods: GASF (based on the angular sum) and GADF (based on the angular difference).
Supposing all vectors are unit vectors, the Gram matrix can be written as follows:

$$G_1 = \begin{pmatrix} \cos(\phi_{1,1}) & \cdots & \cos(\phi_{1,n}) \\ \vdots & \ddots & \vdots \\ \cos(\phi_{m,1}) & \cdots & \cos(\phi_{m,n}) \end{pmatrix} \tag{4}$$
where $\phi_{m,n}$ is the angle between two vectors. A univariate time series can explain the data characteristics and underlying states only to a limited extent, so our goal was to find a representation with richer information. The answer was the Gram matrix, which retains the time dependency: the diagonal elements carry information about each individual feature, while the remaining elements capture the relations between the features. The Gram matrix can therefore not only show the features of the data but also reflect the close links between different features.
Given a time series $X = \{x_1, x_2, \ldots, x_n\}$, in order to ensure that the inner product is not biased toward the maximum observation, we normalize $X$ so that all values in the time series fall in the interval $[-1, 1]$ or $[0, 1]$:

$$\tilde{x}_{-1}^{\,m} = \frac{\bigl(x_m - \max(X)\bigr) + \bigl(x_m - \min(X)\bigr)}{\max(X) - \min(X)} \tag{5}$$

$$\tilde{x}_{0}^{\,m} = \frac{x_m - \min(X)}{\max(X) - \min(X)} \tag{6}$$

where $\max(X)$ and $\min(X)$ are the maximum and minimum values in the time series, and $\tilde{x}_{-1}^{\,m}$ and $\tilde{x}_{0}^{\,m}$ are the results normalized to the intervals $[-1, 1]$ and $[0, 1]$, respectively. By encoding the value as the angular cosine and the timestamp as the radius, we represent the rescaled time series $\tilde{X}$ in polar coordinates as follows:
$$\begin{cases} \phi = \arccos(\tilde{x}_m), & -1 \le \tilde{x}_m \le 1,\ \tilde{x}_m \in \tilde{X} \\ r = \dfrac{t_m}{N}, & t_m \in \mathbb{N} \end{cases} \tag{7}$$

In Equation (7), $t_m$ is a timestamp and $N$ is a constant factor that regularizes the span of the polar coordinate system; $\tilde{x}_m$ is the time-series element value normalized to $[-1, 1]$, $\phi$ is the angle in polar coordinates and $r$ is the radius in polar coordinates. This conversion has two advantages.
On the one hand, $\cos(\phi)$ is bijective and monotonic for $\phi \in [0, \pi]$. Therefore, the proposed mapping produces a unique result with a unique inverse mapping in the polar coordinate system for a given time series. On the other hand, the conversion to the polar coordinate representation preserves the time dependency.
After converting the time series into polar coordinates, we can consider the trigonometric sum or difference between each pair of points in order to identify the temporal dependency within different time intervals from an angular perspective. The Gramian summation angular field (GASF) and Gramian difference angular field (GADF) are defined as follows:
$$\begin{cases} \mathrm{GASF} = \bigl[\cos(\phi_m + \phi_n)\bigr] \\ \mathrm{GADF} = \bigl[\sin(\phi_m - \phi_n)\bigr] \end{cases} \tag{8}$$

$$\begin{cases} \cos(\phi_m + \phi_n) = \tilde{X}' \cdot \tilde{X} - \sqrt{I - \tilde{X}^2}\,' \cdot \sqrt{I - \tilde{X}^2} \\ \sin(\phi_m - \phi_n) = \sqrt{I - \tilde{X}^2}\,' \cdot \tilde{X} - \tilde{X}' \cdot \sqrt{I - \tilde{X}^2} \end{cases} \tag{9}$$

where $I$ is the unit row vector $[1, 1, \ldots, 1]$. The two types of Gramian angular fields (GAFs) are in fact quasi-Gram matrices $[\langle \tilde{x}_m, \tilde{x}_n \rangle]$.
The transformed polar coordinate representation can constitute a new class of Gram matrices:

$$G_2 = \begin{pmatrix} \cos(\phi_1 + \phi_1) & \cos(\phi_1 + \phi_2) & \cdots & \cos(\phi_1 + \phi_n) \\ \cos(\phi_2 + \phi_1) & \cos(\phi_2 + \phi_2) & \cdots & \cos(\phi_2 + \phi_n) \\ \vdots & \vdots & \ddots & \vdots \\ \cos(\phi_m + \phi_1) & \cos(\phi_m + \phi_2) & \cdots & \cos(\phi_m + \phi_n) \end{pmatrix} \tag{10}$$

$$G_3 = \begin{pmatrix} \sin(\phi_1 - \phi_1) & \sin(\phi_1 - \phi_2) & \cdots & \sin(\phi_1 - \phi_n) \\ \sin(\phi_2 - \phi_1) & \sin(\phi_2 - \phi_2) & \cdots & \sin(\phi_2 - \phi_n) \\ \vdots & \vdots & \ddots & \vdots \\ \sin(\phi_m - \phi_1) & \sin(\phi_m - \phi_2) & \cdots & \sin(\phi_m - \phi_n) \end{pmatrix} \tag{11}$$

In the above formulas, $G_2$ is the Gram matrix obtained with the GASF method, while $G_3$ is the Gram matrix obtained with the GADF method. The one-dimensional time series is thus encoded as a GASF matrix and a GADF matrix.
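A compact NumPy sketch of the GASF/GADF encoding described by Equations (5)–(11) is given below. The rescaling, the arccos mapping and the angular sum/difference follow the formulas above; the input file name in the usage lines is hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def gramian_angular_fields(series):
    """Encode a 1D series as GASF/GADF matrices: min-max rescale to
    [-1, 1] (Eq. (5)), map values to angles phi = arccos(x) (Eq. (7)),
    then take the cosine of angle sums (Eq. (10)) and the sine of
    angle differences (Eq. (11))."""
    x = np.asarray(series, dtype=np.float64)
    x = ((x - x.max()) + (x - x.min())) / (x.max() - x.min())  # Eq. (5)
    phi = np.arccos(np.clip(x, -1.0, 1.0))                     # Eq. (7)
    gasf = np.cos(phi[:, None] + phi[None, :])                 # Eq. (10)
    gadf = np.sin(phi[:, None] - phi[None, :])                 # Eq. (11)
    return gasf, gadf

# Usage: encode one weld height profile and save it as a color image.
# "weld_center_trajectory.txt" is a hypothetical file of height values.
profile = np.loadtxt("weld_center_trajectory.txt")
gasf, gadf = gramian_angular_fields(profile)
plt.imsave("gadf.png", gadf, cmap="rainbow")
```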

2.4. Coding One-Dimensional (1D) Data into Two-Dimensional (2D) Time-Series Images (MTF)

The Markov transition matrix is insensitive to the temporal dependency of the sequence. Therefore, building on the first-order Markov chain and taking the temporal position relation into account, this study used the Markov transition field (MTF) method [31].

Given a time series $X$, we define $Q$ quantile bins and assign each value $x_i$ to its corresponding bin $q_j$ ($j \in [1, Q]$). A weighted adjacency matrix $W$ of size $Q \times Q$ is then constructed by counting the first-order Markov transitions between bins along the time axis. However, $W$ is insensitive to the distribution of $X$ and to the temporal dependency between time steps. To overcome this shortcoming, the Markov transition field (MTF) is defined as follows:
$$M = \begin{pmatrix} w_{ij \mid x_1 \in q_i, x_1 \in q_j} & \cdots & w_{ij \mid x_1 \in q_i, x_n \in q_j} \\ w_{ij \mid x_2 \in q_i, x_1 \in q_j} & \cdots & w_{ij \mid x_2 \in q_i, x_n \in q_j} \\ \vdots & \ddots & \vdots \\ w_{ij \mid x_n \in q_i, x_1 \in q_j} & \cdots & w_{ij \mid x_n \in q_i, x_n \in q_j} \end{pmatrix} \tag{12}$$

Here, $w_{ij}$ in the MTF is the probability of a transition from $q_i$ to $q_j$. By taking the temporal positions into account, the matrix $W$, which contains the transition probabilities along the magnitude axis, is spread out into the MTF matrix. The main diagonal $M_{ii}$ captures the probability of a transition from each quantile to itself (the self-transition probability). To improve computational efficiency, a blurring kernel is used to average the pixels in each non-overlapping $m \times m$ patch; that is, the transition probabilities are aggregated in each subsequence of length $m$. In this way, one-dimensional time-series data are encoded into a Markov transition field (MTF) matrix.
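The MTF encoding can be sketched in the same style. The number of quantile bins and the patch size of the blurring kernel are assumed parameters; the paper does not report the values it used.

```python
import numpy as np

def markov_transition_field(series, n_bins=8, patch=4):
    """Encode a 1D series as an MTF image (Eq. (12)): assign each value
    to a quantile bin, estimate the first-order Markov transition matrix
    W between bins, spread the transition probabilities over all pairs
    of time steps, then average non-overlapping patch x patch blocks
    (the blurring kernel) to shrink the image. n_bins and patch are
    assumed parameters."""
    x = np.asarray(series, dtype=np.float64)
    n = len(x)
    # Quantile binning: bins[t] is the bin index of x_t.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(x, edges)
    # First-order Markov transition matrix W (row-normalized counts).
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(bins[:-1], bins[1:]):
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)
    # Markov transition field: M[s, t] = W[bin(x_s), bin(x_t)].
    M = W[bins[:, None], bins[None, :]]
    # Aggregate non-overlapping patches to reduce the image size.
    m = (n // patch) * patch
    M = M[:m, :m].reshape(m // patch, patch, m // patch, patch).mean(axis=(1, 3))
    return M
```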

2.5. Neural Network Model

One of the earliest neural networks, the LeNet network reduced heavy computing costs through the use of convolution, parameter sharing and pooling [32]. After LeNet, the AlexNet network appeared and verified the efficiency of deep convolutional neural networks: it uses the ReLU function as the activation function, dropout regularization to control overfitting and the parallel computing power of the GPU to accelerate training [33]. After AlexNet came the VGGNet network, which replaces the large convolutions in AlexNet (11 × 11, 5 × 5) with stacks of 3 × 3 convolution kernels and removes AlexNet's local response normalization layer [34]. With a deeper network structure, smaller convolution kernels and pooled sampling domains, the VGG network model manages to control the number of parameters while obtaining more image features, thereby avoiding excessive computation and an overly complex structure, and many powerful networks have built upon it [35]. Compared to the traditional VGG network, the later ResNet network is less complex and requires fewer parameters. It uses residual connections so that the gradient does not vanish as the number of layers increases; the classification accuracy is therefore improved, and the deep network degradation problem is solved [36].

3. Experiment

3.1. Settings for the Experimental Environment

The main hardware equipment used in this study included an HD6-0050 laser profile sensor and an Alienware R12 computer.
The deep-learning models in the experiment were run on an Alienware R12 machine under the Windows 10 operating system, and the neural network models were trained in the PyCharm IDE with Python 3.7. The environment comprised Keras 2.2.4, TensorFlow-GPU 2.2.0, CUDA 10.1 and cuDNN 7.6.
In the experiment, the weld line structured-light image data were collected in two stages. In the first stage, data were collected with the line structured light perpendicular to the weld; in the second stage, data were collected with the line structured light at a 30-degree angle to the weld. After expansion processing, a total of 6720 original weld structured-light images were obtained. Four image defect types were manually classified and labeled: burr, hole, depression and no defect. The laser had a measuring range of 50 mm, with a horizontal resolution of 0.06 mm and a height resolution of 0.001 mm. The generated image size was 1080 × 520. Figure 6 shows the two-dimensional grayscale structured-light images of the four welding defect classifications collected.

3.2. Experimental Process

The extracted weld center trajectory data were regarded as one-dimensional time-series data with a special time unit and encoded as two-dimensional time-series images of the four weld defect types using the GASF, GADF and MTF methods. Preliminary experiments with a single network model showed that classification results were better when the global variance of the 2D time-series images was maximal. Therefore, in the process of encoding the one-dimensional time series as two-dimensional time-series images, the overall variance of the generated images was maximized by adjusting the parameters. Finally, the sizes of all two-dimensional time-series images corresponding to the weld center trajectory data were unified at 500 × 500 pixels, and GASF, GADF and MTF time-series images of the four defect types were generated, as shown in Figure 7.
We optimized the network models in this experiment. In a neural network, shallow layers generally use maximum pooling to filter out useless information, while deep layers use average pooling to avoid losing too much high-dimensional information. Here, we changed the average pooling layers of the LeNet network to maximum pooling layers to filter out some of the useless information in the images. The ReLU function was applied after each convolution; using the ReLU activation in deep networks avoids the problems of vanishing and exploding gradients while also computing faster and accelerating training. The activation function for all four network models was set to ReLU.
The Adam optimizer is one of the most popular optimizers in deep learning; it is easy to tune and obtains good results quickly. Adam combines the strengths of AdaGrad and RMSProp: it starts every parameter at the same initial learning rate and then adapts each rate independently as learning progresses, using the historical gradient information in a momentum-based algorithm [37]. Due to these advantages, AlexNet, VGG-16 and ResNet-50 were all trained with the Adam optimizer in the experiment.
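Putting the previous two paragraphs together, the sketch below shows a Keras variant of LeNet-5 with max pooling in place of average pooling and ReLU activations, compiled with the Adam optimizer. The input size follows the 500 × 500 time-series images; the filter counts and dense-layer sizes are the classic LeNet-5 values and, like the use of Adam for LeNet (the paper specifies Adam only for the deeper networks), are assumptions rather than reported settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lenet5(input_shape=(500, 500, 3), n_classes=4):
    """LeNet-5 variant with the two modifications described above:
    max pooling instead of average pooling, and ReLU activations
    applied after each convolution."""
    model = models.Sequential([
        layers.Conv2D(6, (5, 5), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),          # was AveragePooling2D in LeNet-5
        layers.Conv2D(16, (5, 5), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    # Adam is specified in the paper for AlexNet/VGG-16/ResNet-50;
    # using it here for LeNet as well is an assumption.
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_lenet5()
model.summary()
```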
In the experiment, some defect types, particularly hole defects, were under-represented in the original weld grayscale images, so a data augmentation operation was carried out for these data types. For the images of the other types, mirroring and rotation were used to enlarge the image data; for the hole-type images, the data were additionally enlarged by rotating the images 90 degrees clockwise about the origin.
After image data augmentation, the sequences of images of each defect type were randomly shuffled. The training, test and validation sets in the experiment were divided in a 3:1:1 ratio by number of images. The same split proportions used for the original weld structured-light grayscale images were applied to the corresponding GASF, GADF and MTF time-series images.
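A minimal sketch of the augmentation and 3:1:1 split described in the last two paragraphs, assuming the images are held in NumPy arrays and that hole-type images carry an assumed label index:

```python
import numpy as np

def augment_and_split(images, labels, hole_label=1, seed=0):
    """Mirror every image, additionally rotate hole-type images 90
    degrees clockwise, shuffle, and split train/test/validation 3:1:1.
    hole_label is an assumed label index, not a value from the paper."""
    aug_x, aug_y = list(images), list(labels)
    for img, y in zip(images, labels):
        aug_x.append(np.fliplr(img)); aug_y.append(y)           # mirror
        if y == hole_label:
            aug_x.append(np.rot90(img, k=-1)); aug_y.append(y)  # 90 deg cw
    aug_x, aug_y = np.array(aug_x), np.array(aug_y)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(aug_x))
    n_train, n_test = 3 * len(idx) // 5, len(idx) // 5          # 3:1:1
    tr = idx[:n_train]
    te = idx[n_train:n_train + n_test]
    va = idx[n_train + n_test:]
    return ((aug_x[tr], aug_y[tr]), (aug_x[te], aug_y[te]),
            (aug_x[va], aug_y[va]))
```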

3.3. Experimental Analysis

After the network parameters were set, four types of images were trained and tested in the GPU environment. Due to the limited number of datasets, we decided to carry out experiments with four classical neural networks. The training results for the last four types of images and the four kinds of networks are shown in Table 1 and Figure 8.
Figure 8 shows the final test accuracy for the images of the four defect types trained with the four network structures. Because the LeNet network is shallow, large fluctuations were inevitable when training on large-scale data, and more iterations were required; the number of iterations needed for convergence in the experiment reached 1000. Even so, LeNet showed the lowest training time together with high test accuracy, which should not be ignored. A comparison of the network inference times is provided in Table 2. As Table 2 shows, as the number of neural network layers increased, longer training times were required and the overall accuracy steadily improved, whereas LeNet needed the shortest time while still maintaining high accuracy. On an RTX 3090 graphics card, an average iteration took only three seconds, and the two-dimensional time-series images had the advantage of being computed slightly faster than the original images. For real-time monitoring, LeNet could therefore achieve better results than the other three neural network models.

From the AlexNet network to the ResNet-50 network, as the number of layers and the training time increased, the number of iterations required for the training curve to stabilize decreased, becoming stable after 200 iterations; the fluctuation range of the network was further reduced and the accuracy further improved. Evidently, high accuracy often requires sacrificing more network training time, and stable training requires a more powerful neural network. An excellent, well-suited neural network model is critical for training stability, network inference time and accuracy.

The right column in Figure 8 shows the loss curves corresponding to the four types of images for the four network models. We found a regularity in the loss curves for the LeNet-5 network: with the same network parameters, the time-series images showed a more obvious convergence trend than the original structured-light images, with the loss curves for the GASF and GADF methods converging rapidly within the first 50 iterations. The same tendency appeared for the AlexNet, VGG-16 and ResNet-50 networks, which is consistent with our initial expectations. The encoded images carry high-precision weld defect feature information from the one-dimensional weld data, and the essence of the methods is the extension of defect feature information from the one-dimensional scale to the two-dimensional scale. As CNNs have strong generalization ability for two-dimensional images, they achieved more sensitive and faster recognition of the time-series images. In addition, we found that the loss curves of the four image types followed a general rule under the same network, which can be clearly seen from the standard deviations of the loss curves in Table 2: with the same network parameters, the loss for the encoded images was more stable, with GADF performing best, which also shows that two-dimensional time-encoded images are very beneficial for neural network learning.
As Table 1 shows, the classification recognition rates of the four networks for the encoded time-series images were all higher than those for the original structured-light images. With the LeNet-5 network, the recognition accuracy for the GASF and GADF time-series images exceeded 90%, with the GASF images 7.75% higher than the original structured-light images. In the AlexNet network, the GADF time-series images had the highest recognition rate, at 98.47%, which was 6.59% higher than the original structured-light images. The recognition rate for the GADF time-series images in the VGG-16 network reached 99.60%, 6.12% higher than the original structured-light images, and the ResNet-50 network also achieved its highest recognition rate with the GADF time-series images, at 98.96%, 4.02% higher than the original structured-light images.

4. Conclusions

This paper presents a method that treats the weld center trajectory data of original grayscale structured-light images, collected with a laser profile sensor, as one-dimensional time-series data. Taking all the defect features of the weld into consideration and adopting a one-dimensional signal processing approach, we encoded the one-dimensional time series as two-dimensional time-series images and applied four deep-learning neural network models (LeNet, AlexNet, ResNet-50 and VGG-16) to classify and detect four common welding defect classifications: burr, concavity, porosity and non-defect. We propose a non-destructive testing method based on line structured light and the encoding of one-dimensional weld data which, compared to the latest non-destructive testing technologies, has distinct advantages: fast and efficient detection, easy operation, high stability, low cost and excellent robustness. The method is based on high-precision structured-light non-destructive testing equipment, and the detection accuracy was greatly improved by the dimensional expansion and by giving full play to the excellent performance of CNNs on two-dimensional images. Experiments showed that the welding seam non-destructive testing method, which encodes high-precision one-dimensional weld seam data as two-dimensional time-series images and combines them with the advantages of deep neural networks, is very effective, validating our hypothesis regarding the power of combining defect dimension expansion with deep learning. Among the four deep-learning neural network models, the VGG-16 model achieved the highest accuracy of 99.60%. The classification accuracy for the two-dimensional time-series images of the four defect types was significantly improved, with the overall accuracy about 4% to 6% higher than that of the traditional structured-light images.
In future research work, we plan to attempt cross-modal fusion of one-dimensional and two-dimensional features by building on existing research. In addition, we suspect that this method may be applicable not only to one-dimensional weld data but also to the one-dimensional ultrasonic data used in other recent NDT technologies, thus making it possible to achieve better detection of deep defects.

5. Patent

This study resulted in a Chinese national invention patent, which is currently published but not officially authorized. Patent title: Nondestructive inspection of welds based on deep learning and two-dimensional time series images. Publication number: CN113947583A; application number: CN202111222050.0; publication date: 2022.1.18.

Author Contributions

Conceptualization, Y.L. and K.Y.; methodology, Y.L. and K.Y.; software, K.Y. and T.L.; validation, T.L. and S.L.; formal analysis, Y.L.; investigation, T.L. and K.Y.; resources, Y.L. and K.Y.; data curation, K.Y.; writing—original draft preparation, Y.L., K.Y. and Y.R.; writing—review and editing, Y.R.; visualization, K.Y., T.L. and S.L.; supervision, Y.R.; project administration, Y.L. and Y.R.; funding acquisition, Y.L. and Y.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the National Science Foundation of China (Grant Nos 62002146 and 61976109), LiaoNing Revitalization Talents Program (Grant Nos XLYC1807073 and XLYC2006005), Liaoning Provincial Key Laboratory Special Fund, Dalian Key Laboratory Special Fund and Shanghai Samin Software Information Technology Co., Ltd., (2019JBM026).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

We have made public all of the weld datasets that supported the results of this study. All weld datasets have been transferred to OneDrive for management. The link address is: https://1drv.ms/u/s!Avc_-c0rg5w7lQMdw6FzwMZYZAmr?e=6wqesl (accessed on 10 July 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Madhvacharyula, A.S.; Pavan, A.V.S.; Gorthi, S.; Chitral, S.; Venkaiah, N.; Kiran, D.V. In situ detection of welding defects: A review. Weld. World 2022, 66, 611–628.
2. Machado, M.A.; Rosado, L.F.S.G.; Mendes, N.A.M.; Miranda, R.M.M.; dos Santos, T.J.G. New directions for inline inspection of automobile laser welds using non-destructive testing. Int. J. Adv. Manuf. Technol. 2022, 118, 1183–1195.
3. Wang, X.; He, C.; He, H.; Xie, W. Simulation and experimental research on nonlinear ultrasonic testing of composite material porosity. Appl. Acoust. 2022, 188, 108528.
4. Gao, X.; Li, Y.; Zhou, X.; Dai, X.; Zhang, Y.; You, D.; Zhang, N. Multidirectional magneto-optical imaging system for weld defects inspection. Opt. Lasers Eng. 2020, 124, 105812.
5. Chen, Y.; Feng, B.; Kang, Y.; Liu, B.; Wang, S.; Duan, Z. A Novel Thermography-Based Dry Magnetic Particle Testing Method. IEEE Trans. Instrum. Meas. 2022, 71, 9505309.
6. He, Y.; Deng, B.; Wang, H.; Cheng, L.; Zhou, K.; Cai, S.; Ciampa, F. Infrared machine vision and infrared thermography with deep learning: A review. Infrared Phys. Technol. 2021, 116, 103754.
7. Chabot, A.; Laroche, N.; Carcreff, E.; Rauch, M.; Hascoët, J.Y. Towards defect monitoring for metallic additive manufacturing components using phased array ultrasonic testing. J. Intell. Manuf. 2020, 31, 1191–1201.
8. Chen, S.; Tao, W.; Zhao, H.; Lv, N. A Review: Application Research of Intelligent 3D Detection Technology Based on Linear-Structured Light. Trans. Intell. Weld. Manuf. 2021, 3, 35–45.
9. Ospitia, N.; Tsangouri, E.; Pourkazemi, A.; Stiens, J.H.; Aggelis, D.G. NDT inspection on TRC and precast concrete sandwich panels: A review. Constr. Build. Mater. 2021, 296, 123622.
10. Chauveau, D. Review of NDT and process monitoring techniques usable to produce high-quality parts by welding or additive manufacturing. Weld. World 2018, 62, 1097–1118.
11. El Masri, Y.; Rakha, T. A scoping review of non-destructive testing (NDT) techniques in building performance diagnostic inspections. Constr. Build. Mater. 2020, 265, 120542.
12. Gupta, M.; Khan, M.A.; Butola, R.; Singari, R.M. Advances in applications of Non-Destructive Testing (NDT): A review. Adv. Mater. Process. Technol. 2021, 7, 1–22.
13. Aminzadeh, M.; Kurfess, T. A novel background subtraction technique based on grayscale morphology for weld defect detection. In Nondestructive Characterization and Monitoring of Advanced Materials, Aerospace, and Civil Infrastructure; SPIE: Bellingham, WA, USA, 2016; Volume 10599, pp. 264–268.
14. Zeng, J.; Chang, B.; Du, D.; Hong, Y.; Zou, Y.; Chang, S. A visual weld edge recognition method based on light and shadow feature construction using directional lighting. J. Manuf. Process. 2016, 24, 19–30.
15. Wang, N.; Zhong, K.; Shi, X.; Zhang, X. A robust weld seam recognition method under heavy noise based on structured-light vision. Robot. Comput.-Integr. Manuf. 2020, 61, 101821.
16. Nasiri, S.; Khosravani, M.R. Machine learning in predicting mechanical behavior of additively manufactured parts. J. Mater. Res. Technol. 2021, 14, 1137–1153.
17. Goh, G.D.; Sing, S.L.; Yeong, W.Y. A review on machine learning in 3D printing: Applications, potential, and challenges. Artif. Intell. Rev. 2021, 54, 63–94.
18. Jiang, H.; Wang, R.; Gao, Z.; Gao, J.; Wang, H. Classification of weld defects based on the analytical hierarchy process and Dempster–Shafer evidence theory. J. Intell. Manuf. 2019, 30, 2013–2024.
19. Sudhagar, S.; Sakthivel, M.; Ganeshkumar, P. Monitoring of friction stir welding based on vision system coupled with Machine learning algorithm. Measurement 2019, 144, 135–143.
20. Lee, S.H.; Mazumder, J.; Park, J.; Kim, S. Ranked feature-based laser material processing monitoring and defect diagnosis using k-NN and SVM. J. Manuf. Process. 2020, 55, 307–316.
21. García-Moreno, A.I.; Alvarado-Orozco, J.M.; Ibarra-Medina, J.; Martínez-Franco, E. Image-based porosity classification in Al-alloys by laser metal deposition using random forests. Int. J. Adv. Manuf. Technol. 2020, 110, 2827–2845.
22. Park, J.K.; An, W.H.; Kang, D.J. Convolutional neural network based surface inspection system for non-patterned welding defects. Int. J. Precis. Eng. Manuf. 2019, 20, 363–374.
23. Zhang, Z.; Wen, G.; Chen, S. Weld image deep learning-based on-line defects detection using convolutional neural networks for Al alloy in robotic arc welding. J. Manuf. Process. 2019, 45, 208–216.
24. Akcay, S.; Breckon, T. Towards automatic threat detection: A survey of advances of deep learning within X-ray security imaging. Pattern Recognit. 2022, 122, 108245.
25. Dong, X.; Taylor, C.J.; Cootes, T.F. Defect Classification and Detection Using a Multitask Deep One-Class CNN. IEEE Trans. Autom. Sci. Eng. 2021, 19, 1719–1730.
26. Chen, Y.; Wang, J.; Wang, G. Intelligent Welding Defect Detection Model on Improved R-CNN. IETE J. Res. 2022, 2022, 1–10.
27. Hatami, N.; Gavet, Y.; Debayle, J. Classification of time-series images using deep convolutional neural networks. In Proceedings of the Tenth International Conference on Machine Vision (ICMV 2017), Vienna, Austria, 13–15 November 2017.
28. Li, T.; Zhang, Y.; Wang, T. SRPM–CNN: A combined model based on slide relative position matrix and CNN for time series classification. Complex Intell. Syst. 2021, 7, 1619–1631.
29. Steger, C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 113–125.
30. Wang, Z.; Oates, T. Imaging Time-Series to Improve Classification and Imputation. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015.
31. Wilinski, A. Time series modeling and forecasting based on a Markov chain with changing transition matrices. Expert Syst. Appl. 2019, 133, 163–172.
32. El-Sawy, A.; El-Bakry, H.; Loey, M. CNN for handwritten Arabic digits recognition based on LeNet-5. In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics, Cairo, Egypt, 24–26 October 2016; Volume 533, pp. 566–575.
33. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 84–90.
34. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
35. Ding, X.; Zhang, X.; Ma, N.; Han, J.; Ding, G.; Sun, J. RepVGG: Making VGG-style ConvNets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; ISBN 978-1-6654-4509-2.
36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
37. Yuan, W.; Hu, F.; Lu, L. A new non-adaptive optimization method: Stochastic gradient descent with momentum and difference. Appl. Intell. 2022, 52, 3939–3953.
Figure 1. Three kinds of defects and a sample with no defects: (a) burr; (b) concavity; (c) porosity; (d) no defects.

Figure 2. Flow chart for the proposed method.

Figure 3. Structured-light image noise reduction process: (a) original structured-light image; (b) structured-light image after denoising.

Figure 4. Visualization of the trajectory data for the weld center.

Figure 5. Coding process for the weld center trajectory data using the GAF and MTF methods.

Figure 6. Structured-light images of the different weld defect classifications (1080 × 520) collected by the experimental equipment: (a) burr; (b) concavity; (c) porosity; (d) defect-free.

Figure 7. Four types of images encoded with the GASF, GADF and MTF methods (a–d). The time-series images represent the four defect classifications: burr, concavity, porosity and no defects.

Figure 8. Training curves for the four image types using LeNet-5, AlexNet, VGG-16 and ResNet-50: panels (a,c,e,g) show the test accuracy curves and panels (b,d,f,h) show the corresponding loss curves.
Table 1. The test accuracies (ACC) and accuracy-curve standard deviations (SD1) for the four image types with the four neural networks. Values marked with an asterisk (*) are the highest ACC in each column.

IMAGE TYPE | LeNet-5 ACC | LeNet-5 SD1 | AlexNet ACC | AlexNet SD1 | VGG-16 ACC | VGG-16 SD1 | ResNet-50 ACC | ResNet-50 SD1
BMP  | 86.50%   | 0.08885 | 91.88%   | 0.28751 | 93.48%   | 0.24441 | 94.94%   | 0.26628
GASF | 94.25% * | 0.04365 | 97.47%   | 0.16961 | 99.21%   | 0.20459 | 96.91%   | 0.25283
GADF | 91.25%   | 0.03221 | 98.47% * | 0.17545 | 99.60% * | 0.15893 | 98.96% * | 0.24685
MTF  | 87.75%   | 0.06080 | 96.96%   | 0.24168 | 98.23%   | 0.23169 | 97.83%   | 0.24363
Table 2. The network model inference time (TIME) and loss-curve standard deviation (SD2) for the four image types with the four neural networks.

IMAGE TYPE | LeNet-5 TIME | LeNet-5 SD2 | AlexNet TIME | AlexNet SD2 | VGG-16 TIME | VGG-16 SD2 | ResNet-50 TIME | ResNet-50 SD2
BMP  | 56 min | 0.17664 | 1 h 43 min | 0.17077 | 3 h 44 min | 0.21491 | 7 h 17 min | 0.18710
GASF | 51 min | 0.07115 | 1 h 48 min | 0.10542 | 3 h 35 min | 0.17919 | 7 h 20 min | 0.10612
GADF | 54 min | 0.07082 | 1 h 41 min | 0.09592 | 3 h 42 min | 0.16863 | 7 h 11 min | 0.10551
MTF  | 53 min | 0.16926 | 1 h 52 min | 0.13571 | 3 h 39 min | 0.18413 | 7 h 23 min | 0.14390
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
