Review

Deep Learning for Magnetic Flux Leakage Detection and Evaluation of Oil & Gas Pipelines: A Review

1 Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
2 School of Physical Science and Engineering, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Energies 2023, 16(3), 1372; https://doi.org/10.3390/en16031372
Submission received: 27 December 2022 / Revised: 24 January 2023 / Accepted: 26 January 2023 / Published: 29 January 2023
(This article belongs to the Special Issue Detection and Diagnosis in Oil and Gas Pipelines)

Abstract: Magnetic flux leakage (MFL) testing is the most widely used nondestructive testing technology in the safety inspection of oil and gas pipelines. The analysis of MFL test data is essential for pipeline safety assessments. In recent years, deep-learning technologies have been applied gradually to the data analysis of pipeline MFL testing, and remarkable results have been achieved. To the best of our knowledge, this review is a pioneering effort at comprehensively summarizing deep learning for MFL detection and evaluation of oil and gas pipelines. The majority of the publications surveyed are from the last five years. In this work, the applications of deep learning for pipeline MFL inspection are reviewed in detail from three aspects: pipeline anomaly recognition, defect quantification, and MFL data augmentation. The traditional analysis methods are compared with the deep-learning methods. Moreover, several open research challenges and future directions are discussed. To better apply deep learning to MFL testing and data analysis of oil and gas pipelines, it is noted that suitable interpretable deep-learning models and data-augmentation methods are important directions for future research.

1. Introduction

Pipelines are the most commonly used facilities for transporting oil and gas resources over long distances. Given increasing service age, geological changes, medium erosion, man-made damage, and other factors, the problem of oil and gas pipeline failures has attracted extensive attention globally. Regular detection and evaluation are effective means to ensure the integrity and reliability of oil and gas pipelines. There are many defect detection techniques, such as the electromagnetic ultrasonic guided wave (EUGW) test [1,2], the magnetic flux leakage (MFL) test [3,4], the eddy current (EC) test [5,6], microwave detection [7,8], and acoustic emission (AE) detection [9,10]. Among these, MFL testing is one of the most widely used pipeline detection methods owing to advantages such as high reliability, low requirements on the detection environment (no couplant is needed), and good detection performance for defects in high-permeability pipes.
By analyzing the detected MFL signal, the safety status of oil and gas pipelines can be evaluated. However, the transmission distances of oil and gas pipelines are typically hundreds or even thousands of kilometers long, and the amount of MFL data collected is very large. Manual analysis methods have many problems, such as low efficiency, high rates of misjudgment, and high labor costs. Thus, the current MFL data analysis based on human experts has limited efficiency and reliability, and traditional manual analysis methods can no longer meet the current stringent requirements for pipeline safety assessments. Therefore, analyzing and estimating MFL data automatically, efficiently, and reliably are important research directions for the MFL testing technology of oil and gas pipelines. Intelligence is one of the important goals pursued in the field of nondestructive testing (NDT). In traditional signal-analysis methods, the results often include the effects of human factors, which may lead to inaccurate or even invalid outcomes. Therefore, introducing artificial intelligence (AI)-based methods into MFL data analysis can largely avoid the effects of human factors while noticeably improving the accuracy and reliability of the analysis results.
As an important development direction of AI technologies, deep learning can significantly improve the efficiency of data analysis. The concept of deep learning was first proposed by Prof. Geoffrey Hinton of the University of Toronto in 2006 [11]. This "end-to-end" representation learning approach does not rely on handcrafted features. Its basic idea involves automatically extracting abstract features from target objects by constructing a multilayer network to achieve multilayer representations of the targets and obtain better feature robustness. In recent years, deep-learning methods have become a hot spot in the field of signal analysis. They have enabled many breakthroughs in object detection, image reconstruction, fault diagnosis, and data augmentation, with further extension to the field of NDT as well [12,13]. As a typical application of deep learning in NDT, Luo, Q. [14], Munir, N. [15], and Melville, J. [16] compared shallow machine learning with deep neural networks on classification tasks. They all reached the same conclusion: deep neural networks can automatically extract high-dimensional information without complex feature pre-processing methods. Applying deep learning to MFL detection and evaluation of oil and gas pipelines can, therefore, greatly reduce the workload of the labor force, improve the accuracy of pipeline fault identification, and help realize intelligent pipeline safety detection.
Presently, the application of deep learning to MFL detection in oil and gas pipelines is still in its infancy. The majority of the publications surveyed are from the last five years. Feng, J. (Northeastern University) [17], Yang, L. J. (Shenyang University of Technology) [18], Huang, S. L. (Tsinghua University) [19], and Ossai, C. I. (South Australia University) [20], among others, led the research and exploration. However, there is no relevant review to the best of our knowledge. Therefore, it is necessary to conduct a comprehensive survey to analyze the latest state-of-the-art research. As the first of its kind, this paper reviews the applications of deep learning to MFL detection in oil and gas pipelines based on three aspects: pipeline anomaly recognition, defect inversion, and MFL data augmentation. The contributions of this work are given as follows.
  • Comprehensively review the latest state-of-the-art research on deep-learning-based MFL detection and estimation in oil and gas pipelines.
  • Summarize and discuss the application of deep learning for pipeline anomaly recognition, object detection, defect quantification, and data augmentation, respectively.
  • Compare the deep-learning methods with the traditional methods and highlight the advantages and importance of deep learning applied in MFL detection and estimation.
  • Discuss existing challenges and explore the research and development prospects for the future.
The following is a breakdown of the survey. Section 2 presents the related work. Section 3 briefly introduces the MFL testing technique for oil and gas pipelines. In Section 4, the classic structure of the convolution neural network (CNN) is discussed. Section 5, Section 6 and Section 7 present the applications of deep learning in anomaly recognition, defect quantification, and MFL data augmentation of oil and gas pipelines, respectively. Section 8 gives an insight into the future directions. Finally, Section 9 draws the conclusions. Figure 1 and Table 1 show the organization and the list of acronyms of the paper, respectively.

2. Related Work

Related surveys have been conducted on pipeline safety assessments based on MFL inspection. Mohamed, S. H. et al. [21] surveyed artificial neural network-based safety assessment methods for oil and gas pipelines; the classification methods surveyed only cover metal defects rather than all types of pipe anomalies. Three years later, they published a more comprehensive survey based on computational intelligence techniques [22], stating that intelligent techniques such as data mining, neural networks, and hybrid neuro-fuzzy systems are promising alternatives for detecting, estimating, and classifying pipeline defects. Shi, Y. et al. [23] elaborated the principle, measuring methods, and quantitative analysis algorithms of MFL testing. Feng, Q. S. et al. [24] investigated the inspection principle, weld-defect identification, and quantification methods based on in-line inspection technologies, but they only focused on girth-weld defects in oil and gas pipelines. Vanaer, H. R. et al. [25] summarized pipeline corrosion growth rate models, including deterministic and probabilistic models, both of which belong to traditional prediction methods. Nasser, A. M. M. et al. [26] compared artificial intelligence approaches, mainly the artificial neural network (ANN) and fuzzy logic (FL), with traditional approaches for corrosion growth rate prediction. However, these two reviews only focused on defect prediction rather than defect identification and quantification. Peng, X. et al. [27] explored and discussed state-of-the-art MFL signal processing, defect characterization, data matching, and growth prediction methods from the data-analytic perspective. Liu, Y. M. et al. [28] surveyed the machine-learning approaches used for pipeline-condition assessments based on routine operation data, nondestructive testing data, and computer vision data. Support vector machines (SVMs), linear regression, Gaussian processes, and shallow neural networks were surveyed to detect, classify, locate, and quantify pipeline anomalies. However, only one reference [29] on CNN applications is included in that survey, and object detection and data-augmentation methods are not covered. Table 2 presents a comprehensive summary of the related surveys.
As Table 2 shows, none of the existing surveys cover pipeline object detection or MFL data-augmentation methods, and only a few investigations on deep learning for MFL signal estimation have been surveyed. Therefore, it is necessary to provide insight into the latest developments in deep learning for the MFL tasks of oil and gas pipelines, including pipe anomaly recognition, object detection, defect quantification, and MFL data augmentation. We also outline several open research challenges and trends in this research field to guide substantial future research.

3. Magnetic Flux Leakage Testing Technique of Oil and Gas Pipelines

3.1. Basic Principles of Magnetic Flux Leakage Testing

Magnetic flux leakage (MFL) testing is an effective technique for detecting anomalies in oil and gas pipelines. In MFL testing, the ferromagnetic pipe wall is magnetized to saturation by an external magnetic field, and magnetic sensors are placed near the surface of the pipe wall to detect the MFL signals. The magnetic field forms a closed loop through the magnet, magnetic yoke, and pipe wall. Due to the change in pipe wall thickness near defects or other anomalies, the magnetic field lines are distorted under the saturated magnetization state of the pipeline. As a result, part of the magnetic field leaks from the pipe wall, which forms the magnetic flux leakage. The magnetic sensors detect this MFL signal so that qualitative and quantitative analyses of the pipeline can be conducted. Figure 2 shows the basic schematic diagram of pipeline MFL testing.

3.2. Main Anomalies of Oil and Gas Pipelines

Pipe anomalies can usually be divided into three categories: defects, welds, and special components. Due to the morphological difference between these anomalies and the pipe wall, MFL signals are generated near them. Understanding these anomalies is therefore important for MFL analysis.
(1) Defects
Pipeline defects mainly include metal loss, corrosion, sags, weld defects, etc. Figure 3 shows these pipeline defects. Among them, metal loss is a common kind of defect, including scratches, pits, and so on, mainly due to man-made damage. Corrosion occurs due to outside environmental erosion (air or soil) and inside medium erosion (oil or gas); an increase in the corroded area poses a severe threat to the pipeline's safe operation. A sag is a cross-section change caused by permanent plastic deformation of the pipeline, which generally occurs during excavation in pipeline construction. Weld defects are caused by imperfections in welded joints, mainly including welding cracks, incomplete welding, slag inclusion, pores, etc., and tend to cause pipe-welding fractures.
(2) Welds
Welds are the most important anomalies of long-distance oil and gas pipelines and are used to connect the pipe segments. Pipeline welds commonly include girth, spiral, and straight welds. Figure 4 shows these pipeline welds. Among them, the girth weld is formed by connecting straight pipes through welding. Spiral welds and straight welds are produced by the spiral or straight welding of steel plates or strips, respectively. Since the weld material differs considerably from the pipe material and the wall thickness at the weld is usually not uniform, a leakage magnetic field will be generated around the weld.
(3) Special components
The special components of a pipeline usually include the flange, tee, small opening, valve, patch, elbow, etc. Figure 5 shows these special components. The flange and tee are both connections between the pipes, which will form a gap at the joint. The small opening, valve, patch, elbow, and other pipeline components will change the structural characteristics of the pipeline, thus forming a leakage magnetic field.
These main anomalies of oil and gas pipelines will cause the distortion of the leakage magnetic field in MFL testing. Among them, the defects are the most important anomalies used to assess the damaged state of pipelines. The welds and special components are common anomalies in pipelines and are mainly used for positioning and calibration. Therefore, after acquiring MFL detection signals, identifying and classifying these pipeline anomalies is of great significance.

3.3. Pipeline Safety Assessment Process

The pipeline safety assessment process, shown in Figure 6, is usually divided into the following five steps.
Step 1: MFL detection and signal acquisition. The structure of the MFL internal detector is shown in Figure 7. It consists of a drive section, detection section, storage section, battery section, and mileage wheel, and a universal joint connects each part. After entering the pipeline, the detector is driven forward by oil and gas pressure, magnetizing the pipeline wall and collecting the MFL signal simultaneously. The collected MFL data will be recorded into the storage device. After the detector comes out of the pipeline, the data will be exported from the detector to the host computer for further display and analysis.
Step 2: Signal preprocessing and display. Pre-processing for the MFL signal is necessary before comprehensive analysis. The first step is to calibrate the MFL data, considering the inconsistency of each channel sensor. Usually, the channel sensors will be tested before detection to obtain the calibration parameters. The second step is to perform the DC component filtering operation on the calibrated data. The preprocessed MFL signal can be displayed as a curve graph, gray-scale image, or pseudo-color image to present the pipeline MFL signal clearly and completely.
Step 3: Anomaly recognition. After signal pre-processing, it is necessary to effectively identify all kinds of pipeline anomalies from the MFL signal. Anomaly recognition mainly includes two parts: classification and location. The anomaly classification operation can distinguish defects from other pipeline anomalies and obtain the defects’ regions for the following defect inversion and damage assessment process. The accurate anomaly location can guide the subsequent pipeline excavation and repair work efficiently.
Step 4: Defect quantification. After extracting the MFL signal from the defect area, the defect can be quantitatively analyzed, obtaining its equivalent length, width, and depth. The depth of defect is the most important parameter for pipeline damage evaluation.
Step 5: Safety assessment and prediction. The safety assessment and prediction of the pipeline is the last step of the MFL inspection process. The defect inversion results, historical data, and pipeline operation parameters are combined to evaluate and predict the pipeline state, so as to grasp the pipeline operation state more comprehensively.
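As an illustration of the signal pre-processing described in Step 2, the following minimal NumPy sketch applies a per-channel gain/offset calibration followed by DC-component removal. The array shapes, the linear calibration model, and the function name are assumptions made for illustration only, not a description of a specific detector's processing chain.

```python
import numpy as np

def preprocess_mfl(raw, gain, offset):
    """Illustrative pre-processing of multichannel MFL data.

    raw    : (n_channels, n_samples) array of raw sensor readings
    gain   : (n_channels,) per-channel calibration gains
    offset : (n_channels,) per-channel calibration offsets
    """
    # Step 2a: per-channel calibration with parameters obtained
    # from a pre-detection sensor test (assumed linear model)
    calibrated = gain[:, None] * raw + offset[:, None]

    # Step 2b: remove the DC component of each channel so that
    # only the leakage-field variations remain
    filtered = calibrated - calibrated.mean(axis=1, keepdims=True)
    return filtered

# Example: 24 channels, 1000 axial samples of synthetic data
raw = np.random.randn(24, 1000) + 5.0      # baseline offset
gain = np.ones(24)
offset = np.zeros(24)
mfl = preprocess_mfl(raw, gain, offset)    # ready for gray-scale display
```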

4. Convolutional Neural Network

The convolutional neural network (CNN), one of the most popular deep-learning algorithms, performs well in image classification and recognition. Therefore, the application of CNN in pipeline MFL signal analysis has broad prospects. This section gives a brief introduction to CNN.
CNN is a feedforward deep neural network with the convolutional structure. It can automatically learn the signal features from images based on forward propagation and then change network parameters based on negative feedback. By training the network, the network model can be optimized automatically. CNN has been widely used in the fields of image recognition and classification [30,31], video recognition [32,33], natural language processing [34,35], audio retrieval, visual tracking [36,37], etc.
The basic structure of CNN is shown in Figure 8, which consists of the input layer, convolutional layer, pooling layer, and fully connected layer.
The convolutional layer is used to extract features from the input image. The convolution kernel slides along the horizontal or vertical direction of the input image; at each step, the kernel is convolved with the pixels at its current position until all pixels have been traversed. The number of convolution kernels in a CNN positively correlates with the number of extracted image features. The convolution operation can enhance the features of the signal and reduce the noise.
The pooling layer, also known as the subsampling layer, is used to reduce the dimension and compress the image information. Maximum pooling and average pooling are the most common operations, taking the maximum or average value in the receptive region as the pooling result. The pooling operation reduces the number of neurons in the network, thus reducing the number of model parameters and helping to avoid overfitting.
The fully connected layer is used to organize and synthesize the extracted features. It takes a multilayer perceptron as its basic structure, and each input neuron is connected to each output neuron with a certain weight and bias. The output is obtained by feeding the feature vector into the fully connected layer and mapping it through the activation function.
The activation function enables the network to obtain nonlinear modeling ability. Considering that the convolution and pooling operations in CNN are both linear operations, nonlinear activation functions are introduced to improve the accuracy of image recognition. Commonly used activation functions include the Sigmoid, Tanh, and ReLU functions. The Sigmoid function maps inputs to the range (0, 1), the Tanh function maps inputs to (−1, 1), and the ReLU function passes positive values unchanged while setting negative values to zero, which yields sparse activations.
After initializing the network parameters, the CNN model is trained in a supervised way. The forward-propagation process extracts and maps features through convolution, pooling, activation functions, and other operations. The back-propagation process updates the network parameters by comparing the predicted values with the actual values to realize feedback supervision. CNN can learn features in images independently, and its weight sharing reduces the amount of computation. The first CNN, the Time Delay Neural Network (TDNN), was proposed by Waibel, A. et al. in 1987. Nowadays, after decades of development, many classical network structures and algorithms have been derived from the CNN model. Some of these algorithms have been successfully applied to the anomaly classification and target detection of pipeline MFL signals and have achieved good results.
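To make the structure described above concrete, the following sketch builds a minimal CNN in PyTorch with convolution, ReLU activation, max pooling, and a fully connected classifier, and runs one supervised training step (forward pass, loss, back-propagation). The layer sizes, the 64 × 64 single-channel input, and the four-class output are illustrative assumptions, not a network from any of the surveyed papers.

```python
import torch
import torch.nn as nn

# Minimal CNN with the layer types described above: convolution,
# ReLU activation, max pooling, and a fully connected classifier.
class SimpleCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),                             # subsampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                                # forward propagation
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SimpleCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# One supervised training step on a synthetic batch of MFL image patches.
images = torch.randn(8, 1, 64, 64)      # e.g. 64x64 gray-scale patches
labels = torch.randint(0, 4, (8,))      # anomaly class labels
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()                         # back-propagation
optimizer.step()                        # parameter update
```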

5. Anomaly Recognition of Pipeline

In the early days, the identification of pipeline anomalies was carried out manually by specialized analysts, which placed high demands on the professionalism of the analysts. However, the amount of MFL signal data is enormous, and the low efficiency of manual identification is always accompanied by missed or even false identifications. With the development of computing and automation technology, threshold analysis methods [38] and template matching methods [39] gradually emerged. However, the threshold analysis method needs manually set thresholds for pipeline anomaly recognition; since pipelines differ in diameter, wall thickness, material, and other conditions, the corresponding threshold values are inconsistent, so a manually set threshold cannot cover all situations. The template matching method needs the MFL signals of pipeline anomalies to be obtained in advance for comparison. Templates for all kinds of pipeline anomalies need to be prepared beforehand; this approach belongs to the category of statistical methods and is usually only applicable to identifying defects with regular shapes. None of the above methods can meet today's high engineering demands. The introduction of artificial intelligence provides a new and efficient solution for pipeline anomaly recognition. This section focuses on the applications of deep learning, especially CNN, in pipeline anomaly classification and target detection.

5.1. Traditional Pipeline Anomaly Recognition Methods

The traditional pipeline anomaly recognition method contains three steps, as shown in Figure 9: region of interest (ROI) selection, manual anomaly feature extraction, and anomaly classification.
(1) ROI selection
ROI selection is used to select the anomaly regions of greatest concern from the massive MFL signal. Threshold selection is the simplest and most commonly used region selection method [40,41]: by setting an appropriate threshold value, the regions where the amplitude or gradient of the MFL signal exceeds the threshold can be obtained. The amplitude of the MFL signal for the same pipe anomaly may fluctuate under different detection conditions, such as different pipes or detectors; therefore, manual debugging and setting operations are necessary before each detection. In addition, boundary recognition is another region selection method, such as the Prewitt detection operator [42], the Canny edge detection operator [43], etc. These methods can identify the edges of various anomalies directly; however, they are complex and only suitable for local information recognition.
(2) Anomaly extraction
Anomaly feature extraction is the key step for MFL signal classification. Commonly used methods define feature terms manually [44], such as the signal peak and valley values, average signal intensity, signal rise rate, valley width, etc. These methods are affected by subjective factors and require manual extraction. In addition, principal component analysis (PCA), linear discriminant analysis (LDA), and independent component analysis (ICA) are also used for feature extraction. PCA [45] is a linear transformation method in image processing that converts a set of potentially correlated variables into linearly uncorrelated variables through an orthogonal transformation. LDA [46] projects the original features from a high-dimensional space to a low-dimensional one, separating the original data effectively. ICA [47] makes the higher-order statistics of the transformed data independent of each other and reflects the essential features of the data. These methods are all based on the covariance matrix, and the features are obtained directly from the MFL signal through explicitly defined transformations. Such explicit feature extraction is a critical characteristic distinguishing traditional methods from deep-learning-based ones, which learn feature representations automatically.
(3) Anomaly classification
The anomaly classification of oil and gas pipelines is used to distinguish defects and other pipe components based on the obtained features of the MFL signal. Traditional anomaly classification methods mainly include the support vector machine (SVM) [48], the extreme learning machine (ELM) [49], the deformable part model (DPM) [50], random forests [51,52], shallow neural network models [53], etc. However, these traditional methods rely on manual feature extraction or definition, which introduces noise and leads to low detection accuracy.
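The sketch below strings the three traditional steps together on synthetic data: threshold-based ROI selection, a few manually defined feature terms, PCA, and an SVM classifier (scikit-learn). The feature terms, threshold value, and random data are placeholders chosen for illustration; real analyses would use calibrated MFL windows and carefully tuned features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def select_roi(signal, threshold):
    """Threshold-based ROI selection: indices where the MFL amplitude
    exceeds a manually chosen threshold."""
    return np.flatnonzero(np.abs(signal) > threshold)

def handcrafted_features(window):
    """A few manually defined feature terms for one ROI window."""
    return np.array([
        window.max() - window.min(),     # peak-to-valley value
        window.mean(),                   # average signal intensity
        np.abs(np.diff(window)).max(),   # maximum rise rate
    ])

# Synthetic stand-in data: 200 ROI windows, 64 samples each, 3 classes
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 64))
labels = rng.integers(0, 3, size=200)

roi_idx = select_roi(windows[0], threshold=2.0)   # ROI selection example
X = np.array([handcrafted_features(w) for w in windows])

# Feature reduction (PCA) followed by SVM classification
clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(X[:5]))
```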

5.2. Pipeline Anomaly Recognition Methods Based on Deep Learning

Numerous classic CNN application cases show that CNN-based algorithms have far exceeded traditional methods in classification accuracy; in some circumstances, they have reached or even exceeded the level of human cognition. Therefore, CNN has become the preferred method for image classification. The commonly used CNN classification algorithms [54] include LeNet, AlexNet, VGG, GoogLeNet, etc. LeNet-5, proposed by LeCun et al. [55] in 1990, is a very efficient CNN for handwritten character recognition and promoted deep-learning development. This network is the starting point of CNN, and many subsequent networks are optimized based on this model. However, it was not until Krizhevsky, A. et al. [56] proposed AlexNet, a deep convolutional neural network model, that deep learning attracted widespread attention and gradually became a research hot spot. The VGG model is a deep CNN jointly designed by the Visual Geometry Group at the University of Oxford and Google DeepMind in 2014 [57]. It inherits part of the structural ideas of the LeNet and AlexNet models. VGG is suitable for large-scale image classification and recognition since small convolution kernels and multiple convolution sublayers are used in this model, and its expression ability is significantly improved. In the same year, GoogLeNet, designed by Google's research team, came out [58]; it adopted the Inception modular structure and replaced the fully connected layer with average pooling. Furthermore, two auxiliary classifiers were added to the network to mitigate vanishing gradients. As a result, this model performs better in training efficiency and recognition accuracy.
The above CNN algorithms have been partially applied to the anomaly classification of oil and gas pipelines and have achieved excellent results. Li, F. et al. [59] adopted a CNN to classify MFL response segments, including the defect, tee, and cathodic protection. Rectified linear units (ReLUs) are employed as the activation functions in the convolution layers to obtain a better sparse representation. Feng, J. et al. [29] embedded two local response normalization (LRN) [56] layers into the CNN structure and successfully identified injurious and noninjurious defects from MFL images. Liu, S. C. et al. [60] proposed an improved deep residual convolutional neural network to classify pipeline anomalies, including welds, tees, flanges, and corrosion. This deep residual network is based on the VGG16 convolutional neural network, and attention modules are introduced to reduce the influence of noise and compound features. Yang, L. J. et al. [18] adopted an MFL image CNN classification method based on sparse self-encoding, which classifies girth welds and spiral welds from 500 MFL signal images with a classification accuracy of 95.1%. Zhang, M. et al. [61] improved the CNN structure by adding a dropout layer, successfully diagnosing and identifying three types of defects in spiral welds and ring welds. Table 3 summarizes the application of deep learning for pipeline anomaly recognition. It can be seen that the accuracy of the deep-learning-based classification results is consistently higher than 0.95.
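As a simple illustration of how such CNN classifiers are typically reused, the snippet below adapts a VGG16 backbone from torchvision to a pipeline anomaly classification task by replacing its final fully connected layer. The five-class label set and the channel-repetition trick for gray-scale MFL images are assumptions for illustration, not the exact setups used in the cited works.

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 backbone (randomly initialized here; pretrained ImageNet weights
# could be loaded instead) with its final fully connected layer replaced
# to match assumed pipeline anomaly classes,
# e.g. defect / girth weld / spiral weld / flange / tee.
n_classes = 5
model = models.vgg16(weights=None)
model.classifier[6] = nn.Linear(4096, n_classes)

# MFL gray-scale images are single channel; a common trick is to repeat
# the channel three times to fit the 3-channel input expected by VGG.
x = torch.randn(4, 1, 224, 224).repeat(1, 3, 1, 1)
logits = model(x)            # shape: (4, n_classes)
```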

5.3. Pipeline Object Detection Methods Based on Deep Learning

Object detection is one of the most important research directions in the computer vision field. In the MFL testing of oil and gas pipelines, object detection automatically extracts the pipeline anomaly regions from the MFL signal and then identifies and classifies all the types of anomalies. Compared with the anomaly classification methods mentioned in Section 5.2, object detection also provides the location information of the anomalies. There are three major difficulties in the object detection of long-distance MFL testing of oil and gas pipelines. First, the amount of MFL testing data of oil and gas pipelines is usually huge, which makes efficient and automated analysis challenging. Second, the MFL testing data usually contain a wide variety of pipeline anomalies whose MFL features differ, so it is difficult to identify all these object features with a single detection method. Third, since the detection environment of the pipeline is very harsh, noise is inevitably introduced in the data acquisition process; some signals of pipe anomalies may be difficult to extract, or may even be ignored, because they are hard to distinguish from the background noise. Traditional object detection methods cannot solve the above three difficulties satisfactorily. Introducing deep-learning algorithms into the object detection of MFL testing of oil and gas pipelines provides a new and feasible solution.
Object detection algorithms based on deep learning are divided into two categories: two-stage and one-stage algorithms. Their different structures are shown in Figure 10. The two-stage algorithms, represented by RCNN [62], Fast RCNN [63], Faster RCNN [64], etc., first generate region proposals and then predict the classification and location of the target through the CNN. On the other hand, one-stage algorithms, represented by the YOLO series [65] (v1–v5) and the SSD series [66] (RSSD, DSSD, FSSD), directly extract features through convolutional neural networks to predict object classification and localization, implementing end-to-end detection.
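As a minimal illustration of the two-stage family, the snippet below instantiates torchvision's Faster R-CNN and runs it on a synthetic MFL pseudo-color image. The number of anomaly classes and the image size are assumptions, and the model here is a generic stand-in rather than any of the MFL-specific detectors discussed next.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# A two-stage detector: a region proposal network generates candidate
# boxes, then a CNN head classifies and refines them.
n_classes = 6   # background + five anomaly types (an assumption)
model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                num_classes=n_classes)
model.eval()

# Input: a list of image tensors (C, H, W); here a single synthetic
# 3-channel MFL pseudo-color image.
image = torch.rand(3, 256, 512)
with torch.no_grad():
    predictions = model([image])

# Each prediction holds bounding boxes, class labels, and scores
# (empty here because the model is untrained).
boxes = predictions[0]["boxes"]
labels = predictions[0]["labels"]
scores = predictions[0]["scores"]
```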
Currently, deep-learning-based object detection algorithms have gradually been applied to the anomaly recognition of MFL signals of oil and gas pipelines. Wang, G. Q. et al. [67] proposed an improved YOLOv5 algorithm by introducing the Distance-IoU loss function and improving the non-maximum suppression algorithm, successfully identifying and locating different types of defects from pipeline MFL signals. Yang, L. et al. [68] introduced dilated convolution and an attention residual module into the SSD algorithm, which achieves good identification results for defects, girth welds, and spiral welds in pipeline MFL images. Jiang, L. et al. [69] proposed a cycle-supervised CNN (CsCNN) to realize unsupervised pipeline anomaly detection without any labels or prior information. The CsCNN model includes multiple CNNs and a cycle-supervised unit, achieving unsupervised anomaly detection for pipeline MFL testing for the first time. The precision of the anomaly detection is 0.935, and the experiment results showed good object detection performance for a large number of anomalies. Besides CNN, a two-stage heterogeneous signal mutual supervision network (THMS-Net) was proposed by Jiang, L. et al. [70] to distinguish weak defect signals from non-defect signals. This THMS-Net consists of two deep networks that extract axial and radial MFL features and supervise each other to enhance discrimination. Table 4 summarizes the application of deep learning for pipeline object detection.
In general, the application of deep-learning algorithms to the anomaly recognition of pipelines is still in its infancy. However, it has already shown superior performance compared with the traditional methods. Moreover, unsupervised detection and the shift from two-stage to one-stage algorithms are clear development trends. In recent years, the development of deep learning based on the CNN model has been extremely fast. Therefore, it is foreseeable that the development of deep learning will inevitably boost the progress of MFL testing technology for oil and gas pipelines.

6. Defect Quantification of Pipeline

After locating and classifying the pipeline anomalies from the detected MFL signals, the critical anomalies that are dangerous to the pipeline’s safe operation can be accurately extracted: defects. In the next step, a quantitative analysis of defects is very important. The size of the defect, especially the defect depth, is an important index to evaluate the damage degree of the pipeline.

6.1. Traditional Pipeline Defect Quantification Methods

Traditional pipeline defect quantification methods [71,72] can be divided into direct and indirect methods according to whether a closed-loop iterative structure is introduced in the process of MFL signal analysis. The direct methods quantify the defect size by obtaining the quantitative relationship between the geometric size of the defect and the characteristics of the MFL signal through statistical analysis; the multiple nonlinear regression method is mainly used. However, these methods usually have low calculation accuracy and depend excessively on empirical data. The indirect methods, also known as the closed-loop pseudo-inverse methods, combine a forward model with a closed-loop iterative structure to iteratively update the defect dimensions toward the optimum. These methods rely heavily on the accuracy of the forward model and easily fall into a local optimum. Machine learning is also commonly used for defect quantification. Its procedure mainly includes four steps: feature extraction, feature selection, pattern recognition, and regression [73]. The machine-learning techniques mainly include shallow neural network and support vector machine methods. Compared with the traditional direct and indirect methods, their quantitative performance is significantly improved. The four commonly used traditional defect quantification methods are briefly introduced as follows.
(1) Multiple nonlinear regression methods
The multiple nonlinear regression methods establish a nonlinear relationship between the equivalent size of the defect and the characteristics of the MFL signal based on regression [71,72,74]. These methods have good quantification performance for regular-shape defects, such as groove, cylindrical, or pit defects. For example, the length of a groove defect can be regression-evaluated based on parameters such as the peak-to-peak spacing, valley-to-valley spacing, and slope-peak spacing of the MFL signal, and its width can be regression-evaluated according to the number of fluctuating channels, the peak value of the MFL signal, etc. Generally, the length and width of the defect can be effectively extracted from the explicit characteristics of the MFL signal. However, the depth of the defect, which has a complex implicit relationship with the MFL signal, is difficult to quantify accurately. Usually, only artificial defects have such regular shapes, so these methods do not perform well in quantifying irregular natural defects. In addition, these methods require the MFL signal features of the defect to be defined and extracted manually.
(2) Closed-loop pseudo-inverse methods
The closed-loop pseudo-inverse methods contain a closed-loop iteration structure. They are called pseudo-inverse because the inversion problem is solved by building a forward calculation model in the quantification process. By comparing the MFL signals calculated by the forward model with the detected MFL signal, the defect size parameters fed into the forward model are optimized using an optimization algorithm until the calculated signal is sufficiently similar to the detected signal (a minimal code sketch of this iterative loop is given at the end of this subsection). The forward model is thus an essential part of the quantification, and its accuracy determines the quantitative performance of the whole closed-loop pseudo-inverse method. Commonly used forward models include the magnetic dipole model (MDM) [75], the finite element model (FEM) [76], and the neural network model [77]. In addition, the genetic algorithm (GA) [78], the particle swarm optimization (PSO) algorithm [79], and the simulated annealing method [80] are usually used to optimize and update the defect quantification parameters. Due to the closed-loop iterative structure, the closed-loop pseudo-inverse methods are less dependent on samples than the direct methods. However, they place high requirements on the accuracy of the forward model; otherwise, the model is difficult to converge.
(3) Shallow neural network methods
The shallow neural network, also known as the artificial neural network, is a method of abstracting and simulating the human brain. It has the advantages of distributed parallel processing, nonlinear mapping, adaptive learning, high robust fault tolerance, etc. So, it is widely used in pattern recognition, optimal control, information processing, and fault diagnosis. At present, the shallow neural networks used for defect quantification mainly include the BP neural network [81], the RBF neural network [82,83], the wavelet neural network [84], and so on. However, the number of hidden layers in a shallow neural network is limited. Thus, it is challenging to learn deeper information of a defect MFL signal. In contrast, the deep neural network has better information representation ability.
(4) Support vector machine methods
The support vector machine (SVM) is a machine-learning method based on statistical learning theory. The advantage of the SVM is that it can improve the generalization ability of the learning machine based on the principle of structural risk minimization; in other words, a small error on limited training samples helps guarantee that the error on an independent test set is still small [85,86]. Since SVM training is a convex optimization problem, it can effectively avoid local optima. These advantages cannot be achieved by shallow neural networks. However, SVM methods still need the defect MFL signal features to be defined and extracted manually. Therefore, they are still far away from real intelligence.
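The sketch below illustrates the closed-loop pseudo-inverse idea referred to in method (2): a toy forward model maps a (length, width, depth) vector to an MFL profile, and an optimizer iteratively updates the size estimate until the computed signal matches the detected one. The Gaussian-shaped forward model is a placeholder (not a real MDM or FEM), and the Nelder-Mead optimizer stands in for the GA, PSO, or simulated-annealing updates used in practice.

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(size, x):
    """Toy placeholder forward model mapping a defect size vector
    (length, width, depth) to an MFL amplitude profile."""
    length, width, depth = size
    return depth * np.exp(-((x / (length + 1e-6)) ** 2)) * (1 + 0.1 * width)

x = np.linspace(-5, 5, 200)
measured = forward_model(np.array([2.0, 1.0, 0.5]), x)   # "detected" signal

def objective(size):
    # Mismatch between the forward-model prediction and the detected signal
    return np.sum((forward_model(size, x) - measured) ** 2)

# Closed-loop iteration: update the size estimate until the computed
# signal is close enough to the detected one.
result = minimize(objective, x0=np.array([1.0, 1.0, 0.1]), method="Nelder-Mead")
estimated_length, estimated_width, estimated_depth = result.x
```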

6.2. Pipeline Defect Quantification Methods Based on Deep Learning

Pipeline anomaly recognition belongs to the classification problem in deep learning, while pipeline defect quantification belongs to the regression problem. The outputs of a classification problem are discrete class labels, such as the pipeline anomaly types. A regression problem focuses on analyzing quantitative relationships, and its outputs are real values. The mean absolute error (MAE), mean squared error (MSE), and root mean square error (RMSE) are usually used to evaluate regression predictions, whereas the logistic/cross-entropy loss is typically used for classification.
At present, only a few papers report deep-learning-based pipeline defect quantification methods. As first attempts in pipeline MFL detection, they have achieved good quantification results. Wang, H. A. et al. [87] established a defect quantification model for MFL signals consisting of CNN and regression modules. The three components of the MFL signal are fed into the CNN module to extract features automatically, and a joint loss function over defect length, width, and depth is designed in the regression module to quantify the defect size. Lu, S. X. et al. [88] proposed a novel visual transformation CNN (VT-CNN) to estimate the defect size of oil and gas pipelines. This method adds a visual transformation layer to the CNN to highlight the feature information of defects, which can transform the original MFL measurements into a 3D image at any angle of view. In practical applications, three VT-CNNs are adopted to quantify the defects' length, width, and depth. Zhang, M. et al. [89] put forward a visual deep transfer learning neural network (VDTL) to predict the defect size. A multi-kernel maximum mean discrepancy (MK-MMD) transfer learning algorithm is introduced into the AlexNet network to improve the accuracy; the quantification errors for length and depth are only 0.67 mm and 0.97%, respectively. Wu, Z. N. et al. [90] proposed a deep reinforcement learning (RL)-based reconstruction solution to estimate the defect depth in MFL inspection. They embedded the classic iteration-based method into the learning process of the proposed deep RL-based algorithm and learned the policy from the data generated during the iteration. The experiment results showed that the peak depth error (PDE) is less than 2.94%. Figure 11 shows the similarities and differences between the iteration-based method and the RL-based method. In addition, Xiong, J. Y. et al. [91] used an improved sparrow search algorithm (ISSA) to optimize the input-layer weights and bias parameters of a deep extreme learning machine (DELM) and further constructed a comprehensive ISSA-DELM deep-learning network model, which is shown to improve the quantitative performance for natural gas pipeline defects.
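To illustrate the regression formulation, the following sketch defines a small CNN whose output layer predicts three real values (length, width, depth), trains it with a joint mean-squared-error loss, and computes the MAE and RMSE metrics mentioned above. The architecture and data shapes are assumptions for illustration and do not reproduce the models in [87,88,89].

```python
import torch
import torch.nn as nn

# Regression network: a convolutional feature extractor followed by an
# output layer that predicts three real values (length, width, depth),
# trained with a joint mean-squared-error loss.
class DefectSizeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Linear(32 * 16 * 16, 3)   # (length, width, depth)

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

model = DefectSizeNet()
criterion = nn.MSELoss()                   # joint loss over the 3 outputs

images = torch.randn(8, 3, 64, 64)         # 3 MFL components as channels
sizes = torch.rand(8, 3)                   # ground-truth sizes (normalized)
pred = model(images)
loss = criterion(pred, sizes)
loss.backward()

# Regression metrics: MAE = mean|e|, RMSE = sqrt(mean(e^2))
with torch.no_grad():
    mae = (pred - sizes).abs().mean()
    rmse = ((pred - sizes) ** 2).mean().sqrt()
```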

6.3. Applications of Interpretable Deep Learning Model

Complex deep-learning models encode large amounts of knowledge and can support large-scale, high-dimensional, and nonlinear data analysis. At the same time, however, their internal representation learning and decision-making processes are increasingly uninterpretable, and their execution logic is difficult for human beings to understand. Such models are called black-box models, which hinders the intelligence and automation of their security applications. Therefore, the interpretability of deep learning has gradually attracted more attention. The existing interpretability methods can be divided into three stages according to the model training process: before, during, and after model training. (1) Before model training, the data can be visualized and statistically analyzed to realize interpretability. (2) During model training, interpretability is achieved by constructing interpretable components in the training process; the explainable boosting machine (EBM) [92], the explainable neural network based on the generalized additive model (GAMxNN) [93], and GAMI-Net [94] are typical intrinsically interpretable methods. (3) After model training, the decision basis of the black-box model is analyzed and explained using sensitivity analysis or gradient-based interpretation methods; local interpretable model-agnostic explanations (LIME) [95], Shapley additive explanations (SHAP) [96], and the accumulated local effects (ALE) plot [97] are commonly used post hoc interpretation methods. The interpretable methods applied before and after model training focus on interpreting the predicted behavior of the model, whereas the interpretable methods applied during model training give more attention to the structure of the training model itself, which may improve the model's prediction performance.
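As an example of a post hoc interpretation method, the sketch below applies SHAP's KernelExplainer to a stand-in defect-depth regressor trained on handcrafted MFL features. The shap package is assumed to be installed, and the gradient-boosting model and synthetic features are placeholders, not models from the surveyed literature.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Train a stand-in defect-depth regressor on synthetic handcrafted
# MFL features (e.g. peak value, valley width, rise rate, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.1, size=300)
model = GradientBoostingRegressor().fit(X, y)

# Model-agnostic post hoc explanation: KernelExplainer approximates
# Shapley values, i.e. each feature's contribution to a prediction.
background = X[:50]
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])   # (5, 4) contribution matrix
print(shap_values)
```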
The application of interpretable deep-learning models is very useful for pipeline defect quantification. Sun, H. Y. et al. [98] made the first attempt at the interpretability of deep-learning models in the field of MFL testing of oil and gas pipelines. They proposed a physics-informed doubly fed cross-residual network (DfedResNet), suitable for deep-learning-based MFL defect detection, to estimate the defect size. During neural network training, the physics-based MFL defect quantification theory is integrated into the loss functions. This interpretable DfedResNet model can significantly reduce the quantification error; the experimental results showed that the defect length and width quantification errors are less than 0.3 mm, while the defect depth quantification error is no more than 0.4%. Feng, J. et al. [17] established a domain-knowledge-based deep-broad learning framework (DK-DBLF). This framework consists of a task-specific feature extractor, which obtains abstract features automatically, and a flexible fault recognizer, which improves the flexibility of the framework. A bridge-label-based strategy is designed to integrate domain knowledge into the learning process. The proposed DK-DBLF was tested on pipeline defect datasets to estimate the pipeline defect size and showed good estimation performance.
Table 5 summarizes the application of deep learning for pipeline defect quantification. It can be seen that the interpretable deep-learning model shows a better performance in defect size estimation, especially for defect depth. In general, the application of deep-learning algorithms in the defect quantification of the pipeline is worth further study. Although only a few studies have been reported, the quantification accuracy of defects has been greatly improved compared with the traditional methods. In addition, due to its advantages on small datasets, the interpretable deep neural network will significantly contribute to the defect quantification methods for pipeline MFL testing.

7. MFL Data Augmentation

The preceding sections summarize the applications of deep learning in anomaly recognition and defect quantification for pipeline MFL detection. Deep-learning models need large, annotated datasets for training; otherwise, they are prone to overfitting. However, implementing an MFL inspection project for oil and gas pipelines is time-consuming and costly, and the finite element simulation method cannot comprehensively reproduce the actual MFL signal. Thus, it is challenging to obtain large, suitable MFL datasets. Therefore, it is necessary to adopt data-augmentation methods to expand the datasets and provide essential data support for deep learning.

7.1. Traditional Pipeline MFL Data Augmentation Methods

The traditional pipeline MFL data-augmentation methods perform geometric transformations on existing training samples, including translation, rotation, flipping, scaling, elastic deformation, and shear operations. For MFL testing data, the first four geometric operations are more commonly used (Figure 12), while elastic deformation and shear are not used because they may change the fundamental properties of the data. After imaging the multichannel MFL data as a gray-scale or pseudo-color diagram, the following transformations are used to achieve data augmentation: (1) Translation: translate the original MFL image along a given vector; (2) Rotation: rotate the original MFL image clockwise by 90°, 180°, or 270°; (3) Flipping: flip the original MFL image along the vertical or horizontal central axis; (4) Scaling: scale the original MFL image by a factor of 0.5 to 2.
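The listed geometric operations can be reproduced with a few NumPy/SciPy calls, as in the sketch below; the 64 × 64 random array stands in for a gray-scale MFL image.

```python
import numpy as np
from scipy.ndimage import zoom

def augment(mfl_image):
    """Traditional geometric augmentation of one gray-scale MFL image."""
    augmented = []
    augmented.append(np.roll(mfl_image, shift=10, axis=1))   # translation
    for k in (1, 2, 3):
        augmented.append(np.rot90(mfl_image, -k))            # clockwise 90/180/270 deg
    augmented.append(np.flip(mfl_image, axis=0))             # vertical flip
    augmented.append(np.flip(mfl_image, axis=1))             # horizontal flip
    augmented.append(zoom(mfl_image, 0.5))                   # scaling x0.5
    augmented.append(zoom(mfl_image, 2.0))                   # scaling x2
    return augmented

samples = augment(np.random.rand(64, 64))
```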
Traditional data-augmentation methods can improve the accuracy of network training within a specific range. However, there is a training saturation problem [99]: the training effect cannot be further improved once the data are expanded beyond a certain extent. This is mainly because the traditional methods do not add any new information to the MFL data.
For the MFL data of oil and gas pipelines, the finite element and magnetic dipole models can also be used to expand the dataset. However, there are differences between the simulated data and the real detected data, which make it difficult to reproduce the real detection data accurately.

7.2. Pipeline MFL Data Augmentation Methods Based on Deep Learning

Deep-learning methods can produce new training samples that have never been seen before based on existing training samples. The generative adversarial network (GAN) is a deep-learning network structure for image generation proposed by Goodfellow et al. [100] in 2014. The network consists of a generator (G network) and a discriminator (D network); Figure 13 shows the basic framework of the GAN. The basic idea of the GAN is derived from the adversarial game. The generator produces fake images from a random noise vector, while the discriminator judges the authenticity of images; the results are fed back to improve the generator. It has been demonstrated that, although the images generated by the GAN method may have a certain degree of distortion, using such synthetic samples to expand the dataset helps to reduce overfitting and improve the accuracy and generalization ability of the CNN.
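The adversarial game can be sketched in a few lines of PyTorch: a generator maps noise to fake (flattened) images, a discriminator scores real versus fake, and the two are updated in turn. The fully connected architecture and image size below are illustrative assumptions; the cited works use convolutional GAN variants such as DCGAN and WGAN.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32

# Generator: random noise vector -> fake MFL image (flattened here).
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: image -> probability of being real.
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(16, img_dim)        # batch of real MFL images (stand-in)
z = torch.randn(16, latent_dim)       # random noise vectors
fake = G(z)

# Discriminator step: judge real as 1, fake as 0.
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
opt_D.zero_grad()
d_loss.backward()
opt_D.step()

# Generator step: try to fool the discriminator into judging fakes as real.
g_loss = bce(D(fake), torch.ones(16, 1))
opt_G.zero_grad()
g_loss.backward()
opt_G.step()
```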
The advantage of the GAN is that it does not require manually manufactured data and does not rely on user-defined rules; it models all feature sources in the dataset by learning the distribution of the training samples. Therefore, the GAN has attracted extensive attention from researchers since it was proposed and has been continuously improved and optimized. Jain, S. et al. [101] trained three GAN architectures to synthetically augment data for six typical types of pipeline surface defects: the deep convolutional GAN (DCGAN) [102], the auxiliary classifier GAN (AC-GAN) [103], and the information-theoretic GAN (InfoGAN) [104]. A CNN classification model was then trained to compare the synthetic augmentation method with the traditional one. The comparison results show that the sensitivity and specificity of the synthetically augmented CNN are 95.33% and 99.16%, respectively, which is much better than the traditionally augmented CNN. Ren, Y. F. et al. [105] combined the conditional variational autoencoder (CVAE) and the GAN to reconstruct data and deal with the missing-MFL-sample problem. In this method, the conditional variable is added to the structures of the encoder, generator, and discriminator networks. The proposed CVAE-GAN can also generate diverse MFL defect signals efficiently. Peng, L. S. et al. [19] established a Wasserstein generative adversarial network (WGAN) to generate pipeline detection signals. The network was trained on WGAN-enhanced datasets for 3000 epochs, and the comparative tests showed that the signals generated by the WGAN exhibited higher quality and the same distribution as the original signals. Table 6 summarizes the existing applications of data augmentation for defect analysis in oil and gas pipelines.
In general, it is an important and inevitable development trend for applying the GAN technique to the data augmentation of MFL data in oil and gas pipelines. An efficient and large-scale data-augmentation method is essential for pipeline anomaly recognition and defect quantification based on deep learning.

8. Open Research Challenges and Future Directions

Although deep-learning technologies have shown promising results for MFL detection and evaluation tasks, challenges remain. These include, but are not limited to, high-precision MFL signal acquisition, high computational costs, large training dataset requirements, and interpretable training models. The purpose of this section is to explain the challenges and future directions of MFL detection and evaluation in oil and gas pipelines.

8.1. High-Precision MFL Signal Acquisition

Accurate and effective data labels are the basis of deep-learning training. However, the harsh detection environment inside oil and gas pipelines challenges the high-precision acquisition of MFL signals. The MFL inner detectors are evolving towards higher resolution and higher precision. Improving the sampling accuracy, reducing the sampling interval, and increasing the sampling information are available directions, and they are also an important basis for developing MFL detection and evaluation technology. At present, the sensitive magneto-optical imaging (MOI) technique has been applied to provide higher spatial resolution [106,107]. However, the amount of MOI data is very large and redundant. Therefore, it is necessary to investigate how to further eliminate redundant data and retain useful information.

8.2. Low Computational Costs

Deep-learning models involve a large number of parameters, which imposes high computational costs; higher computational costs mean longer training times and more expensive hardware. For example, the VDTL deep-learning model proposed in [89] takes 28 h to train. However, the MFL detection and evaluation of oil and gas pipelines require strong timeliness and economy. Thus, one of the biggest challenges for researchers is to reduce the computational costs of MFL signal analysis and pipeline evaluation while maintaining high accuracy. Parsimonious modeling and efficient computation methods are promising directions for reducing computational costs. Despite the many efforts that have been made to save computing time, such as adding normalization layers to the training model [29,59] and reducing the number of calculation parameters [60], this challenge is still present.

8.3. Large Training Datasets

Good deep-learning performance requires large training datasets. However, pipeline MFL inspection projects require large amounts of human resources and time to obtain the sample data, so it is a challenge to obtain adequate training datasets for MFL testing of oil and gas pipelines. It is therefore necessary to further study and develop appropriate data-augmentation methods to increase the amount of MFL data and expand the number of samples available for deep learning. Although the published methods [101,105] have achieved good results for MFL data augmentation, they still require over 1000 real MFL image samples, which is too many for practical MFL inspection. Thus, more efficient data-augmentation strategies are needed.

8.4. Interpretable Training Model

The black-box properties of complex deep-learning models seriously affect their development in security applications. An interpretable training model makes it possible to visualize the relative contribution of the input features to the final prediction results. In addition, interpretable training models require fewer training samples, which is quite suitable for the small datasets in oil and gas pipeline detection. At present, only a few papers have investigated interpretable models for MFL analysis, and the existing interpretable training models [17,98] are only suitable for defect quantification; they should be extended to more MFL detection and estimation applications. An interpretable training strategy will be a promising development direction for deep learning applied to MFL detection and evaluation in oil and gas pipelines.

9. Conclusions

This work reviews the applications of deep learning to MFL inspection of oil and gas pipelines in detail. Since the first application of convolutional neural networks to pipeline MFL data analysis in 2017 [59], deep-learning technologies have gradually been applied to oil and gas pipeline MFL inspection. The applications have achieved remarkable results thus far. Over the past five years, deep-learning technologies have been expanded and applied to anomaly classification, defect quantification, and MFL data augmentation of oil and gas pipelines. At the same time, MFL data detection and analysis studies have provided many new solutions for deep-learning theory and technology applications, from physical interpretability to visual transformation analysis, etc. In general, deep learning has not been widely studied for MFL testing in oil and gas pipelines, and there are not many existing studies on this topic. This is mainly limited by the small labeled datasets available.
To better apply deep learning to MFL testing in oil and gas pipelines, future developments and breakthroughs will be mainly achieved from the perspective of several factors. The first involves applying interpretable deep-learning technologies to study the network learning structures suitable for small datasets. The second involves studying appropriate methods for increasing the amount of MFL data to expand the number of samples required for deep learning. High-precision MFL signal acquisition and low computational costs are also goals that need to be pursued. We expect that further developments in deep learning will bring new changes to pipeline MFL detection and evaluation technologies. It is also expected that researchers working on pipeline MFL data analysis can inject fresh possibilities into deep-learning algorithms. This review is expected to encourage further research in the MFL detection and estimation field based on deep learning.

Author Contributions

Conceptualization, L.P.; methodology, H.S.; validation, S.H.; formal analysis, H.S.; investigation, L.P.; resources, S.H.; data curation, H.S.; writing—original draft preparation, L.P.; writing—review and editing, S.H.; visualization, H.S.; supervision, L.P. and S.L.; project administration, S.H.; funding acquisition, L.P. and S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) (Nos. 52007088 and 52077110) and by a project supported by the State Key Laboratory of Power System and Generation Equipment (SKLD22M02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Murav’eva, O.V.; Len’kov, S.V.; Murashov, S.A. Torsional waves excited by electromagnetic-acoustic transducers during guided-wave acoustic inspection of pipelines. Acoust. Phys. 2016, 62, 117–124.
2. Herdovics, B.; Cegla, F. Long-term stability of guided wave electromagnetic acoustic transducer systems. Struct. Health Monit. 2020, 19, 3–11.
3. Ravan, M.; Amineh, R.K.; Koziel, S.; Nikolova, N.K.; Reilly, J.P. Sizing of multiple cracks using magnetic flux leakage measurements. IET Sci. Meas. Technol. 2016, 4, 1–11.
4. Peng, L.S.; Huang, S.L.; Wang, S.; Zhao, W. A simplified lift-off correction for three components of the magnetic flux leakage signal for defect detection. IEEE Trans. Instrum. Meas. 2021, 70, 6005109.
5. Wasif, R.; Tokhi, M.O.; Shirkoohi, G.; Marks, R.; Rudlin, J. Development of permanently installed magnetic eddy current sensor for corrosion monitoring of ferromagnetic pipelines. Appl. Sci. 2022, 12, 1037.
6. Duan, J.Y.; Song, K.; Xie, W.Y.; Jia, G.M.; Shen, C. Application of alternating current stress measurement method in the stress detection of long-distance oil pipelines. Energies 2022, 15, 4965.
7. Ma, H.M.; Xu, Y.; Yuan, C.; Yang, Y.G.; Wang, J.H.; Zhang, T.; Li, T. Measurement characteristics of a novel microwave sensor based on orthogonal electrodes method. IEEE Sens. J. 2022, 22, 6553–6565.
8. Cataldo, A.; De Benedetto, E.; Cannazza, G.; Leucci, G.; De Giorgi, L.; Demitri, C. Enhancement of leak detection in pipelines through time-domain reflectometry/ground penetrating radar measurements. IET Sci. Meas. Technol. 2017, 11, 696–702.
9. Quy, T.B.; Kim, J.M. Leak detection in a gas pipeline using spectral portrait of acoustic emission signals. Measurement 2020, 152, 107403.
10. Xu, C.; Du, S.; Gong, P.; Li, Z.; Chen, G.; Song, G. An improved method for pipeline leakage localization with a single sensor based on modal acoustic emission and empirical mode decomposition with Hilbert transform. IEEE Sens. J. 2020, 20, 5480–5491.
11. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507.
12. Ye, J.; Ito, S.; Toyama, N. Computerized ultrasonic imaging inspection: From shallow to deep learning. Sensors 2018, 18, 3820.
13. Liu, H.; Zhang, Y. Deep learning based crack damage detection technique for thin plate structures using guided Lamb wave signals. Smart Mater. Struct. 2020, 29, 15032.
14. Luo, Q.; Gao, B.; Woo, W.; Yang, Y. Temporal and spatial deep learning network for infrared thermal defect detection. NDT E Int. 2019, 108, 102164.
15. Munir, N.; Park, J.; Kim, H.J.; Song, S.J.; Kang, S.S. Performance enhancement of convolutional neural network for ultrasonic flaw classification by adopting autoencoder. NDT E Int. 2020, 111, 102218.
16. Melville, J.; Alguri, K.S.; Deemer, C.; Harley, J.B. Structural damage detection using deep learning of ultrasonic guided waves. Am. Inst. Phys. Conf. Ser. 2018, 1949, 230004.
17. Feng, J.; Yao, Y.; Lu, S.X.; Liu, Y. Domain knowledge-based deep-broad learning framework for fault diagnosis. IEEE Trans. Ind. Electron. 2021, 68, 3454–3464.
18. Yang, L.J.; Wang, Z.J.; Gao, S.W.; Shi, M.; Liu, B.L. Magnetic flux leakage image classification method for pipeline weld based on optimized convolution kernel. Neurocomputing 2019, 365, 229–238.
19. Peng, L.S.; Li, S.S.; Sun, H.Y.; Huang, S.L. A pipe ultrasonic guided wave signal generation network suitable for data enhancement in deep learning: US-WGAN. Energies 2022, 15, 6695.
20. Ossai, C.I. A data-driven machine learning approach for corrosion risk assessment—A comparative study. Big Data Cogn. Comput. 2019, 3, 28.
21. Layouni, M.; Tahar, S.; Hamdi, M.S. A survey on the application of neural networks in the safety assessment of oil and gas pipelines. In Proceedings of the 2014 IEEE Symposium on Computational Intelligence for Engineering Solutions (CIES), Orlando, FL, USA, 9–12 December 2014; pp. 95–102.
22. Mohamed, A.; Hamdi, M.S.; Tahar, S. Using computational intelligence for the safety assessment of oil and gas pipelines: A survey. In Data Science and Big Data: An Environment of Computational Intelligence; Pedrycz, W., Chen, S.M., Eds.; Studies in Big Data; Springer: Cham, Switzerland, 2017; Volume 24, pp. 187–207.
23. Shi, Y.; Zhang, C.; Li, R.; Cai, M.; Jia, G. Theory and application of magnetic flux leakage pipeline detection. Sensors 2015, 15, 31036–31055.
24. Feng, Q.S.; Li, R.; Nie, B.H.; Liu, S.C.; Zhao, L.Y.; Zhang, H. Literature review: Theory and application of in-line inspection technologies for oil and gas pipeline girth weld defection. Sensors 2017, 17, 50.
25. Vanaei, H.R.; Eslami, A.; Egbewande, A. A review on pipeline corrosion, in-line inspection (ILI), and corrosion growth rate models. Int. J. Press. Vessel. Pip. 2017, 149, 43–54.
26. Nasser, A.M.M.; Montasir, O.A.; Zawawi, N.A.; Alsubal, S. A review on oil and gas pipelines corrosion growth rate modelling incorporating artificial intelligence approach. IOP Conf. Ser. Earth Environ. Sci. 2020, 476, 012024.
27. Peng, X.; Anyaoha, U.; Liu, Z.; Tsukada, K. Analysis of magnetic-flux leakage (MFL) data for pipeline corrosion assessment. IEEE Trans. Magn. 2020, 56, 6200315.
28. Liu, Y.M.; Bao, Y. Review on automated condition assessment of pipelines with machine learning. Adv. Eng. Inform. 2022, 53, 11687.
29. Feng, J.; Li, F.M.; Lu, S.X.; Liu, J.H.; Ma, D.Z. Injurious or noninjurious defect identification from MFL images in pipeline inspection using convolutional neural network. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 12, 1137–1149.
30. Lawrence, S.; Giles, C.L.; Tsoi, A.C.; Back, A.D. Face recognition: A convolutional neural-network approach. IEEE Trans. Neural Netw. 1997, 8, 98–113.
31. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
32. Callet, P.L.; Gaudin, C.V.; Barba, D. A convolutional neural network approach for objective video quality assessment. IEEE Trans. Neural Netw. 2006, 17, 1316–1327.
33. Mao, Q.; Dong, M.; Huang, Z.; Zhan, Y. Learning salient features for speech emotion recognition using convolutional neural networks. IEEE Trans. Multimed. 2014, 16, 2203–2213.
34. Swietojanski, P.; Ghoshal, A.; Renals, S. Convolutional neural networks for distant speech recognition. IEEE Signal Process. Lett. 2014, 21, 1120–1124.
35. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
36. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional neural networks for medical image analysis: Full training or fine tuning. IEEE Trans. Med. Imaging 2016, 35, 1299–1312.
37. Chen, Y.H.; Emer, J.; Sze, V. Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks. IEEE Micro 2017, 37, 12–21.
38. Xu, Z.H.; Zha, X.M.; Chen, H.G.; Sun, Y.H.; Long, M.Z. A simulation study for locating defects in tubes using the weak MFL signal based on the multi-channel correlation technique. Insight 2015, 57, 518.
39. Saha, S.; Mukhopadhyay, S.; Mahapatra, U.; Bhattacharya, S.; Srivastava, G.P. Empirical structure for characterizing metal loss defects from radial magnetic flux leakage signal. NDT E Int. 2010, 43, 507–512.
40. Mukherjee, S.; Huang, X.; Udpa, L.; Deng, Y. A kriging-based magnetic flux leakage method for fast defect detection in massive pipelines. J. Nondestruct. Eval. Diagn. Progn. Eng. Syst. 2022, 5, 011002.
41. Bhavani, N.P.G.; Senthilkumar, G.; Kunjumohamad, S.C.; Pazhani, A.J.; Kumar, R.; Mehbodniya, A.; Webber, J.L. Real-time inspection in detection magnetic flux leakage by deep learning integrated with concentrating non-destructive principle and electromagnetic induction. IEEE Instrum. Meas. Mag. 2022, 25, 48–54.
42. Huang, S.L.; Peng, L.S.; Wang, Q.; Wang, S.; Zhao, W. An opening profile recognition method for magnetic flux leakage signals of defect. IEEE Trans. Instrum. Meas. 2019, 68, 2229–2236.
43. Ravan, M.; Amineh, R.K.; Koziel, S.; Nikolova, N.K.; Reilly, J.P. Sizing of 3-D arbitrary defects using magnetic flux leakage measurements. IEEE Trans. Magn. 2010, 46, 1024–1033.
44. Liu, B.; Luo, N.; Feng, G. Quantitative study on MFL signal of pipeline composite defect based on improved magnetic charge model. Sensors 2021, 21, 3412.
45. Tang, Y.; Pan, M.; Chou, L.; Fei, L. Feature extraction based on the principal component analysis for pulsed magnetic flux leakage testing. In Proceedings of the 2011 International Conference on Mechatronic Science, Electric Engineering and Computer (MEC 2011), Jilin, China, 19–22 August 2011; pp. 2563–2566.
46. Barajas, A.A.; Parra, R.J.; Arizmendi, C.J. Magnetic flux leakage detection in non-destructive tests performed on ferromagnetic pieces, using signal processing techniques and data mining. In Proceedings of the 2014 III International Congress of Engineering Mechatronics and Automation (CIIMA), Cartagena, Colombia, 22–24 October 2014; pp. 1–5.
47. Yang, L.J.; Huang, P.; Gao, S.W.; Du, Z.Z.; Bai, S. Research on the magnetic flux leakage field distribution characteristics of defect in low-frequency electromagnetic detection technique. IEICE Electron. Express 2021, 18, 20200362.
48. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: Berlin, Germany, 1995; pp. 988–999.
49. Liu, J.; Fu, M.; Wu, Z.; Su, H. An ELM-based classifier about MFL inspection of pipeline. In Proceedings of the 2016 Chinese Control and Decision Conference (CCDC), Yinchuan, China, 28–30 May 2016; pp. 1952–1955.
50. Felzenszwalb, P.F.; Girshick, R.B.; McAllester, D.; Ramanan, D. Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1627–1645.
51. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
52. Liu, J.; Fu, M.; Tang, J. MFL inner detection based defect recognition method. Chin. J. Sci. Instrum. 2016, 37, 2572–2579.
53. Lim, T.Y.; Ratnam, M.M.; Khalid, M.A. Automatic classification of weld defects using simulated data and an MLP neural network. Insight 2007, 49, 154–159.
54. Shin, H.C.; Roth, H.R.; Gao, M.C.; Lu, L.; Xu, Z.Y.; Nogues, I.; Yao, J.H.; Mollura, D.; Summers, R.M. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298.
55. Cun, Y.L.; Boser, B.; Denker, J.S.; Howard, R.E.; Hubbard, W.; Jackel, L.D.; Henderson, D. Handwritten digit recognition with a back-propagation network. Adv. Neural Inf. Process. Syst. 1990, 2, 396–404.
56. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
57. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–14.
58. Szegedy, C.; Liu, W.; Jia, Y.Q.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. arXiv 2014, arXiv:1409.4842v1.
59. Li, F.; Feng, J.; Lu, S.; Liu, J.; Yao, Y. Convolution neural network for classification of magnetic flux leakage response segments. In Proceedings of the 6th Data Driven Control and Learning Systems (DDCLS), Chongqing, China, 26–27 May 2017; pp. 152–155.
60. Liu, S.C.; Wang, H.J.; Li, R. Attention module magnetic flux leakage linked deep residual network for pipeline in-line inspection. Sensors 2022, 22, 2230.
61. Zhang, M.; Guo, Y.; Wang, D.; He, R.; Chen, J. Diagnosis and recognition of pipeline damage defects based on improved convolutional neural network. In Proceedings of the 7th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 22–24 June 2022; pp. 1278–1284.
62. Zhang, J.P.; Zhang, J.M.; Yu, S. Hot anchors: A heuristic anchors sampling method in RCNN-based object detection. Sensors 2018, 18, 3415.
63. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
64. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
65. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
66. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision–ECCV 2016, Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37.
67. Li, X. Research on Defect Recognition Method of Pipeline Magnetic Flux Leakage Internal Detection Based on Object Detection. Ph.D. Thesis, Shenyang University of Technology, Shenyang, China, 2022. (In Chinese)
68. Yang, L.; Wang, Z.; Gao, S. Pipeline magnetic flux leakage image detection algorithm based on multiscale SSD network. IEEE Trans. Ind. Inform. 2019, 16, 501–509.
69. Jiang, L.; Zhang, H.G.; Liu, J.H.; Shen, X.K.; Xu, H. A multisensor cycle-supervised convolutional neural network for anomaly detection on magnetic flux leakage signals. IEEE Trans. Ind. Inform. 2022, 18, 7619–7627.
70. Jiang, L.; Zhang, H.; Liu, J.; Shen, X.; Xu, H. THMS-Net: A two-stage heterogeneous signals mutual supervision network for MFL weak defect detection. IEEE Trans. Instrum. Meas. 2022, 71, 1–9.
71. Jin, X.H.; Zhao, Y.Q. Design of power station pipeline magnetic flux leakage inspection system based on the optimization of regression estimate. Comput. Meas. Control 2014, 22, 373–384.
72. Minkov, D.; Shoji, T. Method for sizing of 3-D surface breaking flaws by leakage flux. NDT E Int. 1998, 31, 317–324.
73. Khodayari-Rostamabad, A.; Reilly, J.P.; Nikolova, N.K.; Hare, J.R.; Pasha, S. Machine learning techniques for the analysis of magnetic flux leakage images in pipeline inspection. IEEE Trans. Magn. 2009, 45, 3073–3084.
74. Dutta, S.M.; Ghorbel, F.H.; Stanley, R.K. Simulation and analysis of 3-D magnetic flux leakage. IEEE Trans. Magn. 2009, 45, 1966–1972.
75. Dutta, S.M.; Ghorbel, F.H.; Stanley, R.K. Dipole modeling of magnetic flux leakage. IEEE Trans. Magn. 2009, 45, 1959–1965.
76. Priewald, R.H.; Magele, C.; Ledger, P.D.; Pearson, N.R.; Mason, S.D. Fast magnetic flux leakage signal inversion for the reconstruction of arbitrary defect profiles in steel using finite elements. IEEE Trans. Magn. 2013, 49, 506–516.
77. Ramuhalli, P.; Udpa, L.; Udpa, S.S. Neural network-based inversion algorithms in magnetic flux leakage nondestructive evaluation. J. Appl. Phys. 2003, 93, 8274–8276.
78. Chen, J.J.; Huang, S.L.; Zhao, W. Reconstruction of arbitrary defect profiles from three-axial MFL signals based on metaheuristic optimization method. Int. J. Appl. Electromagn. Mech. 2015, 49, 223–237.
79. Han, W.H.; Xu, J.; Wang, P.; Tian, G.Y. Defect profile estimation from magnetic flux leakage signal via efficient managing particle swarm optimization. Sensors 2014, 14, 10361–10380.
80. Chen, J.J.; Huang, S.L.; Zhao, W. Equivalent MFL model of pipelines for 3-D defect reconstruction using simulated annealing inversion procedure. Int. J. Appl. Electromagn. Mech. 2015, 47, 551–561.
81. Qiu, Z.C.; Zhang, R.L.; Zhang, W.M.; Li, L.X. Quantitative identification of microcracks through magnetic flux leakage testing based on improved back-propagation neural network. Insight 2019, 61, 90–94.
82. Feng, J.; Li, F.M.; Lu, S.X.; Liu, J.H. Fast reconstruction of defect profiles from magnetic flux leakage measurements using a RBFNN based error adjustment methodology. IET Sci. Meas. Technol. 2017, 11, 262–269.
83. Kandroodi, M.R.; Araabi, B.N.; Bassiri, M.M.; Ahmadabadi, M.N. Estimation of depth and length of defects from magnetic flux leakage measurements: Verification with simulations, experiments, and pigging data. IEEE Trans. Magn. 2016, 53, 1–10.
84. Hwang, K.; Mandayam, S.; Udpa, S.S.; Udpa, L.; Lord, W.; Afzal, M. Characterization of gas pipeline inspection signals using wavelet basis function neural networks. NDT E Int. 2000, 33, 531–545.
85. Cheng, D.; Huang, S.L.; Zhao, W.; Wang, S. Research on quantification of defects on tank floor based on particle swarm optimization-least square support vector machine. Electr. Meas. Instrum. 2019, 55, 89–92.
86. Piao, G.Y.; Guo, J.B.; Hu, T.H.; Leung, H.; Deng, Y.M. Fast reconstruction of 3-D defect profile from MFL signals using key physics-based parameters and SVM. NDT E Int. 2019, 103, 26–38.
87. Wang, H.A.; Chen, G.M. Defect size estimation method for magnetic flux leakage signals using convolutional neural networks. Insight 2020, 62, 86–91.
88. Lu, S.X.; Feng, J.; Zhang, H.G.; Liu, J.H.; Wu, Z.N. An estimation method of defect size from MFL image using visual transformation convolutional neural network. IEEE Trans. Ind. Inform. 2019, 15, 213–224.
89. Zhang, M.; Guo, Y.B.; Xie, Q.J.; Zhang, Y.S.; Wang, D.G.; Chen, J.Z. Estimation of defect size and cross-sectional profile for the oil and gas pipeline using visual deep transfer learning neural network. IEEE Trans. Instrum. Meas. 2023, 72, 1–13.
90. Wu, Z.N.; Deng, Y.M.; Liu, J.H.; Wang, L.X. A reinforcement learning-based reconstruction method for complex defect profiles in MFL inspection. IEEE Trans. Instrum. Meas. 2021, 70, 1–10.
91. Xiong, J.Y.; Liang, W.; Liang, X.B.; Yao, J.M. Intelligent quantification of natural gas pipeline defects using improved sparrow search algorithm and deep extreme learning machine. Chem. Eng. Res. Des. 2022, 183, 567–579.
92. Caruana, R.; Lou, Y.; Gehrke, J.; Koch, P.; Sturm, M.; Elhadad, N. Intelligible models for health care: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; ACM: New York, NY, USA, 2015; pp. 1721–1730.
93. Vaughan, J.; Sudjianto, A.; Brahimi, E.; Chen, J.; Nair, V.N. Explainable neural network: Based on additive index models. RMA J. 2018, 101, 40–49.
94. Yang, Z.B.; Zhang, A.J.; Sudjianto, A. GAMI-Net: An explainable neural network based on generalized additive models with structured interactions. Pattern Recognit. 2021, 120, 108192.
95. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 1135–1144.
96. Lundberg, S.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4766–4775.
97. Goldstein, A.; Kapelner, A.; Bleich, J.; Pitkin, E. Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. J. Comput. Graph. Stat. 2015, 24, 44–65.
98. Sun, H.Y.; Peng, L.S.; Huang, S.L.; Li, S.S.; Long, Y.; Wang, S.; Zhao, W. Development of a physics-informed doubly fed cross-residual deep neural network for high-precision magnetic flux leakage defect size estimation. IEEE Trans. Ind. Inform. 2021, 18, 1629–1640.
99. Richter, S.R.; Vineet, V.; Roth, S.; Koltun, V. Playing for data: Ground truth from computer games. arXiv 2016, arXiv:1608.02192v1.
100. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661.
101. Jain, S.; Seth, G.; Paruthi, A.; Soni, U.; Kumar, G. Synthetic data augmentation for surface defect detection and classification using deep learning. J. Intell. Manuf. 2022, 33, 1007–1020.
102. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434.
103. Odena, A.; Olah, C.; Shlens, J. Conditional image synthesis with auxiliary classifier GANs. arXiv 2017, arXiv:1610.09585v4.
104. Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 5–10 December 2016; pp. 1–9.
105. Ren, Y.F.; Liu, J.H.; Zhang, J.A.; Jiang, L.; Luo, Y.H. A data reconstruction method based on adversarial conditional variational autoencoder. In Proceedings of the 2020 IEEE 9th Data Driven Control and Learning Systems Conference (DDCLS), Liuzhou, China, 20–22 November 2020; pp. 622–626.
106. Li, Y.; Gao, X.; Zheng, Q.; Gao, P.P.; Zhang, N. Weld cracks nondestructive testing based on magneto-optical imaging under alternating magnetic field excitation. Sens. Actuators A Phys. 2019, 285, 289–299.
107. Feng, C.R.; Zhang, Z.; Bai, L.B.; Tian, L.L.; Zhang, J.; Cheng, Y.H. Study on the lowest spatial resolution of magnetic flux leakage testing for weld cracks. In Proceedings of the 2020 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD), Xi’an, China, 15–17 October 2020; pp. 396–400.
Figure 1. Organization of the paper.
Figure 2. Basic schematic diagram of pipeline MFL testing.
Figure 3. Defects of a pipeline. (a) metal loss; (b) corrosion; (c) sag; (d) weld defect.
Figure 4. Welds of a pipeline. (a) girth weld; (b) spiral weld; (c) straight weld.
Figure 5. Special components of a pipeline. (a) flange; (b) tee; (c) small opening; (d) valve.
Figure 6. Process of pipeline safety assessment.
Figure 7. Structure of MFL internal detector.
Figure 8. Structure of Convolutional Neural Network.
Figure 9. Process of the traditional pipeline anomaly recognition methods.
Figure 10. Different structures for two-stage and one-stage.
Figure 11. Similarity and difference between iteration-based method and RL-based method.
Figure 12. Four traditional data-augmentation operations for MFL image.
Figure 13. The basic framework of GAN.
Table 1. List of acronyms.
Acronym | Meaning
AC-GAN | Auxiliary classifier generative adversarial network
AE | Acoustic emission
AI | Artificial intelligence
ALE | Accumulated local effects plot
ANN | Artificial neural network
BP | Back propagation
CNN | Convolutional neural network
CsCNN | Cycle-supervised convolutional neural network
CVAE | Conditional variational autoencoder
DAR-SSD | Dilated attention residual single shot multibox detector
DC | Direct current
DCGAN | Deep convolutional generative adversarial network
DELM | Deep extreme learning machine
DfedResNet | Doubly fed cross-residual network
DK-DBLF | Domain-knowledge-based deep-broad learning framework
DL | Deep learning
DPM | Deformable part model
DSSD | Deconvolutional single shot detector
EBM | Explainable boosting machine
EC | Eddy current
ELM | Extreme learning machine
EUGW | Electromagnetic ultrasonic guided wave
FEM | Finite element model
FL | Fuzzy logic
FSSD | Feature fusion single shot multibox detector
GA | Genetic algorithm
GAMI | Generalized additive models with structured interactions
GAMxNN | Explainable neural network based on generalized additive model
GAN | Generative adversarial network
ICA | Independent component analysis
I-CNN | Improved convolutional neural network
InfoGAN | Information maximizing generative adversarial network
ISSA | Improved sparrow search algorithm
LDA | Linear discriminant analysis
LIME | Local interpretable model-agnostic explanations
LRN | Local response normalization
MAE | Mean absolute error
MAPE | Mean average precision error
MDM | Magnetic dipole model
MFL | Magnetic flux leakage
MK-MMD | Multikernel maximum mean discrepancy
MOI | Magneto-optical imaging
MSE | Mean squared error
NDT | Nondestructive testing
PCA | Principal component analysis
PSO | Particle swarm optimization
RBF | Radial basis function
RCNN | Region-based convolutional neural network
ReLU | Rectified linear unit
RL | Reinforcement learning
RMSE | Root mean square error
ROI | Region of interest
RSSD | Rainbow single shot multibox detector
SHAP | Shapley additive explanations
SSC-CNN | Sparse self-coding-based convolutional neural network
SSD | Single shot multibox detector
SVM | Support vector machine
THMS-Net | Two-stage heterogeneous signals mutual supervision network
VGG | Visual geometry group
VT-CNN | Visual transformation convolutional neural network
WGAN | Wasserstein generative adversarial network
YOLO | You only look once
Table 2. Summary of the related works.
Survey | Year | Contributions and Limitations
[21] | 2014 | Survey of the safety assessment of oil and gas pipelines based on artificial neural networks
[23] | 2015 | Survey of the principle, measuring methods, and quantitative analysis algorithms of MFL testing
[22] | 2017 | Survey of the defect detection, estimation, and classification methods based on computational intelligence
[24] | 2017 | Survey of the inspection principle, weld defect identification, and quantification methods based on in-line inspection technologies
[25] | 2017 | Survey of pipeline corrosion, in-line inspection, and corrosion growth rate models
[26] | 2020 | Survey of the prediction methods of the corrosion growth rate of oil and gas pipelines based on ANN and FL
[27] | 2020 | Survey of MFL signal processing, defect characterization, data matching, and growth prediction in pipelines
[28] | 2022 | Survey of the advances in pipeline condition assessment using machine learning methods
This survey | 2023 | Survey of the applications of deep learning in oil and gas pipeline MFL detection and evaluation, covering anomaly recognition, object detection, defect quantification, and MFL data augmentation
Table 3. Summary of the application of deep learning for pipeline anomaly recognition.
Ref. | Model | Dataset Size | Objective | Performance Metric | Value
[59] | CNN | 24,000 samples | Classify the defects, tees, and cathodic protections in the pipeline | Accuracy | 0.974
[29] | AlexNet | 28,500 samples | Classify the injurious and noninjurious defects from the MFL images | Accuracy | 0.983
[60] | VGG16 | Not noted | Classify the anomalies of the pipeline, including welds, tees, flanges, and corrosions | Accuracy | 0.977
[18] | SSC-CNN | 2000 samples | Classify the girth welds and spiral welds from pipeline MFL signal images | Accuracy | 0.951
[61] | I-CNN | 1000 samples | Classify the anomalies of the pipeline, including spiral weld, defective spiral weld, and ring weld | Accuracy | 0.974
Table 4. Summary of the application of deep learning for pipeline object detection.
Ref. | Model | Dataset Size | Objective | Performance Metric | Value
[67] | YOLOv5 | 1400 samples | Identify and locate the defects with different types from the MFL signal of the pipeline | MAPE | 0.928
[68] | DAR-SSD | 2000 samples | Locate and classify the defects, girth weld, and spiral weld from the MFL image of the pipeline | Accuracy | 0.953
[69] | CsCNN | 47,952 samples | Identify and locate the pipeline anomaly without supervision and prior information | Precision | 0.935
[70] | THMS-Net | 500 samples | Identify and locate the defects with different sizes from the MFL signal of the pipeline | Accuracy | 0.949
Table 5. Summary of the application of deep learning for pipeline defect quantification.
Ref. | Model | Dataset Size | Objective | Performance Metrics and Values
[87] | CNN | 4065 samples | Identify and quantify the defects with different types from the MFL signal | Length error band ±2 mm; width error band ±2 mm; depth error band ±5%
[88] | VT-CNN | 30,000 samples | Estimate the defect size from MFL images | Length mean error 2.1 mm; width mean error 3.3 mm; depth mean error 2.6%
[89] | VDTL | 16,000 samples | Estimate the defect size and cross-sectional profile for the oil and gas pipeline | Length prediction error 0.67 mm; depth prediction error 0.97%; profile prediction error 2.67%
[90] | RL | 6000 samples | Reconstruct the complex depth profile of defects | PDE of defect depth 2.94%; PDE of defect profile 89.2%
[91] | ISSA-DELM | 325 samples | Intelligent quantitative evaluation of natural gas pipeline defects | RMSE of defect depth 0.120; MAE of defect depth 0.101; MAPE of defect depth 6.38%
[98] | DfedResNet | 3321 samples | Estimate the defect size from three-dimensional MFL images | Length prediction error 0.173 mm; width prediction error 0.206 mm; depth prediction error 0.317%
[17] | DK-DBLF | 21,000 samples | Estimate the length, width, and depth of pipeline defects via MFL measurements | Length prediction error 5.0 mm; width prediction error 6.9 mm; depth prediction error 0.89%
Table 6. Summary of the application of deep learning for data augmentation.
Ref. | Model | Dataset Size after Augmentation | Objective | Performance Metrics and Values
[101] | DCGAN, AC-GAN, InfoGAN | 3600 samples | Data augmentation for defect detection and classification using deep learning | Sensitivity: DCGAN 95.33%, AC-GAN 92.28%, InfoGAN 94.06%; Specificity: DCGAN 99.16%, AC-GAN 98.56%, InfoGAN 98.81%
[105] | CVAE-GAN | 3500 samples | Reconstruct the missing MFL samples and augment diverse defect samples | RMSE 0.05
[19] | US-WGAN | 20,000 samples | Data enhancement for the detection and classification of oil and gas pipelines | Training accuracy 98.8%; test accuracy 98.2%
