Article

A Novel Three-Dimensional Reconstruction Technology for the Defect Inspection of Tubing and Casing

1 School of Mechatronic Engineering, Southwest Petroleum University, Chengdu 610500, China
2 Oil and Gas Equipment Technology Sharing and Service Platform of Sichuan Province, Chengdu 610500, China
3 China National Quality Inspection and Testing Center of Oil Tubular Goods, Xi’an 710077, China
4 CNPC Tubular Goods Research Institute, Xi’an 710077, China
5 Xi’an Hypervision Technology Co., Ltd., Xi’an 710100, China
* Authors to whom correspondence should be addressed.
Processes 2023, 11(7), 2168; https://doi.org/10.3390/pr11072168
Submission received: 7 June 2023 / Revised: 14 July 2023 / Accepted: 14 July 2023 / Published: 20 July 2023
(This article belongs to the Special Issue New Research on Oil and Gas Equipment and Technology)

Abstract

The three-dimensional reconstruction of high-gloss/reflective and low-texture objects (e.g., oil casing threads) is a complex task. In this paper, we present a novel approach that combines convolutional neural networks (CNNs) and a multi-layer perceptron (MLP) with traditional three-dimensional reconstruction methods, thereby enhancing detection efficiency. Our method utilizes a dataset of 800 samples covering a variety of thread defects to train a U-Net-like model as a three-dimensional reconstructor. An MLP model is then proposed to improve the accuracy of the three-dimensional reconstructed thread profile to the level of three-coordinate measurements through a regression analysis. The experimental results demonstrate that the method can effectively detect the black-crested threads of oil casing connections and quantify their proportion in the entire sample for accurate quality assessment. The method is easy to operate and detects black-crested threads effectively, providing a powerful tool for oil companies to safeguard their exploration benefits.

1. Introduction

Three-dimensional reconstruction technology transforms real-world scenes into mathematical models that conform to logical computer representations through depth data acquisition, preprocessing, point cloud registration and fusion, and surface generation, among other steps. Such models support applications such as heritage conservation, game development, architectural design, and clinical medicine. Scholars have conducted extensive research on three-dimensional reconstruction models [1,2,3,4,5,6,7], including an improved omnidirectional Laplacian operator with an adaptive window [2], a nonsubsampled wavelet transform (NSWT)-based three-dimensional reconstruction [3], a precision-enhanced three-dimensional shape reconstruction with structured light and a deep convolutional neural network (CNN) [5], an intensity-adaptive adjustment method that eliminates overshoot-induced saturated pixels [6], and a polarization-based three-dimensional imaging model for restoring the three-dimensional geometry of multi-color Lambertian objects [7].
Tubing and casing are essential components in oil and gas exploration and development. Their connections generally have tapered threads, and unless special design measures are taken, incompletely crested threads may occur when the thread cutter is retracted. According to API Spec 5B, black-crested threads are characterized by the absence of the original mill surface [8], as shown in Figure 1. The standard also sets out qualitative and quantitative requirements for determining the quality of tubing and casing.
For round-threaded tubing and casing, black-crested threads are not permitted within the minimum length of full-crest threads from the end of the pipe (Lc). For buttress-thread casing, up to two threads showing the original surface of the pipe on their crests, for a circumferential distance not exceeding 25% of the pipe circumference, are permissible within the Lc length, and the remaining threads in the Lc thread length shall be full-crested. Threads within the Lc area that are not fully crested or that still show the original surface of the pipe or upset shall not be made to appear full-crested, either mechanically or by hand [9,10].
The black-crested thread is a type of defect, and inspectors usually pick out black-crested threads by naked-eye observation. According to statistics, more than 15% of the rejected parts produced in the manufacture of oil tubing and casing are caused by an insufficient minimum length of full-crest thread, that is, by black-crested threads exceeding the standard. Manual observation is not only demanding on workers' eyesight; the quantitative analysis of black-crested threads also relies on subjective judgement, which results in a high misjudgment rate and low efficiency for the enterprise.
This paper presents a new method for black-crested thread detection. The method uses white light illumination, image superposition, depth processing, and other imaging techniques to analyze the texture characteristics of the thread surface quickly and accurately. It provides an effective solution for improving enterprise production informatization and product quality control.

2. Theory and Methods

In this section, we detail how existing 3D reconstruction techniques can be improved to effectively detect black-crested threads in oil tubing and casing. We integrated deep learning into traditional 3D reconstruction technology, as shown in Figure 2. Firstly, we reconstructed the object using a novel 3D reconstruction technology. Secondly, the surfaces of the reconstructed object were stitched together using a point cloud registration method. Thirdly, the point cloud was recovered using deep learning, leading to point cloud surface reconstruction. Finally, deep learning was adopted to detect defects on the object. The following subsections explain the specific implementation process.

2.1. High-Precision Three-Dimensional Reconstruction Technology with AI

The three-dimensional reconstruction of the measured object is a prerequisite for defect detection. In general, traditional methods based on 2D images [11,12,13,14,15,16,17,18] have difficulty detecting small defects. Therefore, this subsection describes how to perform a high-precision 3D reconstruction of the measured object.
Specifically, a 3D reconstruction method based on optical information modeling was developed and applied to reconstruct the object, as presented in Figure 3. Firstly, the object was imaged with a monocular camera. Then, by combining the optical properties with a convolutional neural network, the global contour and the local details of the object within the camera's coverage were recovered in depth. In this way, the method realizes the high-precision 3D reconstruction of the object.
In this paper, we developed a single-view 3D reconstruction method using defocus cues. Specifically, a stack of images was captured by moving the camera from near to far. In this process, the sharpness of each point varies according to the Circle of Confusion (CoC) curve, which is given by
$$\mathrm{CoC} = \frac{\left| d_0 - F \right|}{d_0} \cdot \frac{f^2}{N \left( F - f \right)}$$
where $d_0$ is the distance from the object to the lens, $F$ refers to the focus distance, $f$ denotes the focal length of the lens, and $N$ signifies the f-number. For a given camera, the CoC curve is a function of the defocus variation with $d_0$. Conversely, the distance from the object to the lens, i.e., $d_0$, can be estimated from the CoC, whose value reflects the defocus of each point.
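To make Equation (1) concrete, the short sketch below evaluates the CoC curve for an assumed 50 mm f/2.8 lens focused at 500 mm; all parameter values are illustrative and are not the camera settings used in this work.

```python
import numpy as np

def coc(d0, F, f, N):
    """Circle-of-confusion diameter from Equation (1).

    d0: object-to-lens distance; F: focus distance; f: focal length;
    N: f-number. All lengths share one unit (mm here).
    """
    return np.abs(d0 - F) / d0 * f**2 / (N * (F - f))

# Illustrative (assumed) parameters: a 50 mm lens at f/2.8 focused at 500 mm.
d0 = np.linspace(300.0, 900.0, 7)
print(coc(d0, F=500.0, f=50.0, N=2.8))
# The CoC is 0 at d0 = 500 mm and grows away from the focus plane, so the
# measured blur of a point constrains its distance d0.
```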
In our work, we proposed applying a convolutional neural network to estimate the defocus value of each point in the image. Specifically, a network similar to U-Net [19,20] was selected, which took each image as the input and produced the corresponding defocus map as the output. Notably, the defocus map has the same size as the image, since the defocus is computed for each point. The defocus network follows an encoder–decoder architecture. The encoder contains four down-sampling layers that reduce the image size; during down-sampling, the network learns shallow features from the image. Each layer adopts two convolutional kernels and a 2 × 2 max-pooling operation, so after the four down-sampling layers the image is reduced to 1/16 of its original size. Four up-sampling layers, symmetric with the down-sampling layers, are then adopted to estimate the defocus map, which is therefore the same size as the original image.
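As an illustrative sketch (not the authors' exact configuration), the following PyTorch model implements such a U-Net-like encoder–decoder: four down-sampling stages of two convolutions each followed by 2 × 2 max pooling, and four symmetric up-sampling stages with skip connections. The 3 × 3 kernel size and the base width of 32 channels are assumptions.

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    # Two convolutions per stage, as described above (3x3 kernels assumed).
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class DefocusNet(nn.Module):
    """U-Net-like encoder-decoder: 4 down-sampling and 4 symmetric
    up-sampling stages; the output map matches the input size."""
    def __init__(self, in_ch=8, out_ch=1, base=32):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.enc = nn.ModuleList()
        c = in_ch
        for co in chs:
            self.enc.append(double_conv(c, co))
            c = co
        self.pool = nn.MaxPool2d(2)                  # 2x2 max pooling
        self.mid = double_conv(chs[-1], chs[-1] * 2)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        c = chs[-1] * 2
        for co in reversed(chs):
            self.up.append(nn.ConvTranspose2d(c, co, 2, stride=2))
            self.dec.append(double_conv(co * 2, co))  # concat with skip
            c = co
        self.head = nn.Conv2d(c, out_ch, 1)

    def forward(self, x):
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)           # after 4 pools: 1/16 of the input size
        x = self.mid(x)
        for up, dec, s in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), s], dim=1))
        return self.head(x)

# 8-channel input (RGB + HSV + gray + edge, see below) -> 1-channel defocus map
net = DefocusNet()
print(net(torch.randn(1, 8, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])
```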
In addition, an image carries a large amount of information, such as color, texture, and illumination, all of which are essential for the 3D reconstruction task. We therefore explored different color spaces, such as grayscale and HSV, to capture the color and illumination information, and applied a Canny edge detector to extract the texture information. Each piece of information was formulated as a matrix of the same size as the image, and all of it was integrated along the channel dimension. Under this scheme, the input to the defocus network contained 8 channels (RGB, HSV, gray, and edge), and the output was a single channel, i.e., the defocus value.
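One possible assembly of this 8-channel input with OpenCV is sketched below; the Canny thresholds (50, 150), the normalization, and the stand-in image are assumed choices.

```python
import cv2
import numpy as np

def make_input(bgr):
    """Stack color, illumination, and texture channels along the channel
    axis: BGR (3) + HSV (3) + gray (1) + Canny edge (1) = 8 channels.
    Note that OpenCV stores color images in BGR order."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edge = cv2.Canny(gray, 50, 150)             # thresholds are assumed values
    stack = np.dstack([bgr, hsv, gray, edge]).astype(np.float32) / 255.0
    return np.transpose(stack, (2, 0, 1))       # (8, H, W), channels first

bgr = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in image
print(make_input(bgr).shape)                    # (8, 256, 256)
```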
After the defocus maps were obtained, they were utilized to reconstruct the 3D surface. Specifically, another U-Net-like network was adopted as the 3D reconstructor, with an architecture similar to that of the defocus network described above. It took the defocus maps of the focal stack as the input and output a depth map. Notably, the input was obtained by stacking the defocus maps along the channel dimension, so the number of input channels was determined by the number of images in the focal stack; the output was the single-channel depth.
It is difficult to supervise the defocus network directly, as the defocus values have no ground truth. In the optimization process, only the depth map was supervised. The loss function is formulated as follows:
$$L = \sum_{i=1}^{n} \left\| I_d - I_d^{gt} \right\|_2^2,$$
where $L$ denotes the loss in the optimization process, $n$ refers to the number of samples in the training set, $I_d$ and $I_d^{gt}$ stand for the estimated depth map and the ground truth, respectively, and $\left\| \cdot \right\|_2$ represents the L2 norm. The proposed method was implemented on the PyTorch platform for training and testing. In practice, 500 focal stacks were utilized for training, and a total sample size of 800 was used to train our deep learning model, which achieved satisfactory results. The learning rate was fixed at 1 × 10−4, the decay rate was set to 0.1, and the total number of training epochs was 1000. Depth inference ran at more than 16 fps on an Nvidia 2080 Ti. Compared with existing 3D reconstruction methods, the deep learning method proposed in this paper offers high accuracy, fast speed, wide adaptability, and good robustness.
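A minimal training loop matching these settings might look as follows; the focal-stack size of 20, the step schedule, and the stand-in data loader are assumptions, and DefocusNet refers to the sketch given earlier.

```python
import torch
import torch.nn as nn

# Stand-in data: batches of 20-map defocus stacks with ground-truth depth.
loader = [(torch.randn(2, 20, 128, 128), torch.randn(2, 1, 128, 128))]

recon_net = DefocusNet(in_ch=20, out_ch=1)  # reconstructor sketched above;
                                            # one channel per focal-stack image
opt = torch.optim.Adam(recon_net.parameters(), lr=1e-4)  # fixed learning rate
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=400, gamma=0.1)  # decay 0.1
loss_fn = nn.MSELoss()  # squared-L2 depth supervision, cf. Equation (2)

for epoch in range(1000):                    # 1000 epochs, as in the text
    for defocus_stack, depth_gt in loader:
        opt.zero_grad()
        loss_fn(recon_net(defocus_stack), depth_gt).backward()
        opt.step()
    sched.step()
```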

2.2. Three-Dimensional Point Cloud Registration Technology

The field of view of each detection was small, and there were overlapping areas between every two detections. To improve the efficiency and accuracy of object detection, point cloud alignment algorithms [21,22] are needed to generate a complete three-dimensional scene of the object. The traditional iterative closest point (ICP) algorithm is an optimal registration algorithm based on the least-squares method. It is sensitive to the initial position of the registration point cloud and to noise points, which can cause the iteration to converge to a wrong local minimum. We therefore adopted a point cloud registration algorithm based on end-to-end learning [23], which is insensitive to initialization and robust to noise. The algorithm architecture is as follows.
Firstly, we extracted features from the source point cloud. Specifically, for a point $x_c$ from the point cloud, its neighborhood is denoted as $N(x_c)$. Thus,
$$F_{x_c} = f_\theta \left( x_c,\ \Delta x_{c,i},\ \mathrm{PPF}\left( x_c, x_i \right) \right).$$
where $f_\theta$ represents the neural network, $\theta$ denotes the parameters to be learned, $x_i \in N(x_c)$, $\Delta x_{c,i} = x_i - x_c$, and $\mathrm{PPF}(x_c, x_i)$ is a 4D feature, composed as follows:
$$\mathrm{PPF}\left( x_c, x_i \right) = \left( \angle \left( n_c, \Delta x_{c,i} \right),\ \angle \left( n_i, \Delta x_{c,i} \right),\ \angle \left( n_c, n_i \right),\ \left\| \Delta x_{c,i} \right\|_2 \right),$$
where $\angle(\cdot,\cdot)$ signifies the angle between two vectors and $n_c$ and $n_i$ are the normals of the points $x_c$ and $x_i$, respectively. The PPF feature is 4-dimensional, so for a particular $i$, $\left( x_c, \Delta x_{c,i}, \mathrm{PPF}(x_c, x_i) \right)$ is a 10-dimensional vector. These vectors were batch-processed by a multi-layer perceptron (MLP) with shared parameters; max pooling and a further MLP then aggregated the data features, which were finally normalized to obtain $F_{x_c}$.
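The sketch below computes the 4D PPF of Equation (4) for one neighbor pair with NumPy; the coordinates and normals are made up for illustration.

```python
import numpy as np

def angle(u, v):
    """Angle between two vectors, in radians."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def ppf(xc, nc, xi, ni):
    """4D point-pair feature of Equation (4)."""
    d = xi - xc
    return np.array([angle(nc, d), angle(ni, d), angle(nc, ni),
                     np.linalg.norm(d)])

# One neighbor pair with made-up coordinates/normals; concatenating x_c (3),
# Delta x_ci (3), and the PPF (4) gives the 10-dimensional input of Eq. (3).
xc, nc = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
xi, ni = np.array([0.1, 0.0, 0.02]), np.array([0.0, 0.1, 0.99])
print(ppf(xc, nc, xi, ni))
```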
Secondly, we executed the parameter prediction module. Unlike traditional manually tuned parameters, this module uses a network to predict the algorithm parameters $\alpha$ and $\beta$. The source point cloud $X$ of shape $(J, 3)$ and the target point cloud $Y$ of shape $(K, 3)$ were taken as inputs and concatenated into a whole of shape $(J+K, 3)$. To distinguish which input cloud each point belongs to, a fourth feature was added, so the input to the parameter prediction module has dimension $(J+K, 4)$. To ensure that the predicted parameters are all positive, the softplus function was used as the activation function.
Thirdly, we computed the matching matrix. Two feature matrices, $F_X \in \mathbb{R}^{J \times C}$ and $F_Y \in \mathbb{R}^{K \times C}$, were obtained from the feature extraction module, and the corresponding parameters $\alpha$ and $\beta$ were obtained from the parameter prediction module. The matching matrix then encodes the correspondence between points. In the matching matrix $M \in \mathbb{R}^{J \times K}$, the initial value $m_{jk}$ of $M$ is
$$m_{jk} = e^{-\beta \left( \left\| F_{x_j} - F_{y_k} \right\|_2 - \alpha \right)}.$$
In its hard form, the matching matrix $M \in \{0, 1\}^{J \times K}$ represents the point assignments, where each element is
$$m_{jk} = \begin{cases} 1 & \text{if } x_j \text{ corresponds to } y_k \\ 0 & \text{otherwise.} \end{cases}$$
Fourthly, the rotation matrix $R$ and the translation vector $T$ were computed. For each point $x_j$ of the point cloud $X$, the corresponding point was constructed as follows:
$$y_j = \frac{1}{\sum_{k=1}^{K} m_{j,k}} \sum_{k=1}^{K} m_{j,k} \, y_k$$
Singular value decomposition (SVD) was then utilized to solve for $R$ and $T$. Since not all $x_j$ have reliable corresponding points, the weight $w_j = \sum_{k=1}^{K} m_{j,k}$ was introduced so that each correspondence contributes to the solution in proportion to its confidence.
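The following NumPy sketch assembles the soft matching matrix of Equation (5) and solves the weighted rigid fit of Equation (7) via SVD; it is a simplified stand-in for the learned pipeline, and the sanity check uses a hard identity matching.

```python
import numpy as np

def match_matrix(FX, FY, alpha, beta):
    """Soft matching matrix of Equation (5):
    m_jk = exp(-beta * (||F_xj - F_yk||_2 - alpha))."""
    d = np.linalg.norm(FX[:, None, :] - FY[None, :, :], axis=-1)
    return np.exp(-beta * (d - alpha))

def weighted_rigid_fit(X, Y, M):
    """Soft correspondences of Equation (7), weights w_j = sum_k m_jk,
    then a weighted SVD (Kabsch) solve for R and T."""
    w = M.sum(axis=1)                        # confidence of each source point
    Yc = (M @ Y) / w[:, None]                # soft correspondences y_j
    mx = (w[:, None] * X).sum(0) / w.sum()   # weighted centroids
    my = (w[:, None] * Yc).sum(0) / w.sum()
    H = (w[:, None] * (X - mx)).T @ (Yc - my)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # keep a proper rotation
    R = Vt.T @ S @ U.T
    return R, my - R @ mx

# Sanity check with a hard one-to-one matching (identity matrix).
X = np.random.rand(50, 3)
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
Y = X @ R_true.T + np.array([0.1, 0.2, 0.3])
R, t = weighted_rigid_fit(X, Y, np.eye(50))
print(np.allclose(R, R_true), np.round(t, 3))  # True [0.1 0.2 0.3]
```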
Finally, we give the expression of the loss function. It is defined as the distance between the source point cloud transformed by the real transformation $\left( R_{gt}, t_{gt} \right)$ and the same cloud transformed by the predicted transformation $\left( R_{pred}, t_{pred} \right)$, i.e.,
$$\mathcal{L}_{reg} = \frac{1}{J} \sum_{j}^{J} \left\| \left( R_{gt} x_j + t_{gt} \right) - \left( R_{pred} x_j + t_{pred} \right) \right\|.$$
Then, the matching matrix constraint was added to avoid the network marking most points as outliers, namely
$$\mathcal{L}_{inlier} = -\frac{1}{J} \sum_{j}^{J} \sum_{k}^{K} m_{jk} - \frac{1}{K} \sum_{k}^{K} \sum_{j}^{J} m_{jk}.$$
Therefore, the ultimate loss function is defined as
$$\mathcal{L}_{total} = \mathcal{L}_{reg} + \lambda \, \mathcal{L}_{inlier},$$
where λ is the regularization coefficient. Figure 4 presents the processing flow of the 3D registration technology.
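Equations (8)–(10) translate directly into code, as in the sketch below; the coefficient value lam = 0.01 is an assumption.

```python
import torch

def registration_loss(X, R_gt, t_gt, R_pred, t_pred, M, lam=0.01):
    """Total loss of Equation (10), built from Equations (8) and (9)."""
    # Eq. (8): per-point distance between the two transformed source clouds.
    reg = ((X @ R_gt.T + t_gt) - (X @ R_pred.T + t_pred)).norm(dim=1).mean()
    # Eq. (9): reward matched points so the network does not mark most
    # of them as outliers.
    inlier = -M.sum(dim=1).mean() - M.sum(dim=0).mean()
    return reg + lam * inlier
```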

2.3. Surface Reconstruction of Large-Scale Point Clouds

The reconstruction of triangular mesh surfaces from 3D point clouds is an important basis for modeling 3D objects. In this subsection, a deep-learning-based Delaunay triangulation surface reconstruction network was adopted [24]. The algorithm not only effectively recovers geometric details of complex topological structures, yielding a highly complete 3D model of the object, but is also suitable for the surface reconstruction of large-scale point cloud data.
Compared with the traditional Delaunay triangulation surface reconstruction methods based on graph segmentation, the deep learning network used here predicts the inner and outer labels of the Delaunay tetrahedra directly from the point cloud and its Delaunay triangulation, without needing any visibility information. This avoids the complex multi-layer surfaces that arise from insufficient visibility information, and the method can be applied to any point cloud. The specific algorithm flow is as follows:
(1) Geometry feature extraction
Local k-neighborhoods were applied to encode local geometric features for each point; see the sketch after this list. Within a k-neighborhood, each point has a tangent plane perpendicular to its normal. The signed distance between the current point and these k planes was calculated, and the point normal was decomposed relative to the tangent planes of the neighboring points as the input, which provides richer local geometric information about the surface.
(2) Feature augmented graph
After the point features were extracted, they were aggregated into the nodes of a graph model built on the Delaunay triangulation, constructing a feature-augmented graph. The nodes and edges of the graph correspond to tetrahedra and to the triangular faces between adjacent tetrahedra, respectively. The feature of each tetrahedron was constructed from the features of its four vertices via an attention mechanism, so the initial tetrahedral features in the graph model encode the inside/outside information of the tetrahedron. A multi-layer graph convolutional network was then applied to the graph to integrate more local graph structure constraints.
(3) Multiple-label supervision
The multi-label supervision strategy can train high-quality models without ground truth labels for the tetrahedra or visibility information for the tetrahedral surfaces. In addition, a neighborhood-consistency loss is used as a regularization term alongside the classification loss, which yields a smoother reconstructed surface.
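As referenced in step (1), the sketch below computes the signed distances from each point to the tangent planes of its k nearest neighbors; the neighborhood size k = 8 and the stand-in normals are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_geometry_features(pts, normals, k=8):
    """Signed distances from each point to the tangent planes of its k
    nearest neighbors (a sketch of step (1); k = 8 is assumed)."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k + 1)   # the nearest hit is the point itself
    idx = idx[:, 1:]
    # Plane through neighbor q with normal n; signed distance of p: (p - q) . n
    diff = pts[:, None, :] - pts[idx]                     # (N, k, 3)
    return np.einsum('nkd,nkd->nk', diff, normals[idx])   # (N, k)

pts = np.random.rand(100, 3)                     # stand-in point cloud
normals = np.tile([0.0, 0.0, 1.0], (100, 1))     # stand-in normals
print(local_geometry_features(pts, normals).shape)  # (100, 8)
```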

2.4. Accuracy Improvement of Thread Shape in Three-Dimensional Reconstructed Model

Considering that detecting black-crested threads requires high-precision thread shape curves, in this section, a method based on MLP to improve the accuracy of the thread shape curve in the 3D reconstruction model is discussed.
First of all, a mathematical model for thread shape has to be established. Due to the complexity of the tooth shape of the tapered thread, it is difficult to find a definite function relationship to describe it. Considering the similarity of its shape to a square wave in signal processing, the Fourier transform method was applied to establish an analytical model of the thread tooth shape.
Fourier transform is a linear integral transform that is widely used in physics, engineering, information science, mathematics, signal processing, statistics, cryptography, and other fields [25]. Any function f(t) with a period of T can be represented by an infinite series composed of sine and cosine functions, that is,
$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos n\omega t + b_n \sin n\omega t \right)$$
where $a_0$ is a constant term, $\omega = 2\pi / T$ is the fundamental frequency, and $a_n$ and $b_n$ denote the Fourier coefficients, which can be represented as
$$a_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \cos n\omega t \, dt, \qquad b_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin n\omega t \, dt$$
According to Euler’s formula, the sine and cosine functions can be written in the form of complex numbers, and substituting into Equation (11) gives
$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \frac{1}{2} \left[ \left( a_n - i b_n \right) e^{i n \omega t} + \left( a_n + i b_n \right) e^{-i n \omega t} \right] = c_0 + \sum_{n=1}^{\infty} \left( c_n e^{i n \omega t} + d_n e^{-i n \omega t} \right)$$
where
$$c_0 = \frac{a_0}{2}, \quad c_n = \frac{a_n - i b_n}{2}, \quad d_n = \bar{c}_n = \frac{a_n + i b_n}{2}$$
Substituting Equation (13) into Equation (14) gives
$$c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-i n \omega t} \, dt, \qquad d_n = \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{i n \omega t} \, dt$$
Substituting Equation (15) into Equation (13) after unification gives
$$f(t) = \sum_{n=-\infty}^{+\infty} C_n e^{i n \omega t}, \qquad C_n = \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-i n \omega t} \, dt$$
In Equation (16), $n = 0, \pm 1, \pm 2, \ldots$, $C_n$ is the continuous Fourier transform (CFT) of the function $f(t)$, and the $C_n$ constitute the spectrum of the original function $f(t)$. In practice, due to the limitations of measurement equipment, the acquired signals are often discrete sequences. For a discrete signal sequence of length $N$, Equation (16) becomes
$$x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{i k \omega n}, \qquad X_k = \sum_{n=0}^{N-1} x(n) e^{-i k \omega n}$$
Equation (17) is the basic equation of the discrete Fourier transform: $X_k$ represents the discrete Fourier transform (DFT) of the sequence $x(n)$, and the $X_k$ constitute the spectrum of the original sequence $x(n)$.
Since the projection curve of the API oil round pipe thread tooth shape in the longitudinal direction of the pipe body is periodic, the method of discrete Fourier transform is applicable, which can be applied to decompose it in the frequency domain space to study the periodicity of the tooth shape projection curve. Next, the analytical formula of the tooth shape curve of the API oil pipe round thread was derived based on the discrete Fourier transform method.
Equation (17) can be further rewritten into the form of sine and cosine functions to obtain
$$x(n) = \sum_{k=0}^{N/2} R_k \cos \frac{2\pi k n}{N} + \sum_{k=0}^{N/2} I_k \sin \frac{2\pi k n}{N}$$
According to the trigonometric function relationship, Equation (18) can be further written as
$$F = \sum_k A_k \sin \left( 2\pi \omega_k x + \varphi_k \right) = \sum_k A_k \sin \left( \frac{2\pi k}{L} x + \varphi_k \right)$$
where $L$ is the total sampling length of the thread and satisfies $L = N h = N / f_s$, $h$ denotes the sampling interval, $f_s$ is the sampling frequency, $x$ is the tooth shape coordinate, $\omega_k = k / L$ is the tooth shape characteristic frequency, and $A_k$ and $\varphi_k$ are the amplitude and phase corresponding to $\omega_k$, respectively.
From Equation (19), the amplitude spectrum and phase spectrum of the thread can be represented as
$$A_k = F_s \left( \omega_k \right), \qquad \varphi_k = \Psi_s \left( \omega_k \right)$$
For tapered threads, their spectrum and phase spectrum can be decomposed into a thread spectrum describing the tooth shape and the taper spectrum describing the taper, that is,
$$F_s = F_t + F_r, \qquad \Psi_s = \Psi_t + \Psi_r$$
where $F_t$ is the taper spectrum, $F_r$ denotes the thread spectrum, $\Psi_t$ is the taper phase spectrum, and $\Psi_r$ signifies the thread phase spectrum.
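The following NumPy sketch mirrors this pipeline on a synthetic square-wave-like profile: it takes the DFT of the sampled curve, reads off the amplitude and phase spectra, and rebuilds the profile (here in the equivalent cosine form of Equation (19)); the sampling length and tooth count are invented values, not measured data.

```python
import numpy as np

N, L = 256, 12.7                    # samples and sampling length in mm (assumed)
x = np.linspace(0.0, L, N, endpoint=False)
profile = 0.5 * np.sign(np.sin(2 * np.pi * 5 * x / L)) + 0.5  # 5 square "teeth"

X = np.fft.rfft(profile)
A = 2.0 * np.abs(X) / N             # amplitude spectrum A_k
A[0] /= 2.0                         # DC and Nyquist bins appear only once
A[-1] /= 2.0                        # (N is even here)
phi = np.angle(X)                   # phase spectrum phi_k

# Rebuild F(x) = sum_k A_k cos(2*pi*k*x/L + phi_k), cf. Equation (19).
k = np.arange(len(A))
F = (A[:, None] * np.cos(2 * np.pi * k[:, None] * x[None, :] / L
                         + phi[:, None])).sum(axis=0)
print(np.abs(F - profile).max())    # ~1e-15: the spectrum reproduces the curve
```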
An ANN-based regression method was adopted to correct the error in the 3D model and achieve a high-precision reconstruction. This method uses a neural network to perform a regression analysis on the thread contour of the 3D model so that its error relative to the coordinate measuring machine (CMM) results is minimized.
The first step is to prepare the dataset. First, we measured the threaded joint with white light according to our method, generated a 3D model, and sampled the tooth shape along the generatrix direction. The sampling interval was one pitch, giving a discrete point set for each complete thread tooth. A discrete Fourier transform was performed on each point set according to Equation (18) to obtain the thread spectrum and phase spectrum, which were then substituted into Equation (20) to obtain the analytical model of the thread tooth shape. A set of equidistant coordinates $x_i = x_{i-1} + \Delta x$ was substituted into this model to finally obtain the analytical thread tooth contour curve as a training sample. The 3D model was sampled and analyzed along the circumferential direction with a step size of 1° to form the training sample set. Then, each threaded tooth of the joint was measured and analyzed with a CMM in the same way to serve as a label sample. Finally, the training samples and label samples of the same threaded teeth were paired one to one to obtain the dataset for training the MLP model [26]. In this paper, the dataset obtained by sampling and analyzing one joint contained about 3600 samples, and the shape of each sample (in the format of the PyTorch or PaddlePaddle framework) was (1, 1, 1, 256). The dataset was divided into training, validation, and test sets at a ratio of 7:2:1. The process and the MLP model of the method are shown in Figure 5.
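In code, the pairing and 7:2:1 split might be organized as below; the arrays are random stand-ins for the analytical (sample) and CMM (label) profiles.

```python
import numpy as np

# Hypothetical pairing of analytical profiles (training samples) with CMM
# profiles (labels), followed by the 7:2:1 split described above.
n_samples, n_pts = 3600, 256
samples = np.random.rand(n_samples, 1, 1, 1, n_pts).astype(np.float32)
labels = np.random.rand(n_samples, 1, 1, 1, n_pts).astype(np.float32)

idx = np.random.permutation(n_samples)
n_tr, n_va = int(0.7 * n_samples), int(0.2 * n_samples)
train_idx, val_idx, test_idx = np.split(idx, [n_tr, n_tr + n_va])
print(len(train_idx), len(val_idx), len(test_idx))  # 2520 720 360
```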
The second step was to build and train the neural network model. We built a fully connected network with 3 hidden layers to perform the regression analysis on the thread contour of the 3D model, and the error was evaluated using the MSE loss function, which in our study can be written as
$$L = \frac{1}{m} \sum_i \left\| F - F' \right\|^2$$
where $m$ denotes the number of samples and $F$ and $F'$ represent the analytical thread contour curves from the 3D model and from the three-coordinate measurement, respectively. We trained the model using a gradient-descent-based approach [26], choosing the AdamW optimizer [27] with a learning rate of 0.0001, a batch size of 10, a dropout probability of 0.2, and a loss threshold of 1 × 10−4.
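A compact sketch of such a regressor and training loop is given below; the hidden width of 512, the ReLU activations, the epoch cap, and the stand-in data are assumptions, while the optimizer, learning rate, batch size, dropout probability, and loss threshold follow the text.

```python
import torch
import torch.nn as nn

class ThreadMLP(nn.Module):
    """Fully connected regressor with 3 hidden layers (width 512 assumed)."""
    def __init__(self, n_pts=256, hidden=512, p_drop=0.2):
        super().__init__()
        layers, c = [], n_pts
        for _ in range(3):
            layers += [nn.Linear(c, hidden), nn.ReLU(), nn.Dropout(p_drop)]
            c = hidden
        layers.append(nn.Linear(c, n_pts))
        self.net = nn.Sequential(*layers)

    def forward(self, x):                  # x: (B, 1, 1, 1, 256)
        return self.net(x.flatten(1)).view_as(x)

model = ThreadMLP()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)  # AdamW, lr = 0.0001
loss_fn = nn.MSELoss()                                # MSE loss described above
loader = [(torch.randn(10, 1, 1, 1, 256),             # batch size 10;
           torch.randn(10, 1, 1, 1, 256))]            # stand-in profiles

for epoch in range(1000):                 # epoch cap is an assumption
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
    if loss.item() < 1e-4:                # stop at the loss threshold
        break
```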

3. The Defect Inspection of the Black-Crested Thread

Black-crested threads can be judged in two ways. On the one hand, the rolled black surface and the cut surface can be distinguished by their color difference. On the other hand, black-crested threads are mostly found on the last few threads of the whole thread (i.e., at the big end). To make a connection appear free of black-crested threads, factories sometimes grind off the rolled black surfaces so that the threads look fully crested; it is therefore also necessary to measure the thread height. Generally speaking, there are two main ideas for detecting black-crested threads using white light. The first is to qualitatively identify them through three-dimensional reconstruction (color difference). The second is to use an accurate reconstruction algorithm to check whether the height of a possibly polished thread near the big end falls below the 1 mm ± 0.03 mm tolerance; if so, the thread is also judged to be a (polished) black-crested thread.
The inspection standard for oil casing threads is that two black-crested threads are permitted within the Lc length, provided that they do not exceed 25% of the pipe circumference; all other threads within the Lc length should be full-crest threads. In this section, we study and analyze the BGT2 4-1/2 oil casing. Its allowed thread depth tolerance is ±0.03 mm, so the height of a qualified thread is 1 mm ± 0.03 mm.
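These acceptance rules reduce to a few comparisons, as sketched below; note that the 25% limit is applied here to each black-crested thread, which is one reading of the standard's wording.

```python
def is_full_crest(height_mm, nominal=1.0, tol=0.03):
    """Height check against the 1 mm +/- 0.03 mm tolerance stated above."""
    return abs(height_mm - nominal) <= tol

def casing_qualifies(black_crest_lengths_mm, circumference_mm):
    """At most two black-crested threads within Lc, each within 25% of the
    pipe circumference (one reading of the acceptance rule above)."""
    limit = 0.25 * circumference_mm
    return (len(black_crest_lengths_mm) <= 2
            and all(length <= limit for length in black_crest_lengths_mm))

# Values reported later in this section: two black-crested threads of 22.3 mm
# and 40.5 mm on a 358.9 mm circumference (25% limit: 89.7 mm) -> qualified.
print(is_full_crest(0.98))                    # True
print(casing_qualifies([22.3, 40.5], 358.9))  # True
```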
Specifically, Figure 6, Figure 7 and Figure 8 show the three-dimensional reconstruction of the oil casing thread. This indicates that we can effectively reconstruct the oil casing thread in three dimensions and perform defect detection throughout the entire oil casing. Figure 6 displays an obvious black-crested thread, Figure 7 shows the less-obvious black-crested thread, and Figure 8 depicts the qualified thread. Differences can be observed in the reconstructed thread height.
Predictions of the trained MLP model for thread profiles are shown in Figure 9. The model output is in good agreement with the three-coordinate measurement results, indicating that the thread contour obtained by our method achieves the same accuracy as the coordinate measurement. Based on the resulting high-precision thread shape curve, black-crested threads can be detected by analyzing the height deviation of the thread. In Figure 9, the measured height of the oil casing threads falls within the acceptable range, namely 1 mm ± 0.03 mm.
In addition, the consistency of the oil casing thread height values displayed in Table 1 also demonstrates the effectiveness of the proposed technology. It is therefore possible to quickly and effectively detect the black-crested threads of an oil casing connection and record their proportion among all of its threads, so as to judge whether the connection is qualified. Further, with the novel three-dimensional reconstruction technology, one can determine whether the entire oil casing meets the eligibility criteria.
In summary, the above data indicate that by integrating deep learning into high-precision three-dimensional reconstruction technology, the technique can effectively detect the black-crested threads of oil casing connections.
In addition, the entire oil casing connection with black-crested threads, obtained from the novel three-dimensional reconstruction technology, is shown in Figure 10. The black-crested threads arise mainly because the local machining stock of the thread is seriously insufficient, resulting in a poor finish. According to our detection, the minimum length of the full-crest thread Lc is 35.8 mm. There are two black-crested threads in the Lc range, with lengths of 22.3 mm and 40.5 mm, respectively. Furthermore, the circumference of the oil casing is 358.9 mm, 25% of which is 89.7 mm. The oil casing is qualified because the length of each black-crested thread is less than 25% of the circumference of the oil casing.

4. Conclusions

In our study, we integrated deep learning into a high-precision three-dimensional reconstruction technology and developed a detection technology for black-crested threads. We used a sample size of 800 to train our U-Net-like model as the three-dimensional reconstructor. An MLP model was proposed to improve the accuracy of the three-dimensionally reconstructed thread profile to the level of three-coordinate measurements. With a high-precision thread shape model, black-crested threads can be detected by measuring the thread height.
Our experimental results demonstrated that the novel method effectively detects the black-crested threads of oil casing connections and captures their proportion in the entire sample to accurately assess quality. The minimum length of the full-crest thread Lc is 35.8 mm. There are two black-crested threads in the Lc range, with lengths of 22.3 mm and 40.5 mm, respectively. Furthermore, the circumference of the oil casing is 358.9 mm, 25% of which is 89.7 mm. The oil casing is qualified because the length of each black-crested thread is less than 25% of the circumference of the oil casing.
Although our model performs well in detecting black-crested threads, its ability to generalize may be limited for other types of threads or defects; practical applications involving different threads and defects may require further tuning and training. While the CNN model utilized in this study proved effective, other more advanced image-intelligence algorithms, such as the Transformer or DETR, may yield better performance. Therefore, further research and development is needed to validate and expand the potential of this innovative technology; future advancements can enhance the efficiency and accuracy of black-crested thread detection in tubing and casing threads, ultimately benefiting the exploration efforts of oil companies.

Author Contributions

Conceptualization, Z.H. and X.B.; methodology, X.B.; software, S.S.; validation, Z.Y., N.F. and Y.A.; formal analysis, L.X.; investigation, L.X.; resources, Z.C.; data curation, Z.C.; writing—original draft preparation, X.B.; writing—review and editing, X.B.; visualization, X.B.; supervision, Z.H.; project administration, Z.H.; funding acquisition, Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No: 51974272).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Qian, N.; Lo, C.-Y. Optimizing camera positions for multi-view 3D reconstruction. In Proceedings of the 2015 International Conference on 3D Imaging (IC3D), Liege, Belgium, 14–15 December 2015. [Google Scholar]
  2. Tian, Y.Z.; Hu, H.J.; Cui, H.Y.; Yang, S.C.; Qi, J.; Xu, Z.M.; Li, L. Three-dimensional surface microtopography recovery from a multifocus image sequence using an omnidirectional modified Laplacian operator with adaptive window size. Appl. Opt. 2017, 56, 6300–6310. [Google Scholar] [CrossRef] [PubMed]
  3. Tian, Y.Z.; Cui, H.Y.; Pan, Z.Y.; Liu, J.R.; Yang, S.C.; Liu, L.L.; Wang, W.B.; Li, L. Improved three-dimensional reconstruction algorithm from a multifocus microscopic image sequence based on a nonsubsampled wavelet transform. Appl. Opt. 2018, 57, 3864–3872. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, X.; Wu, Q.; Wang, S. Research on 3D Reconstruction Based on Multiple Views. In Proceedings of the 2018 13th International Conference on Computer Science & Education (ICCSE), Colombo, Sri Lanka, 8–11 August 2018. [Google Scholar]
  5. Nguyen, H.; Ly, K.L.; Nguyen, T.; Wang, Y.Z.; Wang, Z.Y. MIMONet: Structured-light 3D shape reconstruction by a multi-input multi-output network. Appl. Opt. 2021, 60, 5134–5144. [Google Scholar] [CrossRef] [PubMed]
  6. Zhang, S.H.; Yang, Y.X.; Shi, W.W.; Feng, L.Q.; Jiao, L.C. 3D shape measurement method for high-reflection surface based on fringe projection. Appl. Opt. 2021, 60, 10555–10563. [Google Scholar] [CrossRef] [PubMed]
  7. Cai, Y.D.; Liu, F.; Shao, X.P.; Cai, G.C. Impact of color on polarization-based 3D imaging and countermeasures. Appl. Opt. 2022, 61, 6228–6233. [Google Scholar] [CrossRef] [PubMed]
  8. API Spec 5B: Threading, Gauging, and Thread Inspection of Casing, Tubing, and Line Pipe Threads; American Petroleum Institute: Washington, DC, USA, 2017. [Google Scholar]
  9. API Spec 5B: Specification for Threading, Gauging and Thread Inspection of Casing, Tubing, and Line Pipe Threads; American Petroleum Institute: Washington, DC, USA, 2008. [Google Scholar]
  10. Yuan, L.; Luo, M.; Cheng, Y. Analyses of “black-crest thread” phenomena on tubing and casing thread. Bao Gang Technol. 2016, 1, 69–72. [Google Scholar]
  11. Ng, H. Automatic thresholding for defect detection. Pattern Recognit. Lett. 2006, 27, 1644–1649. [Google Scholar] [CrossRef]
  12. Kumar, A. Computer-vision-based fabric defect detection: A survey. IEEE Trans. Ind. Electron. 2008, 55, 348–363. [Google Scholar] [CrossRef]
  13. Ngan, H.; Pang, G.; Yung, N. Automated fabric defect detection—A review. Image Vis. Comput. 2011, 29, 442–458. [Google Scholar] [CrossRef]
  14. Ong, C.; Krummenacher, G.; Koller, S.; Kobayashi, S.; Buhmann, J. Wheel defect detection with machine learning. IEEE Trans. Intell. Transp. Syst. 2017, 19, 1176–1187. [Google Scholar]
  15. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In Computer Vision—ECCV 2020; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar]
  16. Wang, H.; Dang, L.; Li, Y.; Nguyen, T.; Moon, H. DefectTR: End-to-end defect detection for sewage networks using a transformer. Constr. Build. Mater. 2022, 325, 126584. [Google Scholar]
  17. Tulbure, A.; Dulf, E. A review on modern defect detection models using DCNNs—Deep convolutional neural networks. J. Adv. Res. 2022, 35, 33–48. [Google Scholar] [CrossRef] [PubMed]
  18. Downey, A.; Fu, Y.; Yuan, L.; Zhang, T.; Pratt, A.; Balogun, Y. Machine learning algorithms for defect detection in metal laser-based additive manufacturing: A review. J. Manuf. Process. 2022, 75, 693–710. [Google Scholar]
  19. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  20. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging 2020, 39, 1856–1867. [Google Scholar] [CrossRef] [PubMed]
  21. Aoki, Y.; Goforth, H.; Srivatsan, R.; Lucey, S. PointNetLK: Robust & Efficient Point Cloud Registration Using PointNet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 7163–7172. [Google Scholar]
  22. Bai, X.Y.; Luo, Z.X.; Zhou, L.; Fu, H.B.; Quan, L.; Tai, C.L. D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 6359–6367. [Google Scholar]
  23. Jian, Z.; Yew, L.; Lee, G.H. RPM-Net: Robust Point Matching Using Learned Features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11824–11833. [Google Scholar]
  24. Luo, Y.; Mi, Z.; Tao, W. DeepDT: Learning Geometry from Delaunay Triangulation for Surface Reconstruction. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 2277–2285. [Google Scholar]
  25. Donoghue, W.F. Distributions and Fourier Transforms; Academic Press: Cambridge, MA, USA, 1971. [Google Scholar]
  26. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  27. Loshchilov, I.; Hutter, F. Fixing Weight Decay Regularization in Adam. arXiv 2017, arXiv:1711.05101v2. [Google Scholar]
Figure 1. (a) The oil casing thread with black-crested threads. (b) The corresponding locally enlarged figure.
Figure 2. The overall algorithm figure of defect detection.
Figure 3. Schematic diagram of the high-precision 3D reconstruction process of the object.
Figure 4. Three-dimensional point cloud registration process.
Figure 5. Main process of the accuracy-improving algorithm for thread shape.
Figure 6. The 3D reconstruction of the obvious black-crested thread.
Figure 7. The 3D reconstruction of the less-obvious black-crested thread.
Figure 8. The 3D reconstruction of the qualified thread.
Figure 9. The transverse figure of the 3D reconstruction of the oil casing thread under different conditions, in which the black line denotes the result obtained from the novel 3D reconstruction technology and the red circles represent the results from the Coordinate Measuring Machine. (a–e) show cases with black-crested threads and (f–j) show cases of qualified oil casing threads.
Figure 10. Black-crested thread of the oil casing thread.
Table 1. Comparisons of thread crest height between this technique and the traditional measurement technique (mm).

Measurement point:                       1       2       3       4       5       6       7       8       9       10
The novel 3D reconstruction technology:  0.4216  0.5129  0.6655  0.8211  0.9486  0.9828  0.9799  0.9792  0.9823  0.9825
The Coordinate Measuring Machine:        0.4215  0.5130  0.6654  0.8208  0.9489  0.9826  0.9799  0.9793  0.9825  0.9825