Article

Correction Method for Perspective Distortions of Pipeline Images

1 School of Mechanical Engineering, Hubei University of Technology, Wuhan 430068, China
2 Graduate School of Sciences and Technology for Innovation, Tokushima University, Tokushima 770-8506, Japan
* Author to whom correspondence should be addressed.
Electronics 2024, 13(15), 2898; https://doi.org/10.3390/electronics13152898
Submission received: 16 June 2024 / Revised: 16 July 2024 / Accepted: 22 July 2024 / Published: 23 July 2024

Abstract
Severe perspective distortion is common in pipeline images acquired for medium-diameter pipeline defect detection by the panoramic image unwrapping method; it is caused by the camera's optical axis deviating from the pipeline's center and results in low-quality image unwrapping and stitching. To solve this problem, a novel correction method for reducing perspective distortion in pipeline images is proposed for pipeline defect detection. Firstly, the method enhances the edges of unevenly illuminated regions within a pipeline to facilitate image segmentation and identify the key points necessary for correcting perspective distortion. Then, a six-feature-point extraction method is proposed for a circular target to establish the projection relationship between the extracted feature points and their mapped points on a reference circle. Finally, a perspective matrix is constructed to complete the perspective transformation correction of the distorted images. The results show that the average correction rate and the average relative error of the proposed method reach 90.85% and 1.31%, respectively. The study innovatively uses the enhancement of uneven illumination to recover distorted edge information and proposes an extraction method using a reference circle and six key feature points to build a mapping model. It provides a novel way to obtain superior images for pipeline detection and lays a solid foundation for subsequent high-quality pipeline image stitching.

1. Introduction

In recent years, pipelines have played a critical role in transporting natural gas, oil, and water to urban areas. However, some of these pipelines have been operating for several decades, resulting in chronic or minor leakage of hazardous substances. Failure to address these issues within prescribed time limits can lead to extensive environmental pollution or even catastrophic explosions [1]. To mitigate the severe consequences of pipeline leakage, monitoring pipeline conditions closely is imperative. Fortunately, pipeline imaging offers a comprehensive view of pipelines, making it a crucial tool for safeguarding pipeline integrity and averting the adverse effects of pipeline failures. To ensure the prolonged and safe use of pipelines, substantial research has been conducted on pipeline systems. Notably, Behari et al. [1] have conducted extensive research on chronic pipeline leaks and critically evaluated various detection techniques, elucidating the limitations of popular methods such as real-time transient monitoring (RTTM) and sequential probability ratio testing (SPRT). Given the challenges of inspecting small-diameter pipelines directly, the rapid evolution of machine vision technology has popularized panoramic image-based detection methods due to their efficiency and convenience [2]. Furthermore, this approach facilitates the quantitative analysis of pipeline defects [3,4].
Panoramic images of pipelines are usually obtained through image unwrapping and stitching [5,6]. However, it is difficult to ensure that the camera’s optical axis coincides with the center of the pipeline when the robot is in motion. This would result in significant perspective distortion in the captured images, which makes it difficult to unwrap and stitch the images subsequently. Hence, perspective distortion correction of a pipeline’s image is a key step to ensure the accuracy and quality of the image unwrapping and stitching [7,8,9].

1.1. Correction Methods of Pipeline Image Distortion

Many studies have been conducted to correct the perspective distortion of pipeline images. Huang et al. [9] proposed a correction algorithm for pipeline images with a center offset. They established a mathematical model of imaging under the offset and introduced a center offset vector to revise the displacement offset. However, this correction algorithm ignores the displacement and angle offsets that coexist in actually collected images. Qian et al. [10] established projection models of four different poses, including positions with no displacement but an angle offset, no angle offset but a displacement offset, and both displacement and angle offsets, to correct the offset image. Bu et al. [11] used a circle-center fitting method to obtain the center parameters of a panoramic pipeline image, unwrapping the image rapidly and eliminating the impacts of the displacement and angle offsets. However, this method does not consider that the perspective distortion becomes more significant with increasing displacement and angle offsets. Wu et al. [12] corrected the perspective distortion of an image by extracting the contour corner points of polygons to establish the coordinate relationships before and after perspective transformation; however, this method is not suitable for pipeline images without corner points. Many studies have used structured light and centralizers, directly or indirectly, to align the camera's optical axis with the center of the pipeline and thus achieve high-quality unwrapped and stitched pipeline images [13,14,15,16]. Nonetheless, these methods require expensive equipment that is costly to maintain and has low versatility, so they are rarely used in practical engineering.

1.2. Correction Methods of Perspective Transformation

There has been a breakthrough in correcting distorted images by perspective transformation in recent years. Ji et al. [17] presented a method to calibrate a circular pointer-type meter based on the Yolov5s network: the positions and values of the detected scale marks are used to fit the elliptic equation of the scale positions with the least squares method for perspective and rotation correction of the meter. Wang et al. [18] proposed an improved perspective transformation method based on the imaging principle of a projector and a camera, using a gradient-direction subpixel edge-fitting method to determine the perspective transformation matrix and correct a structured light image; this novel way of extracting structured-light edges provides a new idea for establishing a perspective transformation matrix. Hu et al. [19] proposed a contour-based automatic perspective correction algorithm for circular meters that estimates the perspective transformation matrix by fitting an ellipse, which suits images with minor distortion. Chen et al. [20] proposed a method for processing the perspective transformation through grayscale conversion, filtering, and edge and line detection, demonstrating that a perspective transformation matrix can be obtained through image edge enhancement.

1.3. The Perspective Distortion of Pipeline Image

The camera's optical axis will inevitably shift from the center of the pipeline during image acquisition, producing angle and displacement deviations of the image projected onto the two-dimensional plane of the measured object and thus causing perspective distortion. The main limitation of existing correction methods based on mathematical models is that they analyze and correct only the angle or the displacement offset and lack analysis of the real environment. In addition, these methods fail to analyze the characteristics of the various types of distorted images by thoroughly combining the projection model with the real environment, so they demonstrate good correction results only for specific offset situations. Although correction by perspective transformation is mature, it performs well only for images with many feature points, for which a mapping model is easy to establish. In most pipeline environments, selecting feature points and establishing mapping relationships are very difficult, which seriously degrades the results of such methods. Although most scholars have fundamentally solved the problem of the camera's optical axis not coinciding with the center of the pipeline by adding external devices such as centralizers and structured light, such equipment is expensive and difficult to apply in practical engineering.
To solve the above problems, a correction method for perspective distortion of pipeline images is proposed to obtain high-precision ideal images from the input distorted image. This method innovatively improves the mapping of key points by taking the unevenly illuminated area as the region of interest. Then, for the perspective transformation of circular targets, an innovative method for extracting six feature points is proposed, along with the establishment of a perspective transformation matrix. Firstly, the method enhances the edges of unevenly illuminated regions within the pipeline to facilitate image segmentation and identify the key points necessary for correcting perspective distortion. Then, a six-feature-point extraction method is proposed for the circular target to establish the projection relationship between the extracted feature points and the mapped points on the reference circle. Finally, the perspective matrix is constructed to complete the perspective transformation correction of the distorted images. The results show that the average correction rate and the average relative error of the proposed method reach 90.85% and 1.31%, respectively. In addition, the correction accuracy of the proposed method is about 4.32% higher than that of the method that only corrects the angular offset by establishing a projection model [10], and 3.69% higher than that of the method that can only correct the displacement offset [8]. The perspective distortion correction based on six feature points improves on other perspective transformation correction methods by about 2% [19]. This method can obtain a better pipeline image and reduce the workload of subsequent pipeline image stitching.

2. Theoretical Method

The theoretical perspective distortion correction method consists of four steps. Firstly, projection models are established to analyze the perspective situation of a pipeline image. Secondly, the area of uneven illumination is viewed as the outer contours and extracted by contrast enhancement and edge detection. Subsequently, the reference circle is established, and feature points are extracted based on outer contours to build a coordinate map model. Finally, the perspective transformation model is established to complete the correction process.

2.1. Introduction of Pipeline Robot

According to the characteristics of the pipeline environment, an adaptive pipe-diameter wheeled pipeline robot was designed, as shown in Figure 1a. The robot possesses the following advantages: (1) Waterproofing: the critical components of the robot, including the motor and the connection wires, are waterproof, so the robot can be used in shallow-water pipelines with diameters of 180–220 mm. In addition, a three-stage spiral shovel-cleaning pipeline robot was also built [3], as shown in Figure 1b; it can effectively clean a pipeline before the wheeled robot is deployed, reducing the influence of dirt during pipeline detection. (2) Adaptive pipe diameter: the telescopic function of the robot is achieved with an electric push rod, which allows the wheeled robot to expand freely within pipe diameters of 180–220 mm. (3) High security: because the pipeline environment is complex, an external power supply is used to power the pipeline robot, as shown in Figure 1a, after a complete analysis of the operating performance and signal-transmission requirements of the pipeline robot and the image acquisition equipment. One end of the cable is connected to the drive unit of the pipeline robot, and the other end is connected to the operating box, which embeds the power supply system and integrates the control functions of the pipeline robot. Compared with an onboard battery, this design is more straightforward, makes the pipeline robot easy to operate, allows it to run for a long time, and keeps technicians in contact with the robot so that tasks can be completed safely.

2.2. Establishment of the Projection Model

The wheeled pipeline robot is shown in Figure 1a. It will inevitably shake when it moves in a pipeline, resulting in deviation between the optical axis of the mounted camera and the center of the pipeline, which then distorts the collected image. As shown in Figure 2, a projection model was established to simulate the four possible poses of the camera in a pipeline and the distortion of the collected images under the corresponding poses.
Figure 2 shows the projection model of four poses, in which the pipeline center and the camera's optical axis are $P_m$ and $O_a$, respectively, and the projection plane is $V_p$. The upper and lower walls of the pipeline are $P_u$ and $P_d$, respectively; the two sets of vertices in the pipeline are $P_i$ $(i = 1, 2, 3, 4)$; the projection points of $P_i$ on the projection plane are $P_i'$ $(i = 1, 2, 3, 4)$; the image's principal point and the camera's optical center are $P_p$ and $O$, respectively; and $C_1$ and $C_2$ are the centers of the inner and outer contours, respectively.
The position of the camera changes in the following ways through the analysis of the above projection model:
In the ideal position, as shown in Figure 2a, the camera's optical axis $O_a$ is coaxial with the center of the pipeline $P_m$, and the principal point $P_p$ is at its intersection with the projection plane $V_p$. The image in this position is the most ideal, reflected in the concentric circles formed by the coincident centers of the inner and outer contours in the image. The unwrapped image in this position genuinely reflects the condition of the inner wall of the pipeline.
When there is only an angle offset of the camera, as shown in Figure 2b, the camera's optical center $O$ lies on the center of the pipeline $P_m$. The optical axis $O_a$ deviates from the center of the pipeline by an angle $\alpha$, and the projection plane $V_p$ shows a corresponding $\alpha$ offset to $V_p'$. In this position, the inner and outer contours of the image exhibit different degrees of perspective distortion: the inner contour distortion is slight, while the outer contour distortion is large. However, the two-dimensional projection model cannot truly describe the distortion on both sides of the pipeline's inner wall. By marking and fitting the inner and outer contours of the image at this position, the outer contour is found to conform to elliptical properties, as shown in Figure 3a.
As shown in Figure 2c, when the optical center $O$ is offset from the pipeline center $P_m$ by a distance $L$, a displacement offset exists between the optical axis $O_a$ and the center of the pipeline $P_m$. In this position, the projection on the projection plane $V_p$ presents two contours with different centers, and the distance between the two centers depends on the offset distance of the optical center.
In the real environment, the above situations do not occur alone; usually, there are both angle and displacement offsets, as shown in Figure 2d. When the distance between the optical center $O$ and the center of the pipeline is $L$ and the optical axis $O_a$ forms an angle $\alpha$ with the center of the pipeline, the projection plane $V_p$ displays a corresponding $\alpha$ offset to $V_p'$. At this time, the inner and outer contours are as shown in Figure 3b. Similar to the image with only an angle offset, the image shows perspective distortion with both angle and displacement offsets. The distortion of the inner contour is much smaller than that of the outer contour, and the outer contour conforms to the geometric properties of an ellipse: the ratio of the ellipse's long axis to its short axis depends on the camera's offset angle, while the relative position of the circle centers relates to the distance between the optical axis and the center of the pipeline. In this situation, the distortion dramatically degrades the quality of the unwrapped panoramic image.

2.3. Extraction of the Region of Interest (ROI)

In this paper, the ROI is the unevenly illuminated area that describes the perspective distortion of the image. When collecting images, the intensity of the radial light gradually attenuates as distance increases, producing the unevenly illuminated area in the pipeline image. The region shown in Figure 4 displays the same properties as the elliptical region formed by perspective distortion in the distorted image. However, this region is difficult to extract because of the slight gray gradient at its edge, so the image must be enhanced to highlight the edge information before the ROI can be extracted.
To extract the ROI accurately using the global threshold segmentation algorithm [14], this paper highlights the uneven illumination area of the camera’s light source in the pipeline through contrast enhancement, and then detects the edge and fits the image to complete the extraction.
In edge detection, the Canny algorithm is applied to calculate the gradient intensity and direction to find the edge in two steps [21,22]. As the edge and the gradient direction are perpendicular, the differences in the x and y directions can be obtained by (1). Then, the modulus and direction of the gradient vector can be determined according to (2), where the differences $g_m$ and $g_n$ are calculated using the 3 × 3 Sobel operator.
$g_m(m, n) = f(m + 1, n) - f(m, n), \quad g_n(m, n) = f(m, n + 1) - f(m, n)$ (1)

$g(m, n) = \sqrt{g_m^2(m, n) + g_n^2(m, n)}, \quad \theta(m, n) = \arctan\dfrac{g_n(m, n)}{g_m(m, n)}$ (2)

where $g_m(m, n)$ and $g_n(m, n)$ represent the differences in the x and y directions at point $(m, n)$, respectively; $f(m, n)$, $f(m + 1, n)$, and $f(m, n + 1)$ represent the pixel values at points $(m, n)$, $(m + 1, n)$, and $(m, n + 1)$, respectively; and $g(m, n)$ and $\theta(m, n)$ represent the gradient intensity and gradient direction, respectively.
For the obtained edge direction, the gradient directions are divided into eight directions (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°), which are used to obtain the neighboring pixels along a given pixel's gradient direction. To eliminate spurious edge responses, non-maximum suppression of the gradient amplitude is performed via (3). All points of the gradient matrix are traversed: if the gradient intensity $M(m, n)$ of a pixel is not the largest compared with the intensity $T$ of each of the two pixels before and after it in the gradient direction, the pixel value is set to 0, meaning the pixel is not an edge. Finally, the noise points and false edges of the non-maximum-suppressed image are eliminated by setting upper and lower threshold bounds, and the real edges are determined and linked via the double-threshold method.
$M_T(m, n) = \begin{cases} M(m, n), & \text{if } M(m, n) > T \\ 0, & \text{otherwise} \end{cases}$ (3)

where $M_T(m, n)$ represents the gradient intensity after non-maximum suppression; $M(m, n)$ represents the gradient intensity at pixel $(m, n)$; and $T$ represents the larger gradient intensity of the two neighboring pixels along the gradient direction.
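For concreteness, the gradient and non-maximum-suppression steps of (1)–(3) map directly onto standard OpenCV/NumPy calls. The sketch below is illustrative rather than the authors' implementation; the hysteresis thresholds are assumptions, since the paper does not report its values.

```python
import cv2
import numpy as np

def gradient(gray: np.ndarray):
    """Gradient intensity and direction of (1)-(2) via 3x3 Sobel differences."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # g_m(m, n) in (1)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # g_n(m, n) in (1)
    magnitude = np.hypot(gx, gy)                     # g(m, n) in (2)
    direction = np.arctan2(gy, gx)                   # theta(m, n) in (2)
    return magnitude, direction

def detect_edges(gray: np.ndarray, low: int = 50, high: int = 150) -> np.ndarray:
    """cv2.Canny bundles (1)-(3): Sobel gradients, non-maximum suppression,
    and double-threshold hysteresis. The thresholds here are placeholders."""
    return cv2.Canny(gray, low, high)
```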

2.4. Establishment of the Reference Circle and Extraction of Feature Points

For the establishment of the perspective transformation matrix, Ji et al. [17] and Hu et al. [19] estimated the matrix by fitting the elliptical contour of a meter image, which is not appropriate for pipeline images. Kyungkoo [23] established the perspective matrix by extracting the four corner coordinates of a rectangular vehicle license plate, which is also not appropriate for the circular contours of pipeline images. Therefore, to establish the coordinate mapping and eliminate perspective distortion, the feature points on the ROI of a pipeline image need to be put in correspondence with a reference circle. The inner circle exhibits hardly any perspective distortion because of its small projection angle [24]. Therefore, edge extraction and the least squares method are used to fit the inner circle contour and extract its center, as shown in (4).
$f = \sum_i \left[ (x_i - x_c)^2 + (y_i - y_c)^2 - R^2 \right]^2$ (4)

where $f$ represents the sum, over all points $(x_i, y_i)$ on the inner circle, of the squared deviations between their distance to the center $(x_c, y_c)$ and the radius $R$ of the fitted circle.
In (4), the circle center $(x_c, y_c)$ and the radius $R$ are unknown parameters, and solving for them directly is cumbersome. However, extracting the circle center from the minimum fitting circle can be treated as the problem of finding the steepest descent direction of the gradient. Therefore, (4) can be simplified as
$f_x = \sum_i (x_i - x_c)^2, \quad f_y = \sum_i (y_i - y_c)^2$ (5)

where $f_x$ and $f_y$ represent the sums of squared distances in the x and y directions, respectively.
After the center of the reference circle is confirmed, the radius of the reference circle determines the correction result. If the radius were set to an extreme point of the outer contour, the mapping points would not be accurate, because only the nearest and farthest points from the circle center would be considered. Therefore, (6) is used to obtain the maximum and minimum distances, denoted $d_{\max,\min}$, between the inner and outer contours. Then, the center of the inner circle is taken as the center of the reference circle, and the average distance between the inner and outer circles is calculated as the radius to establish the reference circle, as shown in Figure 5.
$d_{\max,\min} = \sqrt{(x_r - \mathrm{Col}_n)^2 + (y_r - \mathrm{Row}_n)^2}$ (6)

where $d_{\max,\min}$ represents the distance extremes between any point $(x_r, y_r)$ on the inner contour and any point $(\mathrm{Col}_n, \mathrm{Row}_n)$ on the outer contour.
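A minimal sketch of the reference-circle construction, assuming NumPy and contours given as N×2 point arrays; this is not the authors' code. The simplified fit (5) reduces to the centroid of the inner-contour points. The "average distance" radius is read here as the mean of the extreme distances, measured from the fitted center to the outer contour, whereas (6) literally measures between the two contours; the paper does not spell out the averaging, so this is one plausible interpretation.

```python
import numpy as np

def fit_inner_center(inner_pts: np.ndarray) -> np.ndarray:
    # Minimising f_x and f_y in (5) independently gives the centroid
    # of the inner-contour edge points.
    return inner_pts.mean(axis=0)

def reference_circle(inner_pts: np.ndarray, outer_pts: np.ndarray):
    """Reference circle of Section 2.4: centred on the fitted inner centre,
    with an averaged distance to the outer contour as radius."""
    center = fit_inner_center(inner_pts)
    d = np.linalg.norm(outer_pts - center, axis=1)  # distances to outer contour
    radius = 0.5 * (d.max() + d.min())              # mean of the extremes, cf. (6)
    return center, radius
```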
Compared with polygonal contours, the feature-point information of circular contours in pipeline images is difficult to extract. Moreover, the typical four-feature-point extraction is not ideal for correcting circular contours [23]. Therefore, six key feature points on the outer circle contour, together with their corresponding mapping points on the reference circle, are extracted to establish the mapping relationship, in the following steps.
The maximum- and minimum-distance points between the inner and outer contours reflect the most seriously distorted locations on the outer contour. Therefore, the points on the outer circle contour with the maximum and minimum distance to the inner circle contour are selected as feature points 1 and 2. Then, two lines are drawn between the center of the reference circle and feature points 1 and 2, and the intersections of these lines with the reference circle contour are taken as mapping points 1′ and 2′, respectively.
To find feature points 3, 4, 5, and 6, meaningful points must be extracted; the minimum bounding rectangle describes the minimum length and width of the outer contour, so the method of extracting the minimum bounding rectangle of the outer circle contour is adopted. The bounding rectangle is rotated via (7), and the vertex coordinates and rotation angle of each simple bounding rectangle are recorded. Among all the simple bounding rectangles obtained, the smallest is the minimum bounding rectangle of the outer circle contour.
$x_2 = (x_1 - x_0)\cos\theta - (y_1 - y_0)\sin\theta + x_0, \quad y_2 = (x_1 - x_0)\sin\theta + (y_1 - y_0)\cos\theta + y_0$ (7)

where $(x_0, y_0)$ and $\theta$ represent the center of rotation and the rotation angle, and $(x_1, y_1)$ and $(x_2, y_2)$ represent the coordinates of any point on the simple bounding rectangle before and after rotation.
Consequently, the tangent points where each side of the minimum bounding rectangle touches the outer circle contour are taken as the remaining key feature points. Each of these feature points is then connected to the center of the reference circle, and the intersection of that line with the reference circle contour is taken as the corresponding mapping point. The extraction process is shown in Figure 6.
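The feature/mapping-point extraction can be sketched with OpenCV's rotating-calipers rectangle (cv2.minAreaRect performs the rotation search of (7) internally). The tangency test below, taking the outer-contour point nearest to each rectangle side's midpoint, is an assumption, since the paper does not specify how the contact points are located; a sketch, not the authors' code.

```python
import cv2
import numpy as np

def six_feature_points(outer_cnt, center, radius):
    """Six feature points on the distorted outer contour and their mapping
    points on the reference circle (Section 2.4). outer_cnt is an OpenCV
    contour, e.g. from cv2.findContours."""
    pts = outer_cnt.reshape(-1, 2).astype(np.float64)
    d = np.linalg.norm(pts - center, axis=1)
    feats = [pts[d.argmax()], pts[d.argmin()]]       # feature points 1 and 2

    rect = cv2.minAreaRect(outer_cnt)                # rotating-calipers rectangle, cf. (7)
    box = cv2.boxPoints(rect)                        # the 4 rectangle corners
    for i in range(4):
        mid = 0.5 * (box[i] + box[(i + 1) % 4])      # midpoint of one side
        feats.append(pts[np.linalg.norm(pts - mid, axis=1).argmin()])  # points 3-6

    feats = np.asarray(feats)
    # Mapping points: intersections of the centre->feature rays with the reference circle.
    dirs = feats - center
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    maps = center + radius * dirs
    return feats, maps
```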

2.5. Perspective Transformation of the Image

The perspective transformation matrix is established according to (8). The original pipeline image is two-dimensional, so $z$ is set to 1 to simplify the operation, and the three-dimensional homogeneous coordinates reduce to Cartesian coordinates, with $a_{33}$ equal to 1.
$[x', y', z'] = [x, y, z] \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ (8)

where $[x, y, z]$ and $[x', y', z']$ represent the coordinates before and after transformation, and the $3 \times 3$ matrix of entries $a_{ij}$ is the transformation matrix.
Expanding (8) yields (9), from which the coordinate relation after perspective transformation can be deduced, as shown in (10).
$X = x a_{11} + y a_{21} + a_{31}, \quad Y = x a_{12} + y a_{22} + a_{32}, \quad Z = x a_{13} + y a_{23} + 1$ (9)

$x' = \dfrac{X}{Z} = \dfrac{x a_{11} + y a_{21} + a_{31}}{x a_{13} + y a_{23} + 1}, \quad y' = \dfrac{Y}{Z} = \dfrac{x a_{12} + y a_{22} + a_{32}}{x a_{13} + y a_{23} + 1}$ (10)
Consequently, the six feature points $(x_n, y_n)$ $(n = 1, 2, \dots, 6)$ and the mapping points $(x_n', y_n')$ $(n = 1, 2, \dots, 6)$ are substituted into (10) to obtain a system of linear equations in matrix form, as shown in (11). Then, the least-squares solution of (12) can be obtained from $T = (A^{\mathsf T} A)^{-1} A^{\mathsf T} B$. Finally, the distorted pipeline image is corrected by the perspective transformation.
$$\begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1 x_1' & -y_1 x_1' \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -x_1 y_1' & -y_1 y_1' \\ \vdots & & & & & & & \vdots \\ x_6 & y_6 & 1 & 0 & 0 & 0 & -x_6 x_6' & -y_6 x_6' \\ 0 & 0 & 0 & x_6 & y_6 & 1 & -x_6 y_6' & -y_6 y_6' \end{bmatrix} \begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \\ a_{12} \\ a_{22} \\ a_{32} \\ a_{13} \\ a_{23} \end{bmatrix} = \begin{bmatrix} x_1' \\ y_1' \\ \vdots \\ x_6' \\ y_6' \end{bmatrix}$$ (11)

$A T = B, \quad T = [a_{11}, a_{21}, a_{31}, a_{12}, a_{22}, a_{32}, a_{13}, a_{23}]^{\mathsf T}, \quad B = [x_1', y_1', \dots, x_6', y_6']^{\mathsf T}$ (12)

where $A$ represents the $12 \times 8$ coefficient matrix built from the six feature points in (11); $T$ represents the transformation parameter vector; and $B$ represents the mapping point vector.
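In code, the overdetermined system (11) can be assembled from the six point pairs and solved in the least-squares sense of $T = (A^{\mathsf T} A)^{-1} A^{\mathsf T} B$; np.linalg.lstsq is used below as a numerically safer equivalent of the normal equations. The final warp assumes OpenCV, whose warpPerspective expects the column-vector (transposed) form of the matrix in (8). A sketch, not the authors' code:

```python
import numpy as np
import cv2

def solve_perspective(feats, maps) -> np.ndarray:
    """Solve (11)-(12) for the eight unknowns a11..a23 and return the
    3x3 matrix in OpenCV's column-vector convention."""
    A, B = [], []
    for (x, y), (xp, yp) in zip(feats, maps):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); B.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); B.append(yp)
    t, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(B, float), rcond=None)
    a11, a21, a31, a12, a22, a32, a13, a23 = t
    # Transpose of the row-vector matrix in (8), as expected by OpenCV.
    return np.array([[a11, a21, a31],
                     [a12, a22, a32],
                     [a13, a23, 1.0]])

# Applying the correction (w, h are the output image size):
# H = solve_perspective(feature_points, mapping_points)
# corrected = cv2.warpPerspective(distorted, H, (w, h))
```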

3. Experiment

In this paper, the HJ-G90A industrial endoscope is selected; its vertical and horizontal visual angles reach 140°, and it collects images of the inner wall of a pipeline at a resolution of 1280 × 720 pixels. Using the proposed method, distorted-image correction is realized for a pipeline with an inner diameter of 200 mm. The maximum distortion range of the pipeline robot used in this paper is ±10 mm displacement deviation and ±3° angle deviation. To improve the universality and robustness of the algorithm, pipeline images of four poses are collected within the maximum allowable distortion range (±15 mm displacement deviation and ±8° angle deviation); beyond this range, the pipeline image loses its processing value because part of the inner-wall information is lost. Meanwhile, images of the inner wall of a highly transparent plexiglass pipe lined with checkerboard paper are collected in the same way for the subsequent comparative verification of the correction method and data analysis. In the checkerboard-pipeline processing, manual labeling is used to extract the inner and outer edges, because unevenly illuminated areas cannot be extracted there, and the experiment focuses on data analysis before and after correction. The correction process flow chart is shown in Figure 7.

3.1. The ROI Extraction

In the process of image acquisition, the camera's characteristics introduce not only radial distortion but also noise that seriously degrades image quality. Therefore, Zhang's calibration method [25] is adopted to eliminate the radial distortion in the pipeline image caused by the camera; the ROI extraction process flow chart is shown in Figure 8. Then, to retain the edges and details of the ROI in the pipeline image and make the image clearer, a two-dimensional discrete Gaussian function is applied to smooth the image based on (13) [26,27]. The size of the Gaussian convolution mask affects the blurriness of the image [28]. Since blurriness decreases with increasing distance from the mask center, the mask size is set to 5 to preserve the overall details of the original image to the maximum extent. The filtered pipeline image is shown in Figure 9.
$f(x, y) = \dfrac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$ (13)

where $x$ and $y$ represent the horizontal and vertical distances from a pixel in the mask to the mask center, and $\sigma$ represents the standard deviation.
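Assuming OpenCV, this smoothing step is a single call; $\sigma$ is not reported in the paper, so it is left for the library to derive from the 5 × 5 mask size. A sketch, not the authors' code:

```python
import cv2

def smooth(undistorted_gray):
    # 5x5 Gaussian mask per (13); sigmaX=0 tells OpenCV to derive the
    # standard deviation from the kernel size (the paper reports only
    # the mask size of 5).
    return cv2.GaussianBlur(undistorted_gray, (5, 5), 0)
```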
From Figure 9, it is easy to conclude that the edge information of the image processed by Gaussian filtering is well preserved. In addition, the gray gradient at the edge is increased by enhancing the contrast to maximize the gray range of the image based on (14) [29,30]. A mask is traversed over the image, and the difference between the maximum and minimum pixel values within the mask is taken as the value of its central pixel. The scale factor Mult and offset Add are then determined, and the established linear relationship is used to enhance the image contrast. The pipeline image after contrast enhancement is shown in Figure 10a.
$g' = g \cdot \mathrm{Mult} + \mathrm{Add}, \quad \mathrm{Mult} = \dfrac{255}{G_{\max} - G_{\min}}, \quad \mathrm{Add} = -\mathrm{Mult} \cdot G_{\min}$ (14)

where $g'$ and $g$ represent the pixel values after and before enhancement; $G_{\max}$ and $G_{\min}$ represent the maximum and minimum gray levels of the image, respectively; and Mult and Add represent the proportionality factor and the offset, respectively.
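The global linear stretch of (14) takes a few lines of NumPy. The local max–min mask traversal mentioned above is omitted; this sketch, an illustration rather than the authors' code, shows only the stretch itself:

```python
import numpy as np

def stretch_contrast(img: np.ndarray) -> np.ndarray:
    """Linear stretch of (14): maps [G_min, G_max] onto [0, 255]."""
    g_min, g_max = float(img.min()), float(img.max())
    mult = 255.0 / (g_max - g_min)   # Mult in (14)
    add = -mult * g_min              # Add in (14), chosen so G_min maps to 0
    out = img.astype(np.float64) * mult + add
    return np.clip(out, 0, 255).astype(np.uint8)
```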
The gray gradient at the edge is enhanced to some extent by enhancing the image's contrast, but the contrast must be stretched further because of interference from the axial texture of the pipeline. This paper applies the image self-multiplication method to stretch the contrast and enhance the image, as shown in (15).
$g' = g \cdot g \cdot \mathrm{Mu} + A$ (15)

where $g'$ and $g$ represent the pixel values after and before enhancement, and Mu and $A$ represent the calibration factor and correction factor, respectively.
The key to image self-multiplication is the selection of the calibration factor Mu. By controlling it, pixels with relatively high gray levels can be excluded to suppress interference, and different values achieve different processing effects. Figure 10b,c illustrate the effects when Mu is 0.005 and 0.03, respectively.
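A literal sketch of (15); Mu = 0.03 follows the parameter comparison above, while the offset A = 0 and the gray-level scaling convention are assumptions, since neither is reported in the paper:

```python
import numpy as np

def self_multiply(img: np.ndarray, mu: float = 0.03, a: float = 0.0) -> np.ndarray:
    # Literal form of (15): g' = g * g * Mu + A, computed on 8-bit gray
    # levels and clipped back to [0, 255]. Depending on the paper's
    # (unspecified) convention, g may instead need normalising first.
    g = img.astype(np.float64)
    return np.clip(g * g * mu + a, 0, 255).astype(np.uint8)
```

Running the function with mu set to 0.005 and 0.03 reproduces, qualitatively, the two enhancement strengths compared in Figure 10b,c.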
When Mu is 0.03, the interference is better suppressed, and the unevenly illuminated area of the pipeline image is clearly presented. For ROI extraction, traditional threshold segmentation leads to many interference points, and some edge information is lost while processing that interference, which is not conducive to ROI extraction. Figure 11a shows the image processed by the global threshold segmentation method.
The edge is detected by calculating the gradient intensity, as shown in Figure 11b. The inner circle contour is eliminated as an interference item by feature screening, and the edge of the outer circle is then fitted; the fitted image is shown in Figure 11c.

3.2. Correction of Perspective Distortion

The extracted ROI describes the perspective distortion of the image, which is usually eliminated by affine transformation or mask matching [31,32,33], but the correction results are not ideal. In this paper, the mapping relationship between the outer contour and the reference circle is constructed to complete the correction by improving the feature-point extraction for circular contours. The reference circle is established through the theoretical model in Section 2.4, as shown in Figure 12a,b. Then, feature points and the corresponding mapping points are extracted and processed according to Figure 12c,d.
The distorted pipeline image is corrected by the perspective transformation matrix, and the correction results are displayed in Figure 13. Comparing Figure 13b,c shows that the texture of the pipeline's inner wall is also repaired after correction.

3.3. Experimental Result and Analysis

The correction results for the offset images are displayed as undisposed, processed, undisposed-contour, and processed-contour images for three poses, as shown in Table 1, Table 2 and Table 3. Furthermore, to further evaluate the performance of the proposed method, the contours of the three poses mentioned above were unwrapped to highlight the differences before and after correction. Table 4, Table 5 and Table 6 present the unwrapped results for these three poses.
As shown in Table 1, it can be observed that in conditions with only displacement offset, the undisposed contour exhibits no more distortion than the processed contour. In this context, the overall correction result is satisfactory, and it remains stable as the displacement offset increases. However, as shown in Table 4, the displacement offset does have a certain impact on the quality of the unwrapped image. With an increase in the displacement offset, the unwrapped image exhibits some deviation, and the “wave” becomes more pronounced. When the processed image is unwrapped, the expansion deviation can be significantly reduced, and the correction result remains stable as the displacement offset changes.
Table 2 presents the results for the condition with only an angle offset. It is evident that an increase in the lens angle gradually compresses the available pipeline image information. The larger the angle, the more severe the compression of the inner wall information, which is reflected in the reduction in the number of squares in the chessboard pattern. Furthermore, as shown in Table 5, the impact of the angle offset is not limited to the reduction in the chessboard pattern but also results in a more pronounced “wave” as the angle offset increases. The image deviation is most pronounced when the angle offset reaches 8°. This is evident in the unwrapped image, where the number of checkerboards is minimized, and the wave distortion is most severe. However, when the correction method is applied, the perspective projection can be effectively transformed into a parallel projection, yielding a notable improvement in the correction of the compressed image to a certain extent. Among these observations, the correction results are highly favorable when the angle offset is below 6°, as depicted in Table 2. A comparison with the uncorrected image reveals that the inner and outer contours of the corrected image closely approximate concentric circles, demonstrating the efficacy of the correction method.
Additionally, Table 5 shows the effect of the correction method on the unwrapped image. When the angle offset is less than 6°, the corrected image significantly reduces the “wave” phenomenon compared to the image prior to correction. However, when the angle offset is 8° or greater, while the correction aligns the outer and inner contour into concentric circles, information compression persists, and the “wave” distortion remains evident in the unwrapped image. Several factors contribute to this phenomenon, as follows:
  • During the image acquisition process, there may be deviations between the optical axis of the endoscope and the pipeline’s center, resulting in measurement errors.
  • Errors in the chessboard paper placement process may introduce inaccuracies.
  • The scaling ratio of pipeline information may not be consistent with increasing angles.
In practical engineering applications, acquired images often exhibit angle and displacement offset. The experimental results presented in Table 3 and Table 6 effectively simulate the correction performance of the proposed method in real-world scenarios. In the experimental results of this section, only the correction results for gradually increasing the displacement deviation while keeping the angle at the limit position are displayed to emphasize the effectiveness of the correction method. As indicated in Table 3, unprocessed images encompass displacement and angular deviations. When the angle remains constant, an increase in displacement offset reduces the number of checkerboards, creates more pronounced inner and outer contour misalignment, and gradually compresses image information. In Table 6, with the angle offset held constant, the gradually increasing displacement offset slightly exacerbates the “wave” phenomenon in the unwrapped images. Analyzing the results from both Table 3 and Table 6 and comparing images before and after processing, it becomes evident that processed images exhibit substantial improvements, whether in their unwrapped or unprocessed form. This further corroborates the effectiveness of the correction method, particularly when dealing with concurrent displacement and angle offset.
The information compressed at the "wave" trough positions is restored by the correction method, which shows a good processing effect for the combined displacement and angle offsets of real pipelines. The inner and outer circle center coordinates before and after correction are then substituted into (16) to test the correction rate and the concentricity of the image.
$C = \dfrac{\sqrt{(X_2 - X_1)^2 + (Y_2 - Y_1)^2} - \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}}{\sqrt{(X_2 - X_1)^2 + (Y_2 - Y_1)^2}}$ (16)

where $C$ represents the correction rate; $(X_1, Y_1)$ and $(X_2, Y_2)$ represent the inner and outer circle center coordinates before correction; and $(x_1, y_1)$ and $(x_2, y_2)$ represent them after correction.
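Read this way, (16) is the relative reduction of the inner-to-outer center distance. The small NumPy check below (not the authors' code) substitutes the first row of Table 7 and reproduces the reported 92.5%:

```python
import numpy as np

def correction_rate(inner_before, outer_before, inner_after, outer_after) -> float:
    """C in (16): relative reduction of the distance between the inner
    and outer contour centres after correction."""
    d_before = np.linalg.norm(np.subtract(outer_before, inner_before))
    d_after = np.linalg.norm(np.subtract(outer_after, inner_after))
    return (d_before - d_after) / d_before

# First row of Table 7 (5 mm displacement offset):
c = correction_rate((351.0, 598.0), (351.0, 592.5), (350.6, 598.0), (350.2, 597.9))
print(f"{c:.1%}")  # 92.5%
```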
Table 7 displays the correction-rate data under the offset poses of Table 1, Table 3 and Table 5. The overall deviation correction rate reaches 84.5%. In general, except for the limit deviations, where the correction rate is slightly lower, the proposed method shows a high correction rate. In the actual working environment, the endoscope hardly ever operates at the limit positions, and the actual range of angle deviation is about ±3°; there, the correction rate of the algorithm can theoretically reach 90.85%.
To further verify the performance of the correction method, marks were placed on the chessboard circles of the pipeline image with no displacement or angle offset, as shown in Figure 14. Ten points were randomly selected from the circle corresponding to each mark, and their average radius, standard deviation, and average relative error were computed according to (17); the statistics are shown in Table 8.
$\bar{r} = \dfrac{1}{10} \sum_{n=0}^{9} r_n, \quad s = \sqrt{\dfrac{1}{10} \sum_{n=0}^{9} (r_n - \bar{r})^2}, \quad \bar{\delta} = \dfrac{1}{10\,\bar{r}} \sum_{n=0}^{9} \left| r_n - \bar{r} \right|$ (17)

where $\bar{r}$ represents the average radius; $r_n$ $(n = 0, 1, \dots, 9)$ represents the radius at the randomly selected points corresponding to the marks; $s$ represents the standard deviation; and $\bar{\delta}$ represents the average relative error.
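The statistics of (17), as literally reconstructed here, measure spread about the sample mean; the tables may instead reference the offset-free radii of Table 8, which the paper leaves ambiguous. A minimal NumPy sketch:

```python
import numpy as np

def radius_stats(radii):
    """Average radius, population standard deviation, and average
    relative error over the ten sampled radii, per (17)."""
    r = np.asarray(radii, dtype=np.float64)
    r_bar = r.mean()
    s = np.sqrt(np.mean((r - r_bar) ** 2))
    delta_bar = np.mean(np.abs(r - r_bar)) / r_bar
    return r_bar, s, delta_bar
```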
The standard deviation and average relative error of the radius were statistically analyzed for the different poses and radius numbers. Coordinates corresponding to the different marks were used to collect the data for the three poses, and the results are shown in Table 9, Table 10 and Table 11. The radius information of marks 3 and 5 for the three poses is plotted in Figure 15 to allow direct observation of the reliability of the correction method.
The data presented in Table 9 demonstrate that, in the presence of solely translational displacement in the image, the average relative error in radius at the identified circles 5, 6, and 7 is a mere 0.39%. Likewise, in Table 10, when the image solely exhibits angular displacement, the average relative error in radius at circles 3, 4, and 5 is merely 5.82%. Furthermore, in the scenario where translational and angular displacements coexist in the image, as shown in Table 11, the average relative error in radius at circles 1, 2, and 3 is only 8.83%.
Analyzing the data in Table 8, Table 9 and Table 10 together with Figure 15 shows that, for images processed by the correction method, the radius values fluctuate less and the overall correction results are relatively ideal. There are no obvious fluctuations in the radius data caused by jagged edges or local protrusions in the processed images. Moreover, most images processed by the correction method deviate only a little from the radius values of the offset-free images, with an overall average relative error of 5.01%. In the theoretical working environment, the average relative error after correction is below 1.31%. This section simulates the correction scenarios for three different displacement situations. Compared with the center correction algorithm proposed by Huang et al. [8] for correcting translational displacement, the proposed method reduces the average relative error by 12.53% when a 20% displacement is introduced. Compared with the method proposed by Qian [10] for correcting image angle offset, the proposed method reduces the error by 7% for offsets within 10 degrees.

4. Conclusions

In this paper, a novel method was proposed to correct perspective distortions in pipeline images, resulting in high-quality unwrapped images. The feasibility of the proposed method was determined by comparing images before and after correction, and its reliability and robustness were experimentally verified by calculating the deviation correction rate and analyzing the radius data. After applying the correction method, the distorted image was improved, the overall average correction rate could theoretically reach 90.85%, and, importantly, the average relative error was only 1.31%. The study not only lays a foundation for subsequent image unwrapping and stitching but also provides a new approach for future pipeline defect detection with panoramic images.

Author Contributions

Conceptualization, Z.Z., X.L. and X.H.; Software, J.Z., X.L. and X.H.; Validation, Z.Z., X.L., C.X. and L.W.; Writing—original draft, J.Z., X.L. and X.H.; Writing—review & editing, X.L., C.X., Z.Z. and L.W.; Supervision, X.L. and X.H.; Funding acquisition, X.L. and X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 52003078), the Doctoral Scientific Research Foundation of Hubei University of Technology (Grant No. BSQD2020002), and the Hubei Key Laboratory of Modern Manufacturing Quality Engineering Foundation (Grant No. KFJJ-2020005).

Data Availability Statement

The datasets presented in this article are not readily available because of the confidentiality of the areas covered by the project. Requests to access the datasets should be directed to [email protected].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Behari, N.; Sheriff, M.Z.; Rahman, M.A.; Nounou, M.; Hassan, I.; Nounou, H. Chronic leak detection for single and multiphase flow: A critical review on onshore and offshore subsea and arctic conditions. J. Nat. Gas Sci. Eng. 2020, 12, 103460. [Google Scholar] [CrossRef]
  2. Gao, S.; Yang, K.; Shi, H.; Wang, K.; Bai, J. Review on Panoramic Imaging and Its Applications in Scene Understanding. IEEE Trans. Instrum. Meas. 2022, 71, 5026034. [Google Scholar] [CrossRef]
  3. Wu, T.; Lu, S.H.; Tang, Y.P. An In-pipe Internal Defects Inspection System Based on The Active Stereo Omnidirectional Vision Sensor. In Proceedings of the 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery, Zhangjiajie, China, 15–17 August 2015; pp. 2637–2641. [Google Scholar] [CrossRef]
  4. Zhang, Z.; Hu, L.H.; Li, X.L. Motion analysis of screw drive in-pipe cleaning robot. J. Mech. Eng. Sci. 2022, 236, 5605–5617. [Google Scholar] [CrossRef]
  5. Bergen, T.; Wittenberg, T. Stitching and Surface Reconstruction from Endoscopic Image Sequences: A Review of Applications and Methods. IEEE J. Biomed. Health Inform. 2016, 20, 304–321. [Google Scholar] [CrossRef] [PubMed]
  6. Chong, N.S.; Kho, Y.H.; Wong, M.L.D. A closed form unwrapping method for a spherical omnidirectional view sensor. J. Image Video Process. 2013, 2013, 5. [Google Scholar] [CrossRef]
  7. Karkoub, M.; Bouhali, O.; Sheharyar, A. Gas Pipeline Inspection Using Autonomous Robots with Omni-Directional Cameras. IEEE Sens. J. 2021, 21, 15544–15553. [Google Scholar] [CrossRef]
  8. Wang, Z.H.; Tang, Z.J.; Huang, J.K. A real-time correction and stitching algorithm for underwater fisheye images. Signal Image Video Process. 2022, 16, 1783–1791. [Google Scholar] [CrossRef]
  9. Huang, B.; Li, T.J.; Wang, H.X.; Liu, X.Q.; Huang, M. On Unwrapping Pipeline Image Based on Centre Offset Correction Algorithm. Comput. Appl. Softw. 2015, 32, 196–200. [Google Scholar] [CrossRef]
  10. Qian, Q. Research on Industrial Pipeline Image Based on Endoscope Video. Master's Thesis, Xi'an University of Science and Technology, Xi'an, China, 2020. [Google Scholar]
  11. Bu, X.Z.; Li, G.J.; Yang, B.; Wang, X.Z. Fast Unwrapping of Panoramic Annular Image with Center Deviation. Opt. Precis. Eng. 2012, 20, 2103–2109. [Google Scholar] [CrossRef]
  12. Wu, L.H.; Shang, Q.; Sun, Y.; Bai, X. A self-adaptive correction method for perspective distortions of image. Front. Comput. Sci. 2019, 13, 588–598. [Google Scholar] [CrossRef]
  13. Kawasue, K.; Komatsu, T. Shape Measurement of a Sewer Pipe Using a Mobile Robot with Computer Vision. Int. J. Adv. Robot. Syst. 2013, 10. [Google Scholar] [CrossRef]
  14. Jackson, W.; Dobie, G.; MacLeod, C.; West, G.; Mineo, C.; McDonald, L. Error Analysis and Calibration for a Novel Pipe Profiling Tool. IEEE Sens. J. 2020, 20, 3545–3555. [Google Scholar] [CrossRef]
  15. Hosseinzadeh, S.; Jackson, W.; Zhang, D.; McDonald, L.; Dobie, G.; West, G.; MacLeod, C. A Novel Centralization Method for Pipe Image Stitching. IEEE Sens. J. 2021, 21, 11889–11898. [Google Scholar] [CrossRef]
  16. Pare, S.; Kumar, A.; Bajaj, V. Image Segmentation Using Multilevel Thresholding: A Research Review. Iran. J. Sci. Technol. Trans. Electr. Eng. 2020, 44, 1–29. [Google Scholar] [CrossRef]
  17. Ji, D.S.; Zhang, W.B.; Zhao, Q.C. Correction and pointer reading recognition of circular pointer meter. Meas. Sci. Technol. 2023, 34, 025406. [Google Scholar] [CrossRef]
  18. Wang, Y.; Li, F. Correction of Structured Light Image Based on Improved Perspective Transform. Comput. Digit. Eng. 2019, 47, 1240–1248. [Google Scholar] [CrossRef]
  19. Hu, D.H.; Yan, K.; Xin, W.K.; Cao, Y.; Gan, H.M. Contour-based automatic perspective correction for circular meters. J. Electron. Meas. Instrum. 2023, 37, 32–39. [Google Scholar] [CrossRef]
  20. Chen, Z.H.; Tang, X.Y.; Lin, Z.Q.; Wei, H.A. Research and implementation of adaptive distortion image correction and quality enhancement algorithm. J. Comput. Appl. 2020, 40, 180–184. [Google Scholar] [CrossRef]
  21. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. 1986, 8, 679–698. [Google Scholar] [CrossRef]
  22. Luo, Y.; Duraiswami, R. Canny edge detection on NVIDIA CUDA. In Proceedings of the Computer Vision and Pattern Recognition Workshop, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar] [CrossRef]
  23. Kyungkoo, J. Unsupervised Domain Adaptive Corner Detection in Vehicle Plate Images. Sensors 2022, 22, 6565. [Google Scholar] [CrossRef]
  24. He, D.; Liu, X.; Yin, Y.; Li, A.; Peng, X. Correction of Circular Center Deviation in Perspective Projection. In Proceedings of the Applications of Digital Image Processing, San Diego, CA, USA, 12–16 August 2012. [Google Scholar] [CrossRef]
  25. Zhang, Z.Y. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  26. Shreyamsha Kumar, B.K. Image denoising based on gaussian/bilateral filter and its method noise thresholding. Signal Image Video Process. 2012, 7, 1159–1172. [Google Scholar] [CrossRef]
  27. Mafi, M.; Martin, H.; Cabrerizo, M.; Andrian, J.; Barreto, A.; Adjouadi, M. A comprehensive survey on impulse and Gaussian denoising filters for digital images. Signal Process. 2019, 157, 236–260. [Google Scholar] [CrossRef]
  28. Mafi, M. Survey on mixed impulse and Gaussian denoising filters. IET Image Process. 2020, 14, 4027–4038. [Google Scholar] [CrossRef]
  29. Qi, Y.; Yang, Z.; Sun, W.; Lou, M.; Lian, J.; Zhao, W.; Deng, X.; Ma, Y. A Comprehensive Overview of Image Enhancement Techniques. Arch. Comput. Methods Eng. 2021, 29, 583–607. [Google Scholar] [CrossRef]
  30. Chen, S.; Beghdadi, A. Natural enhancement of color image. EURASIP J. Image Video Process. 2010, 2010, 175203. [Google Scholar] [CrossRef]
  31. Gao, F.; Wen, G. Affine invariant feature extraction using affine geometry. J. Image Graph. 2011, 16, 389–397. [Google Scholar]
  32. Hindman, N.; Moshesh, I. Image partition regularity of affine transformations. J. Comb. Theory 2007, 114, 51–53. [Google Scholar] [CrossRef]
  33. Wirtz, S.; Paulus, D.; Falkowski, K. Model-based recognition of 2D objects under perspective distortion. Pattern Recognit. Image Anal. 2012, 22, 72–79. [Google Scholar] [CrossRef]
Figure 1. Photos of the designed pipeline robot. (a) Wheel pipeline robot connection graph; (b) three-stage spiral shovel-cleaning pipeline robot.
Figure 2. Projection models of four poses. (a) Ideal projection model; (b) projection model with angle offset; (c) projection model with displacement offset; and (d) projection model with angle and displacement offset.
Figure 3. Images of distortion marking. (a) The image with 8° angle offset; (b) the image with 8° angle offset and 15 mm displacement offset.
Figure 4. The projection model of the uneven illumination area.
Figure 5. The establishment process of the reference circle.
Figure 6. The extraction process of the feature points and mapping points.
Figure 7. The perspective correction flow chart.
Figure 8. The ROI extraction process flow chart.
Figure 9. Comparison of pipeline images before and after Gaussian filtering. (a) Original and (b) filtered image.
Figure 10. The process of image enhancement. (a) Image with enhanced contrast; (b) Mu = 0.005; and (c) Mu = 0.03.
Figure 11. The image segmentation process. (a) Image effect of global threshold segmentation method; (b) image effect of gradient direction processing method; and (c) effect of edge fitting.
Figure 12. The process of extracting feature points. (a) Extraction of the inner circle contour; (b) establishment of the reference circle; (c) extraction of feature points 1, 2 and mapping points 1′, 2′; and (d) extraction of feature points 3, 4, 5, 6 and mapping points 3′, 4′, 5′, 6′.
Figure 13. Correction results of the distorted image. (a) The unwrapped image before correction; (b) the corrected pipeline image; and (c) the unwrapped image after correction.
Figure 14. Marks on the chessboard circles.
Figure 15. Radius data of mark numbers 3 and 5. (a) Radius data of mark number 3 and (b) radius data of mark number 5.
Table 1. Comparison of the correction method before and after processing under different displacement offset poses. (Image grid: columns 0 mm, 5 mm, 10 mm, and 15 mm displacement offset; rows: undisposed image, processed image, undisposed contour, and processed contour.)
Table 2. Comparison of the correction method before and after processing under different angle offset poses. (Image grid: columns of increasing angle offset up to 8°; rows: undisposed image, processed image, undisposed contour, and processed contour.)
Table 3. Comparison of the correction method before and after processing under displacement and angle offset poses. (Image grid: columns 0 mm and 0°, 5 mm and 8°, 10 mm and 8°, and 15 mm and 8°; rows: undisposed image, processed image, undisposed contour, and processed contour.)
Table 4. Comparison of the correction method before and after processing of unwrapped images at different displacement offset poses. (Undisposed and processed unwrapped images at 0 mm, 5 mm, 10 mm, and 15 mm displacement offset.)
Table 5. Comparison of the correction method before and after processing of unwrapped images at different angle offset poses. (Undisposed and processed unwrapped images at increasing angle offsets up to 8°.)
Table 6. Comparison of the correction method before and after processing of unwrapped images at displacement and angle offset poses. (Undisposed and processed unwrapped images at 0 mm and 0°, 5 mm and 8°, 10 mm and 8°, and 15 mm and 8°.)
Table 7. Data before and after correction.

Offset | Inner circle center (before) | Outer circle center (before) | Inner circle center (after) | Outer circle center (after) | Correction rate
5 mm | (351.0, 598.0) | (351.0, 592.5) | (350.6, 598.0) | (350.2, 597.9) | 92.5%
10 mm | (363.5, 593.5) | (357.8, 589.5) | (364.0, 594.2) | (363.9, 593.6) | 91.4%
15 mm | (340.0, 595.0) | (331.0, 591.5) | (341.1, 595.9) | (340.3, 595.1) | 91.8%
4° | (427.5, 602.5) | (451.8, 601.5) | (424.2, 602.6) | (427.2, 602.5) | 87.7%
6° | (246.5, 627.5) | (224.0, 626.5) | (250.7, 627.8) | (247.3, 627.5) | 84.9%
8° | (298.0, 599.0) | (325.9, 599.5) | (292.5, 598.8) | (297.5, 598.8) | 82.1%
5 mm and 8° | (370.0, 578.0) | (359.4, 575.5) | (372.7, 579.1) | (371.0, 578.6) | 83.5%
10 mm and 8° | (376.0, 596.0) | (361.8, 593.5) | (379.2, 596.8) | (376.4, 595.5) | 78.5%
15 mm and 8° | (355.0, 586.0) | (345.0, 586.5) | (358.4, 585.7) | (355.7, 586.2) | 73.0%
Average deviation correction rate: 84.5%
Table 8. Data of radius of each circle without deviation.

Radius number | 1 | 2 | 3 | 4 | 5 | 6 | 7
Average radius | 114.14 | 137.67 | 149.05 | 164.79 | 185.15 | 210.57 | 241.46
Standard deviation | 0.41 | 0.48 | 0.36 | 0.45 | 0.36 | 0.43 | 0.43
Average relative error | 0% | 0% | 0% | 0% | 0% | 0% | 0%
Table 9. Radius data of displacement offset poses.

Offset situation | 5 mm | 10 mm | 15 mm
Radius number | 5 | 6 | 7 | 5 | 6 | 7 | 5 | 6 | 7
Average radius | 185.56 | 210.34 | 241.89 | 185.56 | 209.55 | 242.11 | 184.70 | 208.10 | 242.70
Standard deviation | 0.52 | 0.30 | 0.73 | 0.52 | 1.03 | 0.77 | 0.60 | 2.50 | 1.31
Average relative error | 0.25% | 0.11% | 0.23% | 0.25% | 0.47% | 0.27% | 0.24% | 1.17% | 0.51%
Table 10. Radius data of angle offset poses.

Offset situation | 4° | 6° | 8°
Radius number | 3 | 4 | 5 | 3 | 4 | 5 | 3 | 4 | 5
Average radius | 144.40 | 171.93 | 175.98 | 141.76 | 154.85 | 172.14 | 160.50 | 178.61 | 201.63
Standard deviation | 4.66 | 7.20 | 9.18 | 7.75 | 10.14 | 13.12 | 9.88 | 13.84 | 16.50
Average relative error | 3.12% | 4.20% | 4.95% | 4.90% | 6.03% | 7.02% | 6.20% | 7.78% | 8.23%
Table 11. Radius data of displacement and angle deviation poses.

Offset situation | 5 mm and 8° | 10 mm and 8° | 15 mm and 8°
Radius number | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3
Average radius | 114.14 | 146.12 | 159.38 | 124.77 | 151.12 | 165.04 | 128.61 | 156.66 | 168.87
Standard deviation | 6.53 | 8.46 | 10.34 | 10.73 | 13.46 | 16.04 | 14.48 | 18.01 | 19.84