Article

Automatic Extraction of Power Lines from Aerial Images of Unmanned Aerial Vehicles

1 Chinese Academy of Surveying & Mapping, Beijing 100036, China
2 School of Geomatics, Liaoning Technical University, Fuxin 123000, China
3 Faculty of Geomatics, Lanzhou Jiaotong University, Lanzhou 730070, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6431; https://doi.org/10.3390/s22176431
Submission received: 15 July 2022 / Revised: 22 August 2022 / Accepted: 23 August 2022 / Published: 26 August 2022
(This article belongs to the Section Sensing and Imaging)

Abstract

Automatic power line extraction from aerial images of unmanned aerial vehicles is one of the key technologies of power line inspection. However, faint power line targets and complex image backgrounds make the extraction of power lines a considerable challenge. In this paper, a new power line extraction method with two innovations is proposed. First, building on the Mask RCNN network, a block extraction strategy is proposed to realize the preliminary extraction of power lines with the idea of "part first, then the whole". This strategy globally reduces the anchor box size, increases the proportion of power lines in the feature map, and reduces the accuracy degradation caused by negative anchors being misclassified as positive anchors. Second, the proposed connected domain group fitting algorithm solves the problems of broken and mis-extracted power lines that remain after the initial extraction, as well as the incomplete extraction of power lines caused by background texture interference. In experiments on 60 images covering different complex image backgrounds, the performance of the proposed method far exceeds that of commonly used methods such as LSD, Yolact++, and Mask RCNN: DSCPL, TPR, precision, and accuracy reach 73.95, 81.75, 69.28, and 99.15, respectively, while FDR is only 30.72. The experimental results show that the proposed algorithm performs well and can accomplish the task of power line extraction under complex image backgrounds, solving the main problems of power line extraction and suggesting its feasibility in other scenarios. In the future, the dataset will be expanded to improve the performance of the algorithm in different scenarios.

1. Introduction

In recent years, the scale of China’s power grid has been expanding and line complexity has been increasing, posing huge challenges to grid inspection. Inspection work is extremely important for maintaining the safe and stable operation of the regional power grid. Commonly used electric power inspection methods at home and abroad fall into five categories: traditional manual inspection, manned helicopter inspection, robot inspection, satellite inspection, and UAV inspection. Among them, UAV inspection has the advantages of low cost and high efficiency compared with the other methods and has been widely adopted.
The accurate extraction of power lines is one of the core steps of UAV inspection [1,2] and, as the basis for distance monitoring [3,4,5], broken strand monitoring [6,7,8], ice cover monitoring [9,10,11], foreign object monitoring [12,13], and power line arc sag measurement [14,15,16] of hazardous operations, it is important to extract power lines accurately and quickly. Laser point cloud data offer high accuracy and high-density 3D spatial information [17,18,19,20] and are used by some scholars for fine power line extraction; for example, the method of [21] realizes fine extraction of single power lines based on residual clustering. However, this type of method is costly and not suitable for rapid large-scale power inspection. In contrast, visible image inspection is low cost, provides intuitive data, and has high engineering application value for power line inspection [22,23]. Current research on visible image power line extraction can be broadly divided into three categories: power line extraction based on edge detection operators, power line extraction based on joint features, and power line extraction based on deep learning.
Edge detection operator-based power line extraction methods are divided into two categories based on whether a priori knowledge is introduced. One class combines the brightness and direction of the power lines as a priori knowledge with the edge detection operator for power line extraction [24,25,26,27,28,29]; the other class first obtains an edge map with an edge detection operator and then extracts the power lines from the edge map directly using the Hough transform [30,31]. Although these methods have a simple structure, they ignore the problem of interference noise suppression. Joint feature-based power line extraction algorithms use objects that coexist with power lines, such as towers and insulators, as auxiliary features for power line extraction. The method of [32] defines line–tower spatial association features based on the spatial relationship between power lines and towers and combines the line features with spatial contextual information in the area around the lines to achieve power line extraction. The method of [33] uses the corner points of electric poles as the height basis for power line extraction. This type of algorithm has significant limitations: its generalization ability is limited, and its performance decreases rapidly when the test image does not match the predefined association model.
With the great success of deep learning in the fields of image classification and image segmentation [34,35], more and more scholars have started to focus on implementing visible image power line extraction based on deep learning [36]. The method of [37] used a convolutional neural network (CNN) to classify the edge detector detection map and performed power line fitting via the Hough transform. The method of [38] extracted features using a CNN, applied a PCA (principal component analysis) algorithm for dimensionality reduction, and performed classification with a generic machine learning algorithm. The method of [39] proposed a fast power line detection network (fast PLDN) that efficiently extracts the spatial and semantic information of power lines using low high-pass filters and edge attention fusion modules. However, the methods of [37,38] can only determine whether an image block contains power lines and cannot accurately locate the power lines in the image. The method of [39] does not effectively resolve interference from other linear targets and exhibits less than optimal performance for closely spaced adjacent power lines. Jayavardhana et al. [40] proposed a classifier to determine “Line present” and “No line present”. Van Nhan Nguyen et al. [36] proposed a fully convolutional feature extractor, classifier, and line segment regressor to achieve real-time detection of power lines with the VGG-16 network as the backbone. However, these methods produce some false positive pixels when extracting power lines, which make the extracted power lines thicker or longer.
In addition, the following characteristics of power lines make it difficult to extract power lines from UAV aerial images based on deep learning. First, power lines are faint targets in aerial images, with a small relative pixel share. Secondly, power lines often pass across or vertically through the whole image. Third, the power lines have a thin and long physical structure, especially in the vertical power line direction, which accounts for only a few pixels in width. Fourth, the power corridor environment is complex and the image background is variable, often accompanied by interference targets with similar characteristics to the power lines.
To address the above problems, a new automatic power line extraction algorithm is proposed in this paper. The algorithm uses Mask RCNN [41], which has the top segmentation performance at present, as a deep learning network framework, adopts the strategy of block processing for the initial extraction of power lines, and uses the connected domain group fitting algorithm to solve the problems of power line breakage and mis-extraction after initial extraction of power lines. The algorithm has the following advantages.
(1)
Using the block processing strategy, the anchor box size is reduced globally to increase the proportion of power lines in the feature map and to reduce the accuracy degradation caused by negative anchors being misclassified as positive anchors.
(2)
Further processing of the initially extracted power lines uses the connected domain group fitting algorithm to solve the problem of power line breakage and mis-extraction.
(3)
Compared with the traditional Mask RCNN method, the extraction accuracy, precision, and anti-interference performance of the algorithm in this paper are greatly improved.

2. Related Work

2.1. Data Acquisition

The measurement area was selected in the line from tower 09 to tower 37 of Yangu Line in Daling Village, Dapu Town, Hengdong County, Hunan Province, China, adjacent to the village with no tall buildings around. As shown in Figure 1, the measurement area covers a variety of land types such as forests, grasslands, cultivated land, roads, houses, ponds, and water, reflecting fairly comprehensively the complex backgrounds of UAV aerial power inspection images.
When collecting data, the drone flew directly above the power line, setting the heading overlap rate to 70% and the cruising speed to 50 km/h. It flew in one direction from pole tower 09 to pole tower 37, shooting every 5 s.

2.2. Operation Equipment

The DJI Phantom 4 RTK UAV is a small multi-rotor high-precision aerial survey drone for low-altitude photogrammetry applications, with a centimeter-level navigation and positioning system and a high-performance imaging system, and it is portable and easy to use. The DJI Phantom 4 RTK consists of an aerial vehicle, a remote control, a gimbal camera, and the accompanying DJI GS RTK App. The aircraft parameters are shown in Table 1.

2.3. Dataset Production

In this paper, we use a chunking strategy to crop the images into suitable sizes before inputting them into the model, so when producing the dataset we also crop the images in advance. As shown in Figure 2, each original image of size 3840 × 2160 is cut into 4 × 4 equal parts, each of size 960 × 540, and the images containing power lines are selected for manual labeling. We use the Labelme annotation tool with polygon annotation to completely fit the power line contours in the inspection images, labeling the class as “powerline”. A total of 300 images containing power lines, each of size 960 × 540, are selected for annotation and split into training and test sets at a ratio of 7:3.
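As a rough illustration (not the authors' code), the 4 × 4 cropping described above can be sketched in Python, assuming NumPy image arrays:

```python
import numpy as np

def tile_image(image, rows=4, cols=4):
    """Split an H x W (x C) image array into rows*cols equal tiles."""
    h, w = image.shape[0], image.shape[1]
    th, tw = h // rows, w // cols          # tile height and width
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append(image[r * th:(r + 1) * th, c * tw:(c + 1) * tw])
    return tiles

# A 2160 x 3840 dummy frame yields 16 tiles of 540 x 960 each.
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
tiles = tile_image(frame)
```

Tiles containing power lines would then be picked out by hand for Labelme annotation, as the text describes.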

3. Methodology

In this section, we describe our proposed method in detail; before that, we briefly outline its overall process. The structure of the proposed method is shown in Figure 3. It mainly includes three steps: (1) first, the images extracted from the UAV power patrol video are segmented into blocks and each block is numbered; (2) second, each block is sequentially input into the Mask RCNN model to extract power lines, and the results are spliced to obtain power line masks; (3) finally, a connected domain group fitting algorithm is proposed to further process the initially extracted power lines and obtain complete power lines.

3.1. Power Patrol Image Chunking

The images extracted from the UAV power patrol video are large, and power lines account for only a small fraction of each image due to their slender physical structure. Often, a single power line occupies only about 0.1% of the image, which makes the subsequent power line extraction task very difficult. In addition, power lines often cross the whole image horizontally or vertically, resulting in a high probability of misjudged positive samples in the RPN (region proposal network) [42,43] stage of Mask RCNN. Based on this, the algorithm adopts a block processing strategy, which reduces the anchor box size globally to increase the proportion of power lines in the feature map. At the same time, the strategy reduces the misjudgment of negative samples as positive samples in the RPN, improving the accuracy of power line extraction.
Firstly, frames are extracted from the UAV power inspection video at an overlap of 60%, obtaining power inspection images of size h × w. Each image is cut into n × m blocks and each block is numbered. If the edge of the image is not large enough to form a complete block, that part is filled with 0.
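The chunking with zero-filling described above can be sketched as follows (a hypothetical helper, assuming grayscale NumPy arrays; the function name and return format are illustrative):

```python
import numpy as np

def chunk_with_padding(img, block_h, block_w):
    """Pad an h x w image with zeros so it divides evenly, then
    return each block keyed by its (row, col) number."""
    h, w = img.shape
    pad_h = (-h) % block_h                 # extra rows needed at the bottom
    pad_w = (-w) % block_w                 # extra columns needed at the right
    padded = np.pad(img, ((0, pad_h), (0, pad_w)), constant_values=0)
    blocks = {}
    for i in range(padded.shape[0] // block_h):
        for j in range(padded.shape[1] // block_w):
            blocks[(i, j)] = padded[i * block_h:(i + 1) * block_h,
                                    j * block_w:(j + 1) * block_w]
    return blocks, padded.shape
```

The (row, col) keys preserve the block numbering needed later to stitch the per-block masks back together.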

3.2. Mask RCNN Preliminary Extraction of Power Line

Mask RCNN is an instance segmentation network model developed by He et al., based on Faster RCNN [44], which can accomplish the task of semantic segmentation with high accuracy while accomplishing target recognition; it is one of the best current techniques in object recognition and object segmentation [45,46]. The algorithm uses a residual network (ResNet) [47,48,49,50] as the backbone network and extracts feature elements at multiple scales by constructing top-down feature pyramid networks (FPN) [51,52,53]. Regions of interest (RoI) are selected using region proposal networks (RPN), and a fixed-size feature map is generated using the RoI Align method and input into the fully connected network. Finally, fully connected classification, bounding box regression, and mask regression are used to achieve instance segmentation of the target, and the structure of the Mask RCNN network is shown in Figure 4.
The segmented local power inspection images are successively input into Mask RCNN for power line extraction. The key stage in which Mask RCNN extracts power lines is the RPN. This stage extracts negative sample regions containing background and positive sample regions containing target information from the image and inputs equal numbers of positive and negative samples into the subsequent network to keep the data balanced. Usually, the IoU (intersection over union) of the prediction box and the ground-truth box is computed and compared against a preset threshold to decide positive and negative samples. Because power lines have a thin, long physical structure and often cross the whole image horizontally or vertically, prediction boxes that meet the IoU threshold yet contain no power line are often selected as positive samples. As shown in Figure 5, A is the ground-truth box, B is the prediction box, and the IoU of A and B is calculated as:
IoU(A, B) = |A ∩ B| / |A ∪ B|
At this time, the IoU of A and B satisfies the threshold, but the prediction box B does not contain power lines; such cases are misclassified as positive samples in the RPN stage. To reduce this type of problem, this paper adopts a chunking strategy to reduce the anchor box size globally. When the anchor box size is reduced, the ground-truth box A shrinks significantly, the proportion of power line pixels within it increases, and the intersection of prediction box B with regions of ground-truth box A that contain no power lines also shrinks significantly, which largely avoids positive sample misclassification in the RPN stage and improves the accuracy of Mask RCNN in extracting power lines.
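For reference, the IoU computation for two axis-aligned boxes can be sketched as follows (a hypothetical helper; boxes are given as (x1, y1, x2, y2) corner coordinates):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A long, thin ground-truth box spanning the image can overlap a prediction box heavily enough to pass the threshold even when the prediction covers only background, which is exactly the misclassification the chunking strategy targets.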
At the same time, the strategy of chunking can increase the pixel share of power lines in the map and reduce the difficulty of power line extraction. When the original image size of the aerial image is 3840 × 2160, the pixel share of a single power line is less than 0.1% and the target is extremely weak, which makes the feature extraction work more difficult. When the original image is divided into 4 × 4 chunks, the image size is 960 × 540 and the pixel share of a single power line reaches 0.5%, which is about 5 times higher and greatly reduces the difficulty of power line extraction.
The power patrol image is chunked and input into Mask RCNN to extract power lines, yielding local power line masks; the local masks are stitched in order of their numbers, the filled part is removed, and the power line mask at the original image size is finally obtained.
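The reassembly step can be sketched as a counterpart to the chunking (assuming a block dictionary keyed by (row, col) number, as in a chunking helper; names are illustrative):

```python
import numpy as np

def stitch_masks(blocks, n_rows, n_cols, out_h, out_w):
    """Place per-block masks back on a full canvas, then crop off
    the zero-filled border to recover the original image size."""
    block_h, block_w = next(iter(blocks.values())).shape
    full = np.zeros((n_rows * block_h, n_cols * block_w), dtype=np.uint8)
    for (i, j), m in blocks.items():
        full[i * block_h:(i + 1) * block_h, j * block_w:(j + 1) * block_w] = m
    return full[:out_h, :out_w]            # remove the filled part
```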

3.3. Connected Domain Group Fitting Algorithm

The power lines extracted by Mask RCNN inevitably have breakages and mis-extractions, which need to be further processed. The Hough transform [54,55,56,57], as a classical line detection algorithm, is often used for linear object detection. However, the time and space complexity of this algorithm are high and the results are often less than ideal. To address these problems, the connected domain group fitting algorithm (CDGFA) is proposed in this paper. The power lines are grouped using connected domains, and the least squares algorithm is used to fit the power line within each connected domain component. Finally, multiple line segments on the same power line are connected based on the near-parallelism between power lines, and mis-extracted targets are eliminated. The algorithm flow is shown in Figure 6.
Step 1: Perform morphological erosion and dilation operations [58,59,60] on the Mask RCNN extraction results to eliminate noise points and bridge small gaps.
Step 2: Connected domain analysis [61,62] is performed on all power lines and labels are attached, in roughly two scans. First scan: for each foreground pixel encountered, determine whether the neighboring pixels already have labels; if so, keep the existing label, and if not, assign a new label. If two different labels are adjacent, the adjacency relationship is recorded. Second scan: when a labeled foreground pixel is encountered, it is relabeled according to the recorded adjacency relationships until each connected domain has a single label. As shown in Figure 7, each connected domain has its own label, and each color represents one label.
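A minimal sketch of the two-scan labelling described in Step 2, using union-find to record the label adjacencies (4-connectivity assumed, since the paper does not specify the connectivity):

```python
def label_components(grid):
    """Two-scan connected-component labelling (4-connectivity)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}                                   # label -> parent (union-find)

    def find(x):                                  # representative label of x
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x

    nxt = 1
    for y in range(h):                            # first scan: provisional labels
        for x in range(w):
            if not grid[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                fu, fl = find(up), find(left)
                lo, hi = min(fu, fl), max(fu, fl)
                parent[hi] = lo                   # record label adjacency
                labels[y][x] = lo
            elif up or left:
                labels[y][x] = find(up or left)
            else:
                labels[y][x] = parent[nxt] = nxt  # start a new label
                nxt += 1
    for y in range(h):                            # second scan: resolve labels
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

In practice a library routine (e.g. a connected-components function from an image-processing package) would normally replace this hand-rolled version.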
Step 3: The slope and length of each connected domain are calculated by fitting a straight line with the least squares algorithm. For each connected domain, the fitted line is y = ax + b; the difference between the fitted value and the sample value is the error, and the sum of squared errors measures the total error:
J(a, b) = Σ_{k=1}^{m} (y_k − a x_k − b)²
The error function reaches an extremum where its derivatives are 0. Therefore, we take the partial derivatives of the error function and set them to 0:
∂J/∂a = −2 Σ_{k=1}^{m} x_k (y_k − a x_k − b) = 0
∂J/∂b = −2 Σ_{k=1}^{m} (y_k − a x_k − b) = 0
Define S(xy) = Σ x_k y_k, S(x²) = Σ x_k², S(x) = Σ x_k, E(x) = (1/m) Σ x_k, and E(y) = (1/m) Σ y_k. Then we obtain Equation (4):
a = (S(xy) − E(y) S(x)) / (S(x²) − E(x) S(x))
b = (E(y) S(x²) − S(xy) E(x)) / (S(x²) − E(x) S(x))
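A numerical sketch of the closed-form solution in Equation (4), applied to the pixel coordinates of one connected domain (an illustrative helper, not the authors' implementation):

```python
def fit_line(xs, ys):
    """Least squares slope a and intercept b for y = a*x + b,
    using the closed-form expressions of Equation (4)."""
    m = len(xs)
    S_xy = sum(x * y for x, y in zip(xs, ys))
    S_x2 = sum(x * x for x in xs)
    S_x = sum(xs)
    E_x, E_y = S_x / m, sum(ys) / m
    denom = S_x2 - E_x * S_x
    a = (S_xy - E_y * S_x) / denom          # slope
    b = (E_y * S_x2 - S_xy * E_x) / denom   # intercept
    return a, b
```

Note the fit degenerates when all x-coordinates are equal (a vertical line), so near-vertical power lines would need the axes swapped.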
The slope k and area s of each connected domain are retrieved via the connected domain labels. Let L be the connected domain with the largest area and K its slope. If the i-th connected domain satisfies |K − k_i| > tan 5° or s_i < 500, that connected domain is eliminated. This step removes the connected domains that do not contain power lines.
Step 4: Group the connected domains and fit each group's power line. The distance from the midpoint of each connected domain to the largest connected domain L is calculated and added to the set D, which is then sorted in ascending order. The connected domains are grouped according to D: when neighboring distances in the sorted set satisfy |d_{i+1} − d_i| < d_th, the corresponding connected domains are placed in the same group. Finally, the coordinates of all pixel points in each group's connected domains are retrieved using the connected domain labels, and each group's power line is fitted from these coordinate points with the least squares algorithm.
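The distance-based grouping rule in Step 4 can be sketched as follows (a hypothetical helper; the distance values and threshold d_th below are illustrative):

```python
def group_by_distance(distances, d_th):
    """Sort component indices by their distance to the reference
    domain L; start a new group whenever consecutive sorted
    distances differ by at least d_th."""
    if not distances:
        return []
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    groups, current = [], [order[0]]
    for prev, cur in zip(order, order[1:]):
        if abs(distances[cur] - distances[prev]) < d_th:
            current.append(cur)            # same power line group
        else:
            groups.append(current)         # gap exceeds d_th: new group
            current = [cur]
    groups.append(current)
    return groups
```

Each returned group collects the segments of one broken power line, which are then fitted together by least squares.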

4. Experimental Results

4.1. Model Training

The automatic power line extraction model is built according to the method proposed in this paper, and the self-built power line dataset is converted to COCO format to produce the dataset required to train the network. The pre-training file uses mask_rcnn_R_50_FPN_3x, the ResNet50/101+FPN model is used as the backbone network, and the hyperparameters of the model are obtained by a genetic algorithm. The initial hyperparameters are shown in Table 2.

4.2. Experimental Data

Sixty images from three different lines were selected from the UAV power inspection videos to create a test dataset, as shown in Figure 8. These 60 images are all 3840 × 2160 in size and include a variety of complex image backgrounds, including water, woodland, and grass, which contrast strongly in color with the power lines, as well as arable land, which differs little in color from the power lines. In addition, there are images with a large number of linear features, such as roads and houses, as backgrounds. These backgrounds reflect the main challenges of power line extraction: (1) faint power line targets; (2) power line colors similar to the image background, so that power lines blend into it; (3) interference from linear features. If the algorithm performs well on this test dataset, it demonstrates that the algorithm can address the main challenges of power line extraction and has the potential to extract power lines in various complex scenes.

4.3. Evaluation Parameters

Almost all studies on power line detection use the dice score (DSC) (also known as F-score) [63], precision, true positive rate (TPR) (also known as recall or sensitivity), false discovery rate (FDR), and accuracy. These assessment parameters are defined as:
DSC or F-score = 2TP/(2TP + FP + FN)
Precision = TP/(TP + FP)
TPR or Recall or Sensitivity = TP/(TP + FN)
FDR = FP/(FP + TP)
Accuracy = (TP + TN)/(TP + TN + FP + FN)
where TP denotes the number of pixels that are power lines in both the prediction and the ground truth; TN denotes the number of pixels that are background in both; FP denotes the number of pixels predicted as power lines but actually background; FN denotes the number of pixels predicted as background but actually power lines. Accuracy is the ratio of correctly predicted samples to the total number of predicted samples, regardless of whether the samples are positive or negative. Precision is the ratio of correctly predicted positive samples to all predicted positive samples, i.e., how many of the predicted positive samples are truly positive. Recall is the ratio of correctly predicted positive samples to the total number of true positive samples, i.e., how many positive samples can be correctly identified. FDR is the proportion of false detections among pixels predicted as the foreground power line class.
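The five metrics above can be computed directly from the four pixel counts; a minimal sketch:

```python
def metrics(tp, tn, fp, fn):
    """Pixel-level evaluation metrics from TP/TN/FP/FN counts."""
    return {
        "DSC":       2 * tp / (2 * tp + fp + fn),
        "Precision": tp / (tp + fp),
        "TPR":       tp / (tp + fn),
        "FDR":       fp / (fp + tp),           # equals 1 - Precision
        "Accuracy":  (tp + tn) / (tp + tn + fp + fn),
    }
```

Because background pixels vastly outnumber power line pixels, accuracy alone is close to 100% even for poor extractions, which is why DSC, precision, TPR, and FDR carry most of the comparative weight.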

4.4. Comparison with Other Methods

In this paper, three methods are chosen to compare with our approach. The first one is the Line Segment Detector (LSD) [64,65,66], which is a classical line segment detection method that detects line segments based on the change of pixel gradient; therefore, it is often used for linear target detection tasks, such as detecting power lines. The second method is Yolact++ [67], which belongs to the one-stage model and is faster compared with Mask RCNN, but slightly less accurate. The literature [68] tested the self-built transmission tower and power line dataset (TTPLA) on its basic version (Yolact [69]) and achieved good results. The third method is Mask RCNN [70,71], which belongs to the two-stage model and is currently one of the best methods for instance segmentation accuracy.
These methods are highly feasible for the power line detection task and are among the most advanced methods available, so comparing the method in this paper with them demonstrates its competitiveness. When using the Yolact++ method, we choose Resnet101-FPN, which has the best segmentation accuracy, as the backbone network. Because the image size has a large impact on the segmentation performance of this method, we found through experimental tests that it performs best on our experimental data at an image size of 480 × 270. To highlight the effectiveness of the chunking strategy proposed in this paper, the original image size (3840 × 2160) is used with the Mask RCNN method.
Table 3 compares five performance metrics of these three methods and the method of this paper: dice score (DSC), true positive rate (TPR), false discovery rate (FDR), precision, and accuracy.
As can be seen from the results in Table 3 and Figure 9, LSD is sensitive to linear features and can detect power lines in various scenes. However, the method cannot effectively distinguish power lines from non-power lines, and its detection results contain a large number of incorrectly detected targets. In the five sets of scene detection results in Figure 9, LSD detects most of the power lines, but unfortunately a large number of non-power-line features in the image background are also extracted; for example, the electric towers in groups (a) and (b) and the houses in groups (c), (d), and (e) are detected as well. Compared with LSD, Yolact++ performs much better. In Table 3, the TPR of this method reaches 82.10, the highest value among the four methods, indicating that Yolact++ is able to detect power lines well against various image backgrounds. However, its FDR is as high as 64.79, also the maximum among the four methods, indicating that its mis-extraction rate is relatively high. Specifically, in the five sets of results in Figure 9, Yolact++ detects most of the power lines, but features similar to power lines in various scenes, such as houses and roads, strongly interfere with the extraction task; in particular, a large number of mis-extracted results appear in group (c). As for Mask RCNN, the training dataset in this experiment contains only about 200 images, a relatively small amount of data, so the network model is not fully trained and its results are not satisfactory; if the training dataset were supplemented, the performance of the method would have room for further improvement.
In contrast, the method in this paper performs very well in all aspects. Its DSCPL, precision, and accuracy are 73.95, 69.28, and 99.15, respectively, which are the highest values, and its FDR is only 30.72, the lowest value. In addition, its TPR is only 0.35 below the highest value. This indicates that the method in this paper not only extracts power lines accurately but also avoids most mis-extractions. Specifically, in the five sets of results in Figure 9, the method in this paper extracts every power line almost completely, whether in a complex scene such as (c) or a simple scene such as (e), and whether the power lines contrast with the image background as in (b) or blend into it as in (a). Although there are inevitably a small number of broken power lines and mis-extractions, the performance demonstrated by the method in this paper is sufficient for practical engineering needs.

5. Discussions

5.1. Effectiveness of Chunking Strategies and the Impact of Chunk Size on Performance

The algorithm in this paper implements the initial extraction of power lines with the Mask RCNN network framework. However, power lines are faint targets in the images, which makes the extraction task difficult. Based on this, this paper proposes a chunking extraction strategy to solve the problem. To demonstrate the effectiveness of the chunking strategy and to investigate the effect of chunk size on the performance of the algorithm, we crop the 60 images in the test dataset into blocks of 480 × 270, 960 × 540, 1920 × 1080, and 3840 × 2160, respectively, and input them into Mask RCNN for the power line extraction task. Table 4 shows the average value of each performance parameter under the different chunk sizes.
According to Table 4 and Figure 10, the algorithm exhibits different performance as the image block size varies. When the chunk size is 3840 × 2160, Mask RCNN has the worst performance in every respect and the most drastic fluctuations in each performance parameter on the test dataset. When the chunk size is 480 × 270 or 1920 × 1080, the performance of Mask RCNN improves and the performance parameters vary relatively smoothly. When the chunk size is 960 × 540, Mask RCNN achieves the strongest overall performance, with DSCPL, TPR, and accuracy of 63.26, 86.31, and 98.94, respectively, the highest values; the FDR and precision at this size are similar to those at the 1920 × 1080 block size. This indicates that Mask RCNN achieves its best performance on this test dataset when the block size is 960 × 540.
Comparing the performance of Mask RCNN at a chunk size of 960 × 540 with that at the original image size of 3840 × 2160 shows that DSCPL, TPR, precision, and accuracy are improved by 16.10, 29.26, 7.90, and 2.13, respectively, and FDR is reduced by 6.23. The chunking strategy thus brings a significant improvement in Mask RCNN's power line extraction performance, which illustrates the effectiveness of the chunking strategy proposed in this paper.
Specifically, as the two sets of extraction results in Figure 11 show, when the chunk size is 3840 × 2160, Mask RCNN cannot extract power lines whose color is similar to the image background, and the power lines it does extract are accompanied by a large number of false positive pixels, resulting in thicker power lines. When the chunk size is 1920 × 1080 or 480 × 270, Mask RCNN extracts power lines much better and is able to extract most of them, although some broken and incomplete extractions remain. When the chunk size is 960 × 540, Mask RCNN performs best and can extract the power lines essentially completely. This shows the effectiveness of the proposed chunking strategy in helping Mask RCNN extract power lines, and the experiments demonstrate that Mask RCNN performs best on this test dataset when the chunk size is 960 × 540.

5.2. Performance of the Connected Domain Group Fitting Algorithm

After the initial extraction of power lines by Mask RCNN, there are inevitably some broken and mis-extracted cases, which interfere considerably with the power line extraction work. Based on this, a connected domain group fitting algorithm is proposed in this paper to further process the initially extracted power lines and achieve complete and accurate extraction of power lines in complex image backgrounds. We input the 60 images in the test dataset, chunked at a block size of 960 × 540, into Mask RCNN alone and into Mask RCNN combined with the connected domain group fitting algorithm (the algorithm of this paper); the extraction results obtained are shown in Table 5.
According to Table 5, adding the connected domain group fitting algorithm significantly improves the performance of Mask RCNN. DSCPL, precision, and accuracy increase by 10.69, 18.33, and 0.21, respectively, and FDR falls by 18.33. Although TPR is lower than that of the Mask RCNN method, the difference is only 4.56. The connected domain group fitting algorithm proposed in this paper thus effectively solves the breakage and mis-extraction problems that remain after Mask RCNN's extraction and improves the overall performance of our method.
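The five performance parameters are standard pixel-wise measures and can be computed from the predicted and ground-truth masks as below; the exact definitions used in the paper may differ in minor details, but note that FDR = 1 − precision is consistent with the reported values (e.g., 30.72 = 100 − 69.28):

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Pixel-wise extraction metrics for binary masks (1 = power line),
    using standard confusion-matrix definitions."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),    # Dice similarity coefficient
        "TPR": tp / (tp + fn),                  # true positive rate (recall)
        "FDR": fp / (tp + fp),                  # false discovery rate = 1 - precision
        "Precision": tp / (tp + fp),
        "Accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Because power line pixels are a tiny fraction of each image, accuracy is dominated by true negatives (hence all methods score above 96%), which is why DSCPL, TPR, FDR, and precision are the more discriminating measures here.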
Figure 12 compares the two methods on the power line breakage problem. The Mask RCNN method cannot repair broken power lines, while the method in this paper can, indicating that the connected domain group fitting algorithm solves the breakage problem that remains after the initial extraction.
Figure 13 compares the two methods on the error detection problem. The Mask RCNN result contains two false detections, both of which are eliminated by the method in this paper, indicating that the connected domain group fitting algorithm solves the mis-extraction problem that remains after the initial extraction.
In summary, the connected domain group fitting algorithm effectively solves the power line breakage and mis-extraction problems that remain after the initial extraction and improves the performance of our method. The experiments also show that the method in this paper has a clear advantage over classical power line extraction methods [64,68,70] such as the traditional Mask RCNN.
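As a rough illustration of the idea behind the connected domain group fitting step (not the paper's exact algorithm), the sketch below labels connected domains in the initial binary mask, discards small isolated components as false detections, groups components that lie near a common fitted line, and redraws each group's fit to bridge breaks. All thresholds (`min_size`, `dist_tol`) and the grouping rule are hypothetical:

```python
import numpy as np
from scipy import ndimage

def group_fit_components(mask, min_size=20, deg=1, dist_tol=5.0):
    """Label connected domains, drop tiny ones as false detections,
    group components near a common fitted curve y = f(x), and redraw
    each group's fit over its full x-extent to close gaps."""
    labels, n = ndimage.label(mask)
    comps = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if xs.size >= min_size:          # small blobs treated as mis-extractions
            comps.append((xs, ys))
    groups = []
    for xs, ys in comps:
        placed = False
        for g in groups:
            # join an existing group if the component sits near its fitted line
            if np.mean(np.abs(np.polyval(g["coef"], xs) - ys)) < dist_tol:
                g["xs"] = np.concatenate([g["xs"], xs])
                g["ys"] = np.concatenate([g["ys"], ys])
                g["coef"] = np.polyfit(g["xs"], g["ys"], deg)
                placed = True
                break
        if not placed:
            groups.append({"xs": xs, "ys": ys, "coef": np.polyfit(xs, ys, deg)})
    # redraw each group's fitted line to bridge broken segments
    out = np.zeros_like(mask, dtype=np.uint8)
    h = mask.shape[0]
    for g in groups:
        x_range = np.arange(g["xs"].min(), g["xs"].max() + 1)
        y_fit = np.round(np.polyval(g["coef"], x_range)).astype(int)
        ok = (y_fit >= 0) & (y_fit < h)
        out[y_fit[ok], x_range[ok]] = 1
    return out
```

With `deg=1` this fits straight lines; a higher polynomial degree could accommodate the catenary sag of real conductors.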

6. Conclusions

In this paper, we propose a new automatic power line extraction algorithm consisting of two parts: an initial extraction of power lines by Mask RCNN using a chunk extraction strategy, followed by further processing with a connected domain group fitting algorithm to extract the complete power lines. The algorithm mainly solves the following problems:
(1)
To address the difficulty of extracting power lines, which are faint targets that run through the whole image, this paper adopts a chunking extraction strategy that globally reduces the anchor frame size and increases the proportion of power lines in the feature map. In addition, this strategy reduces the accuracy degradation caused by original negative anchor frames being misclassified as positive anchor frames.
(2)
The proposed connected domain group fitting algorithm can effectively solve the problems of breakage and mis-extraction after the initial extraction of power lines.
Three commonly used methods, LSD, Yolact++, and Mask RCNN, were compared against the method of this paper on a test dataset of 60 power line images covering a variety of scenes. Our algorithm is optimal in every performance parameter except TPR, which at 81.75 is only 0.35 below the highest value; DSCPL, precision, and accuracy are 73.95, 69.28, and 99.15, respectively, and FDR is only 30.72. The experimental results show that the algorithm has excellent performance, solves the main problems of power line extraction, and accomplishes the extraction task in complex image backgrounds. In addition, the two innovations proposed in this paper, the chunk extraction strategy and the connected domain group fitting algorithm, have both proved their effectiveness through experiments.
In conclusion, the algorithm in this paper solves the main problems of power line extraction in UAV aerial images, performs well, and has high engineering application value.

Author Contributions

Conceptualization, J.S. and Y.L.; data curation, J.S.; formal analysis, J.S.; funding acquisition, Y.L. and Z.L.; investigation, J.S.; methodology, J.S.; project administration, Y.L. and Z.L.; resources, Z.L.; software, J.S. and Y.C.; supervision, J.Q. and Y.L.; validation, J.S. and Y.L.; visualization, J.S.; writing—original draft, J.S.; writing—review and editing, J.S., J.Q., Y.L., Y.C. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Funded Project of Fundamental Scientific Research Business Expenses of the Chinese Academy of Surveying and Mapping (AR2104; AR2203); Geohazard Monitoring and Data Processing Based on Precision Mapping Technology (AR2118); Overall Design of Intelligent Mapping System and Research on Several Technologies (A2201).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data in this study are owned by the research group and are not publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pouliot, N.; Richard, P.; Montambault, S. LineScout Technology Opens the Way to Robotic Inspection and Maintenance of High-Voltage Power Lines. IEEE Power Energy Technol. Syst. J. 2015, 2, 1–11. [Google Scholar] [CrossRef]
  2. Berni, J.A.J.; Zarco-Tejada, P.J.; Suarez, L.; Fereres, E. Thermal and Narrowband Multispectral Remote Sensing for Vegetation Monitoring From an Unmanned Aerial Vehicle. IEEE Trans. Geosci. Remote Sens. 2009, 47, 722–738. [Google Scholar] [CrossRef]
  3. Li, C. Extraction of High Voltage Line Corridor Features and Elevation Calculation Study. Ph.D. Thesis, Beijing University of Posts and Telecommunications, Beijing, China, 2006. [Google Scholar]
  4. Mu, C. Research on the Extraction Method of Power Line Corridor Features Based on Multiple Remote Sensing Data. Ph.D. Thesis, Wuhan University, Wuhan, China, 2010. [Google Scholar]
  5. Zhang, Y. Research on UAV Low Altitude Photogrammetry Method for Overhead Transmission Line Obstacle Inspection. Ph.D. Thesis, Wuhan University, Wuhan, China, 2017. [Google Scholar]
  6. Zhang, Z.; Zhang, W.; Li, Z.; Xiao, Y.; Deng, J.; Xia, G. UV imaging inspection of wires with different defect types. Grid Technol. 2015, 39, 2647–2652. [Google Scholar] [CrossRef]
  7. Yan, Y.; Sheng, G.; Chen, Y.; Guo, Z.; Du, X.; Wang, Q. Construction of key parameters system for transmission line condition evaluation based on association rules and principal component analysis. High Volt. Technol. 2015, 41, 2308–2314. [Google Scholar] [CrossRef]
  8. Jiang, X.; Xia, Y.; Zhang, Z.; Hu, J.; Hu, Q. Transmission conductor broken strand image detection based on optimized Gabor filter. Power Syst. Autom. 2011, 35, 78–83. [Google Scholar]
  9. Hu, Q.; Yu, H.; Xu, X.; Shu, L.; Jiang, X.; Qiu, G.; Li, H. Analysis of ice-cover torsional characteristics of split conductors and calculation of equivalent ice-cover thickness. Grid Technol. 2016, 40, 3615–3620. [Google Scholar] [CrossRef]
  10. Hu, Q.; Yu, H.; Li, Y.; Shu, L.; Jiang, X.; Liang, J. Simulation calculation and experimental verification of ice-cover growth of split conductors. High Volt. Technol. 2017, 43, 900–908. [Google Scholar] [CrossRef]
  11. Zhang, X.; Wang, Y.; He, D.; Guo, C.; Chen, Y.; Li, M. Analysis of the current situation of power grid disaster prevention and mitigation and suggestions. Grid Technol. 2016, 40, 2838–2844. [Google Scholar] [CrossRef]
  12. Shi, J.; Li, Z.; Gu, C.; Sheng, G.; Jiang, X. Faster R-CNN based sample expansion for foreign object monitoring in power grids. Grid Technol. 2020, 44, 44–51. [Google Scholar] [CrossRef]
  13. Lu, J.; Zhou, T.; Wu, C.; Li, B.; Tao, Y.; Zhu, Y. Fault statistics and analysis of transmission lines of 220 kV and above in a provincial power grid. High Volt. Technol. 2016, 42, 200–207. [Google Scholar] [CrossRef]
  14. Tong, Q.; Li, B.; Fan, J.; Zhao, S. Transmission line arc sag measurement method based on aerial sequence images. Chin. J. Electr. Eng. 2011, 31, 115–120. [Google Scholar] [CrossRef]
  15. Chen, X.; Wang, Y.; Huang, H.; Zhang, G.; Ma, Y. A new model for calculating the safety limit distance of overhead transmission lines. Power Sci. Eng. 2015, 31, 60–65. [Google Scholar]
  16. Lu, Q.; Chen, W.; Wang, W. Research on arc sag measurement of transmission lines based on aerial images. Comput. Appl. Softw. 2019, 36, 108–111. [Google Scholar]
  17. Mongus, D.; Brumen, M.; Žlaus, D.; Kohek, Š.; Tomažič, R.; Kerin, U.; Kolmanič, S. A Complete Environmental Intelligence System for LiDAR-Based Vegetation Management in Power-Line Corridors. Remote Sens. 2021, 13, 5159. [Google Scholar] [CrossRef]
  18. Wu, B.; Yu, B.; Huang, C.; Wu, Q.; Wu, J. Automated extraction of ground surface along urban roads from mobile laser scanning point clouds. Remote Sens. Lett. 2016, 7, 170–179. [Google Scholar] [CrossRef]
  19. Li, X.; Wang, R.; Chen, X.; Li, Y.; Duan, Y. Classification of Transmission Line Corridor Tree Species Based on Drone Data and Machine Learning. Sustainability 2022, 14, 8273. [Google Scholar] [CrossRef]
  20. Guan, H.; Yu, Y.; Li, J.; Liu, P.; Zhao, H.; Wang, C. Automated extraction of manhole covers using mobile LiDAR data. Remote Sens. Lett. 2014, 5, 1042–1050. [Google Scholar] [CrossRef]
  21. Ma, W.; Wang, C.; Wang, J.; Zhou, J.; Ma, Y. Residual clustering method for fine extraction of laser point cloud transmission lines. J. Surv. Mapp. 2020, 49, 883–892. [Google Scholar]
  22. Yang, L.; Fan, J.; Huo, B.; Li, E.; Liu, Y. PLE-Net: Automatic power line extraction method using deep learning from aerial images. Expert Syst. Appl. 2022, 198, 116771. [Google Scholar] [CrossRef]
  23. Zhao, L.; Yao, H.; Tian, M.; Wang, X. Robust power line extraction from aerial image using object-based Gaussian–Markov random field with gravity property parameters. Signal Process. Image Commun. 2022, 103, 116634. [Google Scholar] [CrossRef]
  24. Zhu, L.; Cao, W.; Han, J.; Du, Y. A double-side filter based power line recognition method for UAV vision system. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013; pp. 2655–2660. [Google Scholar]
  25. Yan, G.; Li, C.; Zhou, G.; Zhang, W.; Li, X. Automatic Extraction of Power Lines From Aerial Images. IEEE Geosci. Remote Sens. Lett. 2007, 4, 387–391. [Google Scholar] [CrossRef]
  26. Li, Z.; Liu, Y.; Hayward, R.; Zhang, J.; Cai, J. Knowledge-based power line detection for UAV surveillance and inspection systems. In Proceedings of the 2008 23rd International Conference Image and Vision Computing New Zealand, Christchurch, New Zealand, 26–28 November 2008; pp. 1–6. [Google Scholar]
  27. Shuai, C.; Wang, H.; Zhang, G.; Kou, Z.; Zhang, W. Power Lines Extraction and Distance Measurement from Binocular Aerial Images for Power Lines Inspection Using UAV. In Proceedings of the 2017 9th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 26–27 August 2017; pp. 69–74. [Google Scholar]
  28. Li, Z.; Liu, Y.; Walker, R.; Hayward, R.; Zhang, J. Towards automatic power line detection for a UAV surveillance system using pulse coupled neural filter and an improved Hough transform. Mach. Vis. Appl. 2010, 21, 677–686. [Google Scholar] [CrossRef]
  29. Zhao, L.; Wang, X.; Dai, D.; Long, J.; Tian, M.; Zhu, G. Automatic power line extraction algorithm in complex background. High Volt. Technol. 2019, 45, 218–227. [Google Scholar] [CrossRef]
  30. Li, C.; Feng, Z.; Deng, X.; Han, L. Power line extraction method in complex feature background. Comput. Eng. Appl. 2016, 52, 198–202. [Google Scholar]
  31. Tan, L.; Wang, Y.; Shen, C. Transmission line de-icing robot obstacle visual detection recognition algorithm. J. Instrum. 2011, 32, 2564–2571. [Google Scholar] [CrossRef]
  32. Zhang, J.; Shan, H.; Cao, X.; Yan, P.; Li, X. Pylon line spatial correlation assisted transmission line detection. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 2890–2905. [Google Scholar] [CrossRef]
  33. Golightly, I.; Jones, D. Corner detection and matching for visual tracking during power line inspection. Image Vis. Comput. 2003, 21, 827–840. [Google Scholar] [CrossRef]
  34. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef]
  35. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  36. Nguyen, V.N.; Jenssen, R.; Roverso, D. LS-Net: Fast single-shot line-segment detector. Mach. Vis. Appl. 2020, 32, 12. [Google Scholar] [CrossRef]
  37. Pan, C.; Cao, X.; Wu, D. Power line detection via background noise removal. In Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Washington, DC, USA, 7–9 December 2016; pp. 871–875. [Google Scholar]
  38. Benlıgıray, B.; Gerek, Ö.N. Visualization of power lines recognized in aerial images using deep learning. In Proceedings of the 2018 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018; pp. 1–4. [Google Scholar]
  39. Zhu, K.; Xu, C.; Wei, Y.; Cai, G. Fast-PLDN: Fast power line detection network. J. Real-Time Image Process. 2022, 19, 3–13. [Google Scholar] [CrossRef]
  40. Gubbi, J.; Varghese, A.; Balamuralidhar, P. A new deep learning architecture for detection of long linear infrastructure. In Proceedings of the 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA), Nagoya, Japan, 8–12 May 2017; pp. 207–210. [Google Scholar]
  41. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  42. Yao, F.; Wang, S.; Li, R.; Chen, L.; Gao, F.; Dong, J. An accurate box localization method based on rotated-RPN with weighted edge attention for bin picking. Neurocomputing 2022, 482, 264–277. [Google Scholar] [CrossRef]
  43. Zhu, L.; Xie, Z.; Liu, L.; Tao, B.; Tao, W. IoU-uniform R-CNN: Breaking through the limitations of RPN. Pattern Recognit. 2021, 112, 107816. [Google Scholar] [CrossRef]
  44. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  45. Gong, Y.; Yu, X.; Ding, Y.; Peng, X.; Zhao, J.; Han, Z. Effective Fusion Factor in FPN for Tiny Object Detection. In Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2021; pp. 1159–1167. [Google Scholar]
  46. Yang, C.; Wu, Z.; Zhou, B.; Lin, S. Instance Localization for Self-supervised Detection Pretraining. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 3986–3995. [Google Scholar]
  47. Krueangsai, A.; Supratid, S. Effects of Shortcut-Level Amount in Lightweight ResNet of ResNet on Object Recognition with Distinct Number of Categories. In Proceedings of the 2022 International Electrical Engineering Congress (iEECON), Pattaya, Thailand, 9–11 March 2022; pp. 1–4. [Google Scholar]
  48. Showkat, S.; Qureshi, S. Efficacy of Transfer Learning-based ResNet models in Chest X-ray image classification for detecting COVID-19 Pneumonia. Chemom. Intell. Lab. Syst. 2022, 224, 104534. [Google Scholar] [CrossRef]
  49. Sun, T.; Ding, S.; Guo, L. Low-degree term first in ResNet, its variants and the whole neural network family. Neural Netw. 2022, 148, 155–165. [Google Scholar] [CrossRef]
  50. Zhang, Z. ResNet-Based Model for Autonomous Vehicles Trajectory Prediction. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 15–17 January 2021; pp. 565–568. [Google Scholar]
  51. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar]
  52. Yang, G.; Wang, Z.; Zhuang, S. PFF-FPN: A Parallel Feature Fusion Module Based on FPN in Pedestrian Detection. In Proceedings of the 2021 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI), Shanghai, China, 27–29 August 2021; pp. 377–381. [Google Scholar]
  53. Liu, D.; Cheng, F. SRM-FPN: A Small Target Detection Method Based on FPN Optimized Feature. In Proceedings of the 2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 17–19 December 2021; pp. 506–509. [Google Scholar]
  54. Ahmad, R.; Naz, S.; Razzak, I. Efficient skew detection and correction in scanned document images through clustering of probabilistic hough transforms. Pattern Recognit. Lett. 2021, 152, 93–99. [Google Scholar] [CrossRef]
  55. Sun, F.; Liu, J. Fast Hough transform algorithm. J. Comput. Sci. 2001, 24, 1102–1109. [Google Scholar]
  56. Bober, M.; Kittler, J. A Hough transform based hierarchical algorithm for motion segmentation and estimation. In Proceedings of the IEEE Colloquium on Hough Transforms, London, UK, 7 May 1993; pp. 12/11–12/14. [Google Scholar]
  57. Liang, X.; Liu, L.; Luo, M.; Yan, Z.; Xin, Y. Robust infrared small target detection using Hough line suppression and rank-hierarchy in complex backgrounds. Infrared Phys. Technol. 2022, 120, 103893. [Google Scholar] [CrossRef]
  58. Kimori, Y. A morphological image processing method to improve the visibility of pulmonary nodules on chest radiographic images. Biomed. Signal Process. Control. 2020, 57, 101744. [Google Scholar] [CrossRef]
  59. Naderi, H.; Fathianpour, N.; Tabaei, M. MORPHSIM: A new multiple-point pattern-based unconditional simulation algorithm using morphological image processing tools. J. Pet. Sci. Eng. 2019, 173, 1417–1437. [Google Scholar] [CrossRef]
  60. Hu, L.; Qi, C.; Wang, Q. Spectral-Spatial Hyperspectral Image Classification Based on Mathematical Morphology Post-Processing. Procedia Comput. Sci. 2018, 129, 93–97. [Google Scholar] [CrossRef]
  61. Yan, J.; Liang, Q.; Li, Z.; Geng, B.; Kou, X.; Hu, Y. Application of Connected Domain Identification Method for Quantitative Cave Information Pickup in FMI Images. J. Geophys. 2016, 59, 4759–4770. [Google Scholar]
  62. Zhao, Z.; Zhang, T.; Zhang, Z. A new algorithm for threshold segmentation based on visual model and connected domain statistics. J. Electron. 2005, 22, 793–797. [Google Scholar]
  63. Yeung, M.; Sala, E.; Schönlieb, C.-B.; Rundo, L. Unified Focal loss: Generalising Dice and cross entropy-based losses to handle class imbalanced medical image segmentation. Comput. Med. Imaging Graph. 2022, 95, 102026. [Google Scholar] [CrossRef] [PubMed]
  64. Gioi, R.G.v.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A Fast Line Segment Detector with a False Detection Control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [Google Scholar] [CrossRef] [PubMed]
  65. Shi, P.; Fang, Y.; Lin, C.; Liu, Y.; Zhai, R. A new line detection algorithm—Automatic measurement of character parameter of rapeseed plant by LSD. In Proceedings of the 2015 Fourth International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Istanbul, Turkey, 20–24 July 2015; pp. 257–262. [Google Scholar]
  66. Li, M.; Yang, Z.; Zhao, B.; Ma, X.; Han, J. Research on transmission conductor extraction method based on mainline projection LSD algorithm. In Proceedings of the 2022 2nd International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 14–16 January 2022; pp. 747–750. [Google Scholar]
  67. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. YOLACT++ Better Real-Time Instance Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1108–1121. [Google Scholar] [CrossRef]
  68. Abdelfattah, R.; Wang, X.; Wang, S. TTPLA: An Aerial-Image Dataset for Detection and Segmentation of Transmission Towers and Power Lines. In Proceedings of the Computer Vision—ACCV 2020, Cham, Switzerland, 20 October 2020; pp. 601–618. [Google Scholar]
  69. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. YOLACT: Real-Time Instance Segmentation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 9156–9165. [Google Scholar]
  70. Vemula, S.; Frye, M. Mask R-CNN Powerline Detector: A Deep Learning approach with applications to a UAV. In Proceedings of the 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC), San Antonio, TX, USA, 11–15 October 2020; pp. 1–6. [Google Scholar]
  71. Addai, P.; Mohd, T.K. Power and Telecommunication Lines Detection and Avoidance for Drones. In Proceedings of the 2022 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 6–9 June 2022; pp. 118–123. [Google Scholar]
Figure 1. Overview of survey area.
Figure 2. Data set making. (a) is an aerial drone image of the power lines; (b) is a partial image of (a) with labels added to the power lines in the image.
Figure 3. Automatic extraction technology of power lines from UAV aerial images.
Figure 4. Mask RCNN network main structure diagram.
Figure 5. Positive sample misclassification cases. A is the truth box, B is the prediction box.
Figure 6. Connected domain group fitting algorithm.
Figure 7. Attach different labels to each connected domain. (a) Before adding tags; (b) after adding tags.
Figure 8. Partial image of the test dataset.
Figure 9. The extraction results of various methods on the test dataset. The extraction results are shown for five scenes, namely (ae). A total of six images are shown for each scene, which are the input image (Data), the truth image (Groundtruth), and the extraction results of four methods (LSD, Yolact++, Mask RCNN, and Ours).
Figure 10. The trend of each performance parameter of Mask RCNN in the test dataset with different chunk sizes. (ae) is the trend plots of the performance parameters (DSCPL, TPR, FDR, Precision, and Accuracy) at each size over 60 images, respectively, and (f) is the trend plot of the mean values of these five parameters at different sizes.
Figure 11. Extraction results of Mask RCNN at different sizes. There are (a,b) two groups of extraction results with different image backgrounds, each group has six images, which are the input image (Data), the truth image (Groundtruth), and the extraction results in four sizes (480 × 270, 960 × 540, 1920 × 1080, and 3840 × 2160).
Figure 12. Comparison of two methods for solving the power line breakage problem. (a) Mask RCNN; (b) Mask RCNN + CDGFA.
Figure 13. Comparison of two methods to solve the error detection problem. (a) Mask RCNN; (b) Mask RCNN + CDGFA.
Table 1. UAV parameters.
| Camera/GNSS item | Parameters |
|---|---|
| Image sensor | inch CMOS; 20.0 million effective pixels (204.8 million total pixels) |
| Video resolution | H.264, 4K: 3840 × 2160 30 p |
| Maximum photo resolution | 4864 × 3648 (4:3) |
| Frequency of use | GPS: L1/L2; GLONASS: L1/L2; BeiDou: B1/B2; Galileo: E1/E5 |
| Positioning accuracy | Vertical: 1.5 cm + 1 ppm (RMS); Horizontal: 1 cm + 1 ppm (RMS) |
Table 2. Model training parameters.
| Parameter | Value |
|---|---|
| weight decay | 0.0001 |
| learning rate | 0.001 |
| maximum iteration | 36,000 |
| ims_per_batch | 4 |
| batch_size_per_image | 128 |
Table 3. Comparison of the performance of each of the four methods on the test dataset.
| Method | DSCPL (%) | TPR (%) | FDR (%) | Precision (%) | Accuracy (%) |
|---|---|---|---|---|---|
| LSD | 49.74 | 52.90 | 49.26 | 50.74 | 98.20 |
| Yolact++ | 48.56 | 82.10 | 64.79 | 35.21 | 98.66 |
| Mask RCNN | 47.16 | 57.05 | 55.28 | 43.05 | 96.81 |
| Ours | 73.95 | 81.75 | 30.72 | 69.28 | 99.15 |
Table 4. Based on 60 images in the test dataset, the average of each performance parameter of Mask RCNN at different chunk sizes.
| Chunk Size | DSCPL (%) | TPR (%) | FDR (%) | Precision (%) | Accuracy (%) |
|---|---|---|---|---|---|
| 480 × 270 | 52.63 | 85.19 | 61.38 | 38.62 | 98.74 |
| 960 × 540 | 63.26 | 86.31 | 49.05 | 50.95 | 98.94 |
| 1920 × 1080 | 61.92 | 81.56 | 48.37 | 51.63 | 98.88 |
| 3840 × 2160 | 47.16 | 57.05 | 55.28 | 43.05 | 96.81 |
Table 5. The average of the performance parameters of the Mask RCNN and Mask RCNN + CDGFA extraction results when the block size of the test dataset is 960 × 540.
| Method | DSCPL (%) | TPR (%) | FDR (%) | Precision (%) | Accuracy (%) |
|---|---|---|---|---|---|
| Mask RCNN | 63.26 | 86.31 | 49.05 | 50.95 | 98.94 |
| Mask RCNN + CDGFA | 73.95 | 81.75 | 30.72 | 69.28 | 99.15 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
