Sensors
  • Article
  • Open Access

13 October 2023

A UAV Intelligent System for Greek Power Lines Monitoring

1
School of Electrical and Computer Engineering (ECE), Technical University of Crete, 73100 Chania, Greece
2
GeoSense, 57013 Thessaloniki, Greece
3
Hellenic Electricity Distribution Network Operator S.A., 11743 Athens, Greece
*
Author to whom correspondence should be addressed.
This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging

Abstract

Power line inspection is an important task performed by electricity distribution network operators worldwide. It is part of the equipment maintenance of such companies and forms a crucial procedure, since it can provide diagnostics and prognostics about the condition of the power line network. Furthermore, it supports effective decision making in the case of fault detection. Nowadays, the inspection of power lines is performed either by human operators who scan the network on foot and search for obvious faults, or by unmanned aerial vehicles (UAVs) and/or helicopters equipped with camera sensors capable of recording videos of the power line network equipment, which are then inspected by human operators offline. In this study, we propose an autonomous, intelligent inspection system for power lines, equipped with camera sensors operating in the visual (Red–Green–Blue (RGB) imaging) and infrared (thermal imaging) spectra, capable of providing real-time alerts about the condition of power lines. The very first step in power line monitoring is identifying and segmenting the lines from the background, which constitutes the principal goal of the presented study. The identification of power lines is accomplished through an innovative hybrid approach that combines RGB and thermal data-processing methods on a custom-made drone platform, providing an automated tool for in situ analyses rather than offline-only processing. In this setting, the human operator's role is limited to the flight-planning and control operations of the UAV. The benefits of using such an intelligent UAV system are many, mostly related to the timely and accurate detection of possible faults, along with the side benefits of personnel safety and reduced operational costs.

1. Introduction

One of the main maintenance tasks of electricity distribution network operators is power line inspection, since power transmission networks span wide coverage areas and complex terrain, while they are heavily exposed to harsh natural environments, with the hidden risks of defects and line failures threatening the safety and stable operation of the power grid. This task is a crucial step for the early detection of faults prior to damage in the network. Moreover, in the case of network damage, it is necessary to determine the exact location of the fault and take appropriate restoration actions quickly. Regular inspections and timely maintenance involve on-the-ground staff and low-flying Unmanned Aerial Vehicles (UAVs) and/or helicopters. In many cases, power line inspections are still carried out by personnel on foot, which is very time-consuming, complicated (due to the high volume of equipment that must be transported), and prone to human error during visual data collection. The drawbacks of these inspection procedures include human safety hazards due to challenging terrain and weather conditions, as well as delays in fault detection (e.g., in the case of missing power lines), since network inspection by human observers is slow. On the other hand, forward-thinking grid operators adopt manned helicopters equipped with high-resolution cameras for data collection, which proves to be expensive and difficult to scale up. An end-to-end system that combines UAVs, optical sensors, and automated image data analysis using machine learning methods can cover each step of the inspection process in an accurate and robust way and may provide real-time alerts to relevant stakeholders about possible faults along with their exact location.
The potential benefits of adopting drone technology as an attractive alternative for power line inspection also include reduced work time and labor costs, access to hard-to-reach areas, availability for more frequent monitoring, an improved overall carbon footprint, a reduced complexity, an increased reliability, platform portability, adaptability, and expandability through the incorporation of different sensors and data sources, along with focus on different segments of the power grid to detect multiple types of defects [1].
According to the “Drones in Energy Industry Report 2022” [2], the commercial drone market will globally reach USD 41.3B by 2026, with UAVs in the energy industry making up the biggest percentage of the corresponding market (estimated as approximately USD 6 billion), revealing the potential and challenges of this edge technology with direct applications in the inspection of oil, gas, electricity systems, and other critical infrastructures. The industry-wide shift towards renewable energy along with the direct need to monitor extended frameworks to link solar and wind parks to power grids is another ongoing challenge and potential of drone technology.
The use of UAVs equipped with Red Green Blue (RGB) and thermal camera sensors may help in developing an effective fault detection procedure. The independent operation of a UAV on pre-defined routes and the ability to analyze, in real time, the thermal and RGB optical data of the power lines in situ are challenging, since only a few similar attempts have been reported. Our work focuses on the methodologies for analyzing optical data collected by HEDNO S.A. (Hellenic Electricity Distribution Network Operator S.A.) in both the Athens and Chania areas in Greece. The terrain inspected is quite diverse, with non-uniform wild vegetation covering the power lines throughout the video recordings. The outputs of the presented methodologies are the structure and exact location of power lines, with the execution speed being high enough to enable real-time, in situ processing and inspection procedures.
The primary objective of our study is to accurately locate electrical power lines. In the current status, we identify the absence of a power line as a fault. Furthermore, we exploit the meta-level information on the existence of three consecutive lines as an indicator of normal power line concatenation. At this point, we emphasize that other types of faults on the lines, such as the existence of foreign objects or irregular wire formations, result in excessive heat generation or irregular temperature profiles, so detecting such faults becomes feasible through the additional consideration of data/images from the thermal camera.
Recent quality review works [3,4,5] indicate the limitations, challenges, advances, trends, and prospects of the application of UAVs in the electrical industry and monitoring applications in general. Based on the conclusions extracted from the targeted study of this referenced literature, the contribution and novelty of our work can be summarized in the following:
(a)
The study proposes a joint approach of algorithms processing RGB and thermal video sequences to detect the presence of power lines and their exact geo-locations. This enables human operators to carry out repairs and maintenance work in a more timely and efficient manner, while decreasing safety hazards.
(b)
An improved version of our previous work [6] is presented, incorporating optical information from the infrared spectrum (apart from the visible one) and additional training from other real-scenario datasets in order to remove artifacts and outliers from the output images, leading to an even more robust and accurate line detection methodology.
(c)
A carefully designed methodology is adopted for drone-based data capturing through vigilant flight planning and vehicle navigation, taking into consideration the power line network surroundings and geo-location mapping of the pylons for executing missions under pre-loaded routes in the ground station, which is extremely important in mountainous areas, where high elevation differences between lines and wind corridors can complicate flights.
(d)
A custom-made drone architecture is developed, fusing different kinds of sensors and microcomputer edge technology for advanced in situ and on-board data processing. The developed prototype is among the very few devices to combine both visual light and infrared cameras on a robust quadrotor vehicle, able to operate with an increased payload relatively close to the power grid, even under moderate wind speed conditions.
(e)
An adaptive, functionally expandable UAV-based prototype for power infrastructure monitoring is created, since the same hardware setup can be utilized for identifying different electrical components of the power network under a modified algorithmic scheme (i.e., the training of the deep neural model with different data and the utilization of temperature profiles from the thermal camera). In addition, the development and setup of the UAV platform are based on open-source software.
(f)
Benchmark unbiased datasets based on real data under different terrain and environmental conditions are created in both the visual and infrared spectra, providing a unique collection of registered and fully synchronized imagery that can be used to train and test machine learning algorithms and further improve their accuracy and efficiency. Limited open-access datasets fusing thermal and visual data, such as in [7], suffer from a low resolution of images and asynchronous, non-registered image samples for each scene.
The remainder of the manuscript is organized as follows. Section 2 provides a short presentation of the related work on similar applications using either RGB or thermal data. The specifics of the proposed work are detailed in Section 3, while our results from two different case studies are presented in Section 4. Finally, the conclusion of this study is presented in Section 5.

3. Proposed Methodology

In order to perform power line network inspection operations, several processes and tasks need to be completed. Our proposed framework combines artificial intelligence, different kinds of sensors, a custom-made UAV, and a data management platform to cover each step of the monitoring process, ensuring a harmonic and synchronized communication and information exchange through and between each stage.
To obtain the binary mask required for segmenting the power lines from their background, a combination of two distinct processing methods is employed. One method involves the processing of RGB data, while the other focuses on the processing of thermal data. Initially, a binary mask is predicted for every power line image via a trained deep neural network. The thermal image is processed based on the Hough Transform for the detection of power lines, and the binary output obtained from this processing technique is employed as a complementary component to the RGB processing in order to improve the line detection accuracy. To achieve temporal consistency of the image information, both image registration and synchronized sensor triggering are required. The latter is performed through the drone navigation software of the ground control station, while the former is algorithmically achieved through image interpolation at a common image resolution (the higher one, i.e., that of the RGB camera) and image registration to ensure a commonly viewed image scene, based on the a priori known sensor topology (distance of sensor centers along all three axes of real-world space, flight height indicating distance from the ground) and characteristics (field-of-view and pixel size of each sensor). A more detailed description of the image registration procedure developed in the proposed framework is available in Appendix A.
The RGB binary output precisely delineates the lines of interest, but artifacts are also present in the image, such as small regions or gaps between the lines. On the other hand, the segments extracted in the binary thermal image cover a wider area, resulting in thicker yet connected lines, more numerous than the actual power lines and potentially spanning various directions. This difference in thickness can be exploited as a post-processing filtering mechanism for non-connected components and noise.
The fusion of RGB and thermal binary images is achieved by combining the binary masks resulting from the two procedures using a logical AND operator. Since only those areas where potential power lines are present in both the RGB and thermal images contribute to the final output, the pixel-wise logical AND operation helps in reducing artifacts in the background.
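The pixel-wise fusion described above can be sketched in a few lines of NumPy; the array contents below are purely illustrative (the actual masks come from the D-LinkNet and Hough-based outputs):

```python
import numpy as np

def fuse_masks(rgb_mask: np.ndarray, thermal_mask: np.ndarray) -> np.ndarray:
    """Pixel-wise logical AND of two binary masks (values 0/1).

    A pixel survives only if both modalities flag it as a power line,
    which suppresses background artifacts present in one modality only.
    """
    return np.logical_and(rgb_mask > 0, thermal_mask > 0).astype(np.uint8)

# Tiny illustration: the RGB mask contains a spurious pixel (top-left)
# that the thermal mask does not confirm, so fusion removes it.
rgb = np.array([[1, 0, 1],
                [0, 1, 1],
                [0, 0, 1]], dtype=np.uint8)
thermal = np.array([[0, 0, 1],
                    [0, 1, 1],
                    [0, 1, 1]], dtype=np.uint8)
fused = fuse_masks(rgb, thermal)
```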
In addition, image morphology can be employed to deal with any slight gaps that might have occurred in the binary line structure during the entire segmentation process. More specifically, the two basic mathematical operations, opening and closing (under a five pixel-sized and square-shaped structuring element parameter setup), are utilized in sequential mode to first remove small noise fragments and finally bridge minor gaps in the structure, while maintaining the form and shape of the line.
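The sequential opening-then-closing step can be illustrated with a minimal, dependency-free sketch. In practice a library routine such as OpenCV's `cv2.morphologyEx` with a 5 × 5 square kernel would be used; the pure-NumPy loops below exist only to make the two operations explicit:

```python
import numpy as np

def dilate(mask, k):
    """Binary dilation with a k x k square structuring element (zero-padded)."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.zeros_like(mask)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def erode(mask, k):
    """Binary erosion with a k x k square structuring element (zero-padded)."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.zeros_like(mask)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def open_then_close(mask, k=5):
    """Opening (erode, then dilate) removes small noise fragments;
    closing (dilate, then erode) bridges minor gaps, matching the
    sequential post-processing described above."""
    opened = dilate(erode(mask, k), k)
    return erode(dilate(opened, k), k)
```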
The synthesis of RGB and thermal images, enhanced by the application of morphological operators, enables the effective and accurate identification of power lines, by combining the advantages of each segmentation method and imaging modality while overcoming their limitations. In Figure 1, a flowchart of the proposed method is presented.
Figure 1. Flowchart of proposed segmentation methodology.
The proposed algorithmic framework focuses on the detection of power lines and not of other types of electrical equipment; extending it to such equipment is part of the future improvement of the presented study and will be attempted using other image segmentation approaches. Towards this direction, the flight plan along with the camera topology are specifically designed so that the captured image data satisfy the following requirements:
  • The drone is flying over the power network at a relatively close distance (~15–20 m above the ground), so that power lines are clearly visible and positioned as close as possible to the center of the captured image.
  • The drone speed is relatively slow to facilitate time-efficient video processing across the entire route without “empty” and “unprocessed” segments of the power network, while pylon Global Positioning System (GPS) coordinates are loaded into the flight mission plan to enable the smooth navigation of the vehicle.
  • The camera sensors are positioned vertically with respect to the ground under a gimbal topology.

3.1. RGB Data Processing

3.1.1. Architecture

The D-LinkNet architecture is the most appealing method for the purpose of this work, since it demonstrates excellent performance on image segmentation tasks, especially for linear structures. D-LinkNet consists of three main parts: an encoder, a center part, and a decoder, as can be seen in Figure 2. The major advantage of this network is the dilated convolutional layers in its center part. ResNet34 [31], pretrained on ImageNet [32], is used as the encoding part. The decoder part remains the same as in the LinkNet architecture [33]. The center part contains dilated convolutions, in both cascade mode and parallel mode. The dilation rates of the dilated convolution layers are 1, 2, 4, 8, and 16.
Figure 2. D-LinkNet Architecture [34].
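The benefit of the stated dilation rates can be made concrete: with stride 1, each 3 × 3 convolution with dilation d enlarges the receptive field by 2d pixels, so the cascade branch with rates 1, 2, 4, 8, and 16 covers a 63 × 63 context while keeping the parameter count of plain 3 × 3 convolutions. A small sketch of this calculation:

```python
def receptive_field(dilations, kernel=3):
    """Receptive field of a cascade of stride-1 dilated convolutions.

    Each kernel x kernel conv with dilation d adds (kernel - 1) * d
    pixels to the receptive field, which is why D-LinkNet's center part
    (dilations 1, 2, 4, 8, 16) sees large context at low cost.
    """
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

print(receptive_field([1, 2, 4, 8, 16]))  # cascade branch of the center part
```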

3.1.2. Datasets, Equipment, and Data-Capturing Framework

In preparation for an inspection and data-capturing flight, available Geographic Information System (GIS) data on pylons and terrain mapping are loaded into the UAV ground station to create an optimal drone route path over the area of interest at an approximate height of 5 m above the power transmission lines. In addition, the designed flight plan is submitted to the Civil Aviation Service for approval, in keeping with all the safety rules and regulations.
The proposed custom-made UAV platform is based on a quadcopter framework, capable of carrying increased loads and providing the necessary functionality for the needs of the proposed application, such as increased autonomy, expandability, a proper power supply for microelectronics, smooth navigation, and motor management at low speeds. In addition, it meets all the requirements for operating in accordance with the applicable regulatory framework and can be adjusted to a variety of large infrastructure inspection missions under varying weather conditions. The main processing core of the drone is an NVidia GPU-enabled microcomputer, providing increased time efficiency for in situ data processing and the proper handling of the attached sensors’ information. The flight mission is handled separately by the aerial vehicle’s own microcomputer control unit, which communicates with the ground control station. The optical system consists of a high-resolution RGB camera and a thermal one attached to the drone skeleton under a gimbal topology at fixed, known distances (which enables the registration of the captured imagery), ensuring that both image sensors are always placed parallel to each other and vertical with respect to the ground, independent of the drone flight angle and movement pattern. To ensure video synchronization, the two cameras are simultaneously triggered by the ground station software. The technical specifications of the onboard processing units and camera sensors are summarized in Table 1. The key components of the UAV platform along with the connection topology are illustrated in Figure 3.
Table 1. Technical characteristics of proposed custom-made UAV-based power lines inspection platform sensor load.
Figure 3. Proposed UAV-based power line inspection platform and key components.
Our custom-made vehicle prototype is a long-range, long-endurance quadcopter. It uses four 6S 22,000 mAh lithium polymer semisolid-state batteries, wired as two pairs in series with the two resulting 12S sets connected in parallel, giving a total capacity of 44,000 mAh at 12S (50 V). With the existing payload of the dual cameras, synchronized camera triggers, four processing units in total, video converters, and a number of DC-to-DC Power Supply Units (PSUs) for all the processing units, the UAV platform achieves an average consumption of 40–45 A, which gives it a flight time of roughly 50 min or slightly more, covering a distance of nearly 10 km at a low speed of 2.5 m/s.
The Ground Control Station (GCS) uses a 2.4 GHz Industrial, Scientific, and Medical (ISM) band link to the UAV, which carries the actual remote control of the vehicle, a Mavlink stream for telemetry data, and a live high-definition (HD) video stream. The video link is capable of 720p at 30 fps or 1080p at 30/60 fps, and currently streams 1080p at 30 fps. In addition, the handheld GCS Bluetooth/WiFi/GPS module is used to stream the Mavlink data to Mission Planner 1.3.84, a laptop GCS software, so that the operator’s spotter also receives detailed information on the UAV performance.
The UAV outputs pulse-width modulation (PWM) and Mavlink commands for camera control. Because the FLIR thermal sensor is PWM-activated and the SONY visual light camera can be activated over its Multiport using the Precision Time Protocol (PTP), we use a single PWM signal for both triggers and tune the PTP trigger so that both cameras start simultaneously. In testing so far, the trigger command is issued from the handheld GCS, with an average reaction time estimated at 300 ms. It is noteworthy that a fully custom, non-commercial cable was built, along with the corresponding firmware, for the connectivity of the RGB image sensor to the main drone computer board, in order to achieve the dual-camera functionality and synchronization.
To achieve good detection results in diverse terrain, we choose to train the D-LinkNet model on datasets that contain power lines in both mountain and urban scenes. The datasets, named “Power Line Dataset of Mountain Scene” (PLDM) and “Power Line Dataset of Urban Scene” (PLDU), were initially introduced in [10] and are available online for free. To further enhance the training process, additional data are used, resulting in a dataset of 771 training samples and 185 testing samples in total, all presenting a “healthy” power network of 3 lines. These data are provided by the HEDNO S.A. (Hellenic Electricity Distribution Network Operator S.A.) department that administers the network in the Chania area, Crete Island, Greece. The proposed algorithmic scheme focuses on the accurate segmentation of power lines; however, an easy-to-use fault detection rule is incorporated into the monitoring procedure to produce alarm messages when only one or two power lines are detected.
As an additional improvement of the training stage, data augmentation strategies are utilized to artificially expand the data availability. Data augmentation is performed dynamically, creating varied versions of each image in the dataset during each epoch of training. Specifically, during each epoch of the training process, every individual image is subjected to a series of augmentation techniques, resulting in an augmented dataset that is 771 times larger than the original dataset. The techniques used include random rotation, flipping, zoom, contrast, and brightness adjustment. Table 2 provides detailed information about both the training and testing datasets.
Table 2. Dataset details.
Enhancing the dataset with the use of a data augmentation technique substantially increases the diversity within the training data, which enables the model to identify lines in a broader range of conditions, significantly improving its performance on unseen data and mitigating the risk of overfitting.
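A simplified per-epoch augmentation step, consistent with the techniques listed above, might look as follows. Rotation and zoom are omitted here since they require interpolation, and the parameter ranges are illustrative rather than the paper's actual settings; note that geometric transforms must be applied jointly to the image and its mask so labels stay aligned:

```python
import numpy as np

def augment(image, mask, rng):
    """Apply one random combination of flips and photometric jitter.

    Flips transform image and mask together (label alignment);
    brightness/contrast changes touch the image only. Probability
    and jitter ranges are illustrative assumptions.
    """
    if rng.random() < 0.5:                      # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                      # vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    contrast = rng.uniform(0.8, 1.2)            # contrast jitter
    brightness = rng.uniform(-20, 20)           # brightness jitter
    image = np.clip(image.astype(np.float32) * contrast + brightness, 0, 255)
    return image.astype(np.uint8), mask
```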

3.1.3. Segmentation Process

We propose segmenting the power line images using a grid approach. D-LinkNet divides the input image into a grid and predicts an output mask for each grid cell separately. In the current study, we test a 4 × 5 grid approach. As the grid becomes coarser, the thickness of the detected power lines increases; with a finer grid, the detected lines are thinner and closer to their actual size. The grid size can be adjusted based on the distance between the UAV camera and the power lines that need to be identified.
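The grid-based inference can be sketched as follows, with `predict_cell` standing in for the trained D-LinkNet forward pass (the function name and the assumption that the frame dimensions are divisible by the grid size are illustrative):

```python
import numpy as np

def grid_predict(image, predict_cell, rows=4, cols=5):
    """Split the frame into a rows x cols grid, run the model on each
    cell, and stitch the per-cell masks back into a full-frame mask.

    `predict_cell` stands in for the trained segmentation network;
    height/width are assumed divisible by rows/cols for simplicity.
    """
    h, w = image.shape[:2]
    ch, cw = h // rows, w // cols
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            cell = image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            out[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw] = predict_cell(cell)
    return out
```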

3.1.4. Implementation

The network was implemented in Python using PyTorch, on an NVIDIA GeForce GTX 1650 Ti GPU. A learning rate of 0.001 was set, an optimal value for achieving steady training progress. To further fine-tune the model, the Adam optimizer was employed. The Binary Cross-Entropy (BCE) loss function was used to gauge the error between the prediction output and the provided target value. The model was trained for six epochs, which was enough for it to accurately recognize the structure of the power lines. Furthermore, ResNet18 was adopted as the encoder in D-LinkNet.
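For reference, the BCE objective used above can be written out explicitly; the NumPy function below is a stand-in for `torch.nn.BCELoss` and assumes the predictions are probabilities in [0, 1]:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary Cross-Entropy between predicted probabilities and a binary
    ground-truth mask; predictions are clipped away from 0 and 1 for
    numerical stability, mirroring standard library implementations."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))
```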

3.2. Thermal Data Processing

An analysis of the thermal images for extracting power lines is performed by applying the Probabilistic Hough Transform [35,36], a well-established, effective, and robust approach for detecting known shapes that can be represented through mathematical formulas. Compared to the standard Hough Transform, the Probabilistic variant constitutes an improved version capable of identifying both the start and end points of line segments, allowing for a more accurate detection of complex shapes in images and continuous line following, connecting gaps and holes between extracted segments. In our proposed implementation scheme, the Transform is applied to a thresholded output of the V component of the Hue Saturation Value (HSV)-transformed instance of a thermal drone-captured image sequence. The parameter setup of the algorithm is as follows: (a) distance resolution of the accumulator = 1 pixel; (b) accumulator threshold parameter = 50, meaning that only lines receiving more votes than the threshold are returned; (c) angle resolution of the accumulator = π/180 radians; (d) minimum line length = 200 pixels, meaning that line segments shorter than this value are rejected; and (e) maximum allowed gap between points on the same line to link them = 10 pixels. The resulting image of this algorithmic stage represents the ideal, long line segments present in the scene and serves as an indicative guide for smoothing the contours, connecting the gaps, and removing the outliers present in the RGB deep-learning-based output of the previous processing step.
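The preprocessing that feeds the Transform can be sketched as follows; the actual line extraction would then call, for example, OpenCV's `cv2.HoughLinesP` with the parameters listed above (the threshold value here is an illustrative assumption, not the paper's tuned setting):

```python
import numpy as np

def value_channel_threshold(frame, thresh=200):
    """Extract the HSV Value channel (per-pixel max over the color
    channels) and binarize it. The resulting mask is the input for the
    Probabilistic Hough Transform, e.g.:
      cv2.HoughLinesP(mask * 255, rho=1, theta=np.pi / 180,
                      threshold=50, minLineLength=200, maxLineGap=10)
    """
    v = frame.max(axis=2)              # HSV V component
    return (v >= thresh).astype(np.uint8)
```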

4. Results

At first, the proposed method was evaluated on multiple frames extracted from the UAV-captured videos, provided to us by the HEDNO S.A. department. To measure the performance of the approach, 35 video frames were annotated using the LabelMe annotation tool [37], which is available online for free. All the annotated images were converted into binary ground truth masks. A sample frame of the videos acquired by HEDNO S.A. is shown in Figure 4, along with its corresponding generated ground truth mask.
Figure 4. HEDNO S.A. dataset sample, (a) video frame, and (b) corresponding binary mask.
Table 3 presents metrics, such as Accuracy, Precision, Recall, F1-Score, and Specificity, which are calculated using Equations (1)–(5), respectively, to demonstrate the effectiveness of the method after validation on four different real-scenario datasets containing 35 images under different terrain and lighting conditions.
Accuracy = (TN + TP)/(TN + FP + TP + FN)
Precision = TP/(TP + FP)
Recall = TP/(TP + FN)
F1-Score = 2 (Recall · Precision)/(Recall + Precision)
Specificity = TN/(TN + FP)
Table 3. Model performance metrics.
The accuracy measure evaluates the overall efficiency of the model’s predictions by considering both positive and negative samples. The F1-score is an index used to measure the predictive performance of a model; it combines precision and recall, two otherwise competing metrics. Specificity measures the model’s ability to correctly identify negative samples out of all the actual negative samples, focusing on minimizing false positives. TP, TN, FP, and FN stand for true positive, true negative, false positive, and false negative, respectively. These values are calculated using confusion matrices.
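Equations (1)–(5) can be computed directly from the pixel-wise confusion counts of the predicted and ground truth binary masks, e.g.:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute Equations (1)-(5) from pixel-wise confusion counts of two
    binary masks (1 = power line, 0 = background)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)
    tn = np.sum(~pred & ~target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tn + tp) / (tn + fp + tp + fn),     # Eq. (1)
        "precision": precision,                          # Eq. (2)
        "recall": recall,                                # Eq. (3)
        "f1": 2 * recall * precision / (recall + precision),  # Eq. (4)
        "specificity": tn / (tn + fp),                   # Eq. (5)
    }
```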
The results in Table 3 show that the proposed method combining both RGB and thermal processing outperforms the single-modality RGB processing that utilizes the trained D-LinkNet for binary mask generation. The higher accuracy indicates that the combined processing provides a higher overall correctness in its predictions. The lower precision of the standalone D-LinkNet model suggests that it is more likely to produce false positives, which is indicative of artifacts or noise present in its predictions. The presence of artifacts and noise in the predicted binary masks is evident upon a visual inspection of Figure 5, which displays the generated outputs of both D-LinkNet alone and D-LinkNet combined with thermal processing. Examples of clearly detected power lines using our proposed methodology are depicted in Figure 6. Apart from detecting power lines, the overall implementation framework counts the number of parallel lines detected in the scene and, in the case of missing one(s), produces an alert/notification of fault presence due to cable damage. By matching the timestamp of each defect detection event from an online/offline video analysis with the log file of the flight (containing, among other data, the GPS recordings), the exact geolocation of the “faulty” power line network segment is recovered, which is crucial for proper and immediate actions by stakeholders. It is important to mention that the average processing time for each frame, with the code deployed on the GPU-enabled board in preliminary testing, is about 2.3 s, revealing the potential of the proposed power line inspection methodology for in situ and/or on-board assessments.
Figure 5. Power line detection results on a sample video frame, (a–f): (a) RGB video frame, (b) thermal video frame, (c) ground truth image, (d) D-LinkNet output mask, (e) proposed method output mask, and (f) final segmented frame.
Figure 6. Examples of detected power lines using proposed methodology.
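The timestamp-to-geolocation matching described above can be sketched as a nearest-fix lookup in the flight log; the tuple-based log format and function name below are illustrative assumptions, not the actual log schema:

```python
def geolocate_event(event_time, flight_log):
    """Map a defect-detection timestamp (seconds) to the nearest GPS fix
    in the flight log, given as (timestamp_s, lat, lon) tuples.

    Returns the (lat, lon) of the closest-in-time record, which locates
    the faulty power line segment for the maintenance team.
    """
    nearest = min(flight_log, key=lambda rec: abs(rec[0] - event_time))
    return nearest[1], nearest[2]
```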

5. Conclusions

The combination of two different data-processing methods applied to the data obtained from UAVs was shown to improve fault detection results. Specifically, the two processing methods act in a complementary manner: one method (RGB processing) accurately identifies the lines in detail, while the other successfully eliminates incorrect artifacts of the processing. Through the proposed custom-made UAV platform and integrated optical data analysis framework, a robust, accurate, cost-effective (both in terms of intelligent drone platform development and of service provision costs for end-users on a large-scale, long-term utilization basis), and adaptive tool for power line inspection was developed and validated, revealing its potential for automated assessments in the field. Regarding time consumption, preliminary tests on the execution time of the initial version of the Python code developed for the proposed power-line-monitoring framework revealed an average time of 100 s per frame on a CPU (Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz), 14 s per frame on a Field Programmable Gate Array (FPGA, Xilinx KV260) module, and 2.3 s per frame on the proposed GPU-enabled microcomputer system (Nvidia Jetson AGX Xavier, Nvidia, Santa Clara, CA, USA), revealing the potential of our system for in situ data analysis applications. Our study, based on drone inspection and fused optical data analysis, built and provides a dataset of historical, unbiased imagery records, facilitating the identification of critical areas and enabling the study of power grid status alterations over time and the quick dispatch of service teams upon fault event detection.
The challenges of applying and adopting this technology include: (a) involving licensed and trained personnel who are officially familiar with Federal Aviation Administration (FAA) drone regulations, (b) intensive electrical safety training when dealing with high-voltage electrical networks, (c) the proper calibration and usage of camera sensors to capture quality imagery data, and (d) adaptation to the coming era in which flight regulations are relaxed, allowing drones to cover bigger areas and eventually operate largely autonomously, leading to Beyond Visual Line Of Sight (BVLOS) flights under limited human intervention and at low altitudes over pipelines and power lines. Future improvements of our work include decreasing the processing time through parallelization and code optimization, thermal image processing based on temperature profiles extracted through properly selected raw image formats, and fault detection on additional components of power transmission networks other than power lines.

Author Contributions

Conceptualization, G.P., M.Z. and K.M.; data curation, V.P. and D.R.; methodology, A.T., G.L., K.M. and M.Z.; software, A.T. and G.L.; validation, A.T., G.L., K.M. and G.P.; writing—original draft, A.T., G.L. and K.M.; supervision, K.M. and M.Z.; project administration, G.P. and M.Z.; resources, G.P., V.P. and D.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T2EDK-03595, AdVISEr).

Data Availability Statement

Data are unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. RGB and Thermal Image Registration Procedure

Figure A1. Topology of RGB and thermal camera sensors.
Figure A2. Generalized illustration of common field of view area between dual-camera sensors.
Figure A3. Simplified illustration for estimating common field of view area between dual-camera sensors.

