1. Introduction
Visual ground aids are used to guide and control aircraft movements. International Civil Aviation Organization (ICAO) documents define the parameters and the layout of indicators, markings, lights, signs, and markers at the airport [1,2]. The correct functioning of visual aids greatly contributes to the safety of airport operations. The loss or malfunctioning of lights may mislead pilots during touchdown and cause a departure from the correct landing path. Maintaining the technical condition of these devices is one of the most important tasks of airport support services. The maintenance actions are separated into ground control carried out by technical services and air control carried out using flight inspection systems.
The most demanding task is the inspection of PAPI (precision approach path indicator) lights, which provide the pilot with information about the descent path during landing. The descent slope angle safeguarding a correct landing is signalled within 1/2 degree of the nominal angle, which usually equals 3°. The inspection procedure must enable a reliable determination of the aiming angle of the lights with an error smaller than ±0.085° [2]. The observation of the light colour transition zone is equally important, as it signifies the technical condition of the lights. A corrupted transition zone indicates problems, e.g., incorrectly installed baffles in the light unit.
The inspection of PAPI lights requires the determination of the positions of observation points and the calculation of view angles related to these points. The mapping of the positions is a demanding measuring operation involving the use of survey equipment or GNSS-based positioning. Basic GNSS data are inadequate for accurately assessing the positions, and augmentation is applied to enhance the accuracy [3]. Real-Time Kinematics (RTK) is a widely used technique. It is based on measurements of the phase of the signal’s carrier wave in addition to the information content of the signal. It requires a reference station or an interpolated virtual station to provide real-time corrections. As a result, up to centimetre-level accuracy is obtained [4]. The use of reference stations complicates the procedure for calculating the positions and sets higher processing demands for satisfying the short response times that ensure efficient inspections.
The hardware setup includes a GNSS receiver that supports RTK functionality and a high-quality antenna to capture satellite signals with minimal interference and multipath errors.
RTK is based on measuring distances to the satellites with carrier phase. Time is needed to lock to satellites and resolve ambiguities with a set confidence level. In the case of a 99.9% confidence level, the time required can surpass 100 s. The multipath reception of the satellite signals is the main source of error in determining carrier phase; additionally, interference from other communication systems and high levels of ionospheric activities need to be accounted for. Reference stations are used to assess the parameters of the signal paths and provide these as correction data for calculating positions. After locking the receiver, these correction data can be used to improve the calculations of positions, but any disruption of reception requires a re-lock.
There are companies maintaining subscription services based on networks of reference stations providing real-time correction data, which are transmitted via radio or the Internet. The subscription fees can significantly influence the cost of operation of a measuring team.
The solution of a set of linear equations comprising satellite data and correction coefficients is needed to obtain the position coordinates. The number of equations is related to the number of observed satellites and may reach a few hundred when a dozen or so satellites are observed [5].
Certified airport light inspections are carried out with specially equipped aircraft that employ augmented GNSS-based systems for surveying and assessing light conditions. RTK is the first choice for GNSS augmentation. Precise point positioning (PPP)-based augmentation may gain popularity in the near future, as the European Galileo GNSS introduced in 2023 the satellite-based High-Accuracy Service (HAS) supporting PPP [6]. Both services provide adequate accuracy for measuring light properties. Human observers note the light properties during inspection flights. The measurement procedures comply with ICAO regulations.
Inspections carried out using aircraft are subject to the same regulations as airline operations, and at congested airports, they may be a large disruption for air traffic. The introduction of UAV-based inspections, which are also subject to such regulations, is much less disruptive as the inspection periods are significantly shorter. UAV-based inspection systems available on the market rely on augmented GNSS systems for determining the positions of the UAVs in space during missions.
The inspection of approach light systems, runway lights, taxiway lights, and airport beacons includes the detection of lights that have lost more than 50% of their intensity and the identification of their positions in compliance with the airport’s geodesic documentation. The resultant reports are used by maintenance crews to optimize their work, especially at vast airports where access to lights requires covering long distances [7].
Finding the positions and assessing the state of airport lights are the core of light inspection tasks. This paper presents an image processing-based approach to performing the inspection task. PAPI light inspection is detailed to prove the effectiveness of the approach.
The study problem is defined as follows: Is the photogrammetric range imaging technique structure from motion (SfM) suitable for determining light positions? The technique is usually employed in tracking tasks for controlling the movement of autonomous vehicles. In this case, tracking may also be necessary as the UAV camera has limited resolution, so the observed area during the flight encompasses only a small part of the airfield. The following hypothesis is formulated: The sequence of images of lights on the airfield recorded during a UAV flight and the geodesic documentation of the light positions are sufficient for determining light positions.
This paper is structured as follows. Section 2 reviews related works in the domain of UAV-based inspections and image-based support. Section 3 presents the idea of the SfM application for airport light inspection with a discussion of the properties of the processing algorithm’s steps. Section 4 details the results of measuring the angles of colour changes in PAPI lights using the proposed SfM-based procedure. Finally, Section 5 discusses the results and sets goals for future studies.
2. Related Works
The inspection of visual aids, in the case of lights, requires measurements of the parameters of illumination in the space of the airport. Flight inspection systems dominate these assignments. The UNIFIS 3000-G2 Mission System developed by Norwegian Special Mission AS (NSM) represents an example of a modular system that can be installed on different aircraft: Beechcraft, Bombardier, Cessna, Embraer, and Piaggio. The system includes NAVAID flight inspection tools configured by the user for the desired inspection tasks [8]. The Polish Air Navigation Services Agency (PANSA) carries out inspections of air traffic aids and the validation of instrument flight procedures in Poland. The agency carries out periodic, ad hoc, and category inspections and measurements of airport infrastructure: NAVAIDs, navigation light systems, approach light systems (ALSs), runway light systems, and PAPIs [9].
Aerodata developed the AeroFIS flight inspection system with in-house aircraft integration capability. AeroFIS can be coupled to the flight director/autopilot of an aircraft for automated flight inspection procedures; this reduces the flight time for completing inspections [10]. Radiola’s aerospace flight inspection systems have the capability to down-link real-time data. A portable computer serves as the data recording system, providing fast data acquisition and analysis. The solution is advantageous when complex commissioning tasks are performed [11]. Modular and portable CARNAC Automatic Flight Inspection Systems can be installed temporarily on aircraft that also carry out other missions: medical evacuation, training, television, environmental monitoring, etc. The systems are fully automatic [12].
The solutions use multi-constellation multi-frequency GNSS position reference systems with the capability of utilizing satellite-based correction data services for determining the geodetic positions of aircraft during flight inspection missions.
UAVs have recently begun to establish a position as a viable alternative to manned flight inspection systems. The authors in [13] reviewed the potential of UAVs and concluded that developments in this area can effectively supplement or replace conventional inspections. UAVs can also improve the accuracy and consistency of inspections. They can capture high-resolution images of aviation infrastructure, which can be analyzed to detect defects and anomalies. Analysis can be performed using state-of-the-art image processing methods or new algorithms based on emerging AI approaches. UAVs can provide access to hard-to-reach areas of aviation infrastructure. The introduction of these systems faces regulatory and operational challenges that limit the full automation of the inspection process [14].
Inspection services using UAVs for airport lights are provided by a number of companies and successfully compete with traditional flight inspection systems. Canard Drones developed a solution for PAPI calibration that uses differential GNSS positioning. The operation is performed in a few minutes and can be carried out during day- or nighttime [15]. Drone Flight Inspection (DeFI) outputs a graph of the colour transition of PAPI units, providing insight into the functional condition of the units [16]. Colibrex prepared a versatile UAV platform with exchangeable payloads for ILS, VOR, and PAPI inspections [17].
The authors in [18] discussed the problems of preparing software for processing video data collected during UAV flights for determining PAPI light parameters. The principle of operation of these systems is based on registering the images of lights during UAV flights and processing the collected data. The extraction of lights—image objects—must be immune to ambient light changes and explicitly localize the lights on the image. In order to evaluate the space parameters of light illumination, such as the direction of illumination and angles of visibility, precise knowledge of the camera position during the recording is required. Determining light intensity and colour creates the problem of appropriate colour mapping.
The common method of determining the camera position is to incorporate GNSS receivers into the UAV. The reading of the satellite data allows us to locate the UAV in world coordinates within a few metres. This is inadequate for the accurate calculation of light positions, so various augmentation methods are used, with the most popular being RTK [19,20]. RTK is a differential GNSS technique that uses corrections from a base station or reference network. It corrects for errors such as satellite orbit and clock errors, atmospheric delays, and multipath effects. RTK depends on the spatial correlation of these errors between the base station and the receiver and provides high-precision results.
The evaluation of light properties may require special processing means, as the measurements are performed in noisy environments with surplus light sources. The authors in [21] proposed processing images in the HSV colour space instead of the usual RGB and listed the advantages of such an approach. Image processing operations such as edge detection, morphology operations, and the extraction of object features are performed more effectively against a complex airport background.
Structure from motion is a candidate for substituting GNSS-based positioning. The authors in [22,23] reviewed the issues of the successful implementation of SfM. The range imaging technique consists of three main actions: the extraction of features in images and the matching of these features between images; camera motion estimation; and the recovery of the 3D structure using the estimated motion and features.
The results presented in [24] indicate that positioning techniques for image acquisition during UAV missions based on RTK, post-processed kinematics (PPK), and precise point positioning–ambiguity resolution (PPP-AR) achieve sub-decimetre errors. PPP-AR appears to be an alternative to RTK and PPK, which usually require infrastructure, labour, and higher costs.
The fundamental principles and guidelines for developing processing algorithms for these actions are presented in [25]. Based on a sequence of camera images, 3D object coordinates in an arbitrary coordinate system are calculated from the 2D image coordinates of image points. The objects are described using characteristic features tracked in the sequence of images. During this phase, camera parameters describing the position (three shifts) and orientation (three rotations) at which the images were captured are estimated. Adding knowledge about the placement of objects in real-world space coordinates, for calibration, enables the calculation of a corresponding 3D point for almost every image pixel [26,27].
The solution of the camera position estimation problem conditions the effectiveness of UAV use for light inspections. UAV flights are susceptible to weather conditions, especially blowing winds that cause the drone to waver. For the estimation of camera positions, a pair-wise direction estimation method immune to outliers in point correspondences between image pairs was proposed in [28].
The process of recovering the scene and the associated camera poses from a set of images is an iterative nonlinear optimization task with respect to 3D feature coordinates and camera poses. The authors in [29] proposed a pose-only imaging geometry framework and algorithms. The representation is a linear function of camera global translations that does not require nonlinear optimization and allows for efficient and robust camera motion estimation.
3. Materials and Methods
The identification of airport light properties specifically means matching the observed lights to documented lights on the runway and comparing the property data. The task poses the problem of assessing the light parameters and determining the light positions. The result of an inspection provides information for maintenance crews for correcting the operation of a light or carrying out a replacement. Employing unmanned aerial vehicles for inspections enables the efficient scanning of airfield runways without the excessive disruption of airport operations [30]. UAV flights can be carried out at night when there is much less air traffic. Night flights create favourable conditions for assessing light luminosity as sunlight interference is eliminated.
The basic light parameters of colour and luminosity are the subjects of evaluation. Cameras with appropriate sensors for night vision can be used. The determination of light positions requires some means of assessing the space coordinates of the camera attached to the moving UAV. This operation is usually performed using GNSS-based methods with augmentation. Augmentation involves the use of additional sources of information such as reference stations and complicates the process of position calculations.
SfM enables the estimation of space coordinates from image sequences. When the camera moves, lights on the image change positions by different amounts depending on their distance from the camera. The magnitude of these position changes encodes depth information, which is used to generate a 3D layout of the lights on the airport runway.
The correspondence between the images and the reconstructed 3D objects needs to be determined. This correspondence refers to the task of determining which parts of one image correspond to which parts of another image; in this case, the parts are lights described by features. To find correspondence between images, features are tracked from one image to the next.
Successful feature extraction methods such as the Scale-Invariant Feature Transform (SIFT) [31] are applied for describing object properties. In the case of the observed airport lights, a much less computationally demanding approach is employed. UAV flights are carried out late in the day, when the ambient light is substantially less intense. Lights become the outstanding elements of the registered images, and their detection can be realized through elementary processing involving grey-scale morphological operations.
Estimating changes in the positions of lights in a sequence of images requires reliable light tracking. The first step is to extract and mark the lights on the image. Next, the marked lights are tracked in the following images. Depending on the image resolution, the height of the flight, and the speed of the UAV, the lights can move tens of pixels on the image. A careful setup of the filming conditions is necessary for reliable tracking. Large movements impair the calculation of correspondences. Small movements may not be detected, as the UAV flight time is limited by the battery capacity and the permissible period of closing the runway to air traffic.
Finding the scale of projection, and thus the relative positions of the UAV and the observed airport lights, is important for the correct calculation of light coordinates. The airport geodesic data provide reference information about the positions of the lights, which is used for scaling and for calculating the space coordinates of the flying UAV during tracking.
Figure 1 presents an image sample recorded during a UAV inspection flight. Only the centre part of the image is processed, and detected lights are marked with blue rectangles. The size of the image segment used for processing depends on the height of the UAV flight, the view angle of the camera mounted on the UAV, and the camera sensor resolution. During the UAV flight, the lights at the bottom of the image move large distances, thus hindering reliable calculations of light correspondences in consecutive images, whereas the top part of the image contains closely located lights that are difficult to identify separately.
This study focused on measuring the parameters of PAPI lights in compliance with ICAO regulations [1]. The UAV was positioned 300 m downwind of the PAPI system. Vertical scanning was performed, and the operator determined the heights of the upper and lower limits of the transition zone from red to white.
The idea of the processing algorithm (Figure 2) is based on tracking light positions during the UAV flight and collecting the set of parameters of the observed lights. The comparison of the content of this set with the airport documentation results in a report on the state of the airport lights.
The following guidelines were set for the design of the processing approach:
The video material ensures an image pixel size, the “ground sampling distance” (GSD), that enables the explicit differentiation of lights;
The geodesic documentation of the airport is available for use in calculations;
The criteria for assessing the effectiveness were set as follows:
The error of estimating the light positions does not impair the ability to track lights;
The capability to determine the colour and angle of illumination is achieved;
The speed of processing enables the in-flight tracking of the lights.
The guidelines define the measurement field of the UAV and set the necessary camera parameter requirements for performing the inspection.
A category 3 inspection UAV was chosen for the missions with a 50 min flight capability. The UAV was equipped with a 20 Mpx zoom camera with focal length range of 6.83–119.94 mm. The camera can record video with resolutions of 3840 × 2160@30 fps and 1920 × 1080@30 fps and has a sensitivity range of 100–25,600 ISO.
3.1. Camera Calibration
The elevation setting angles of the PAPI lights fall in the range of ±30′ of the nominal approach angle, which typically equals 3°. The UAV mission started d = 300 m from the lights, which gives a change in observation height ranging from h1 to h2:

$$ h_1 = d \tan(3^{\circ} - 30') \approx 13.1\ \text{m}, \qquad h_2 = d \tan(3^{\circ} + 30') \approx 18.3\ \text{m} $$
Lights with a transition zone not greater than 3 minutes of arc in depth are functioning correctly; this indicates that the resolution of observations should be finer than this value in order to follow the transition course. A resolution of 1 minute of arc means registering images every 8 cm of the UAV mission in ascent or descent. Video can be registered with a frame rate of 24 to 120 fps, resulting in maximal permissible vertical speeds of 1.9 to 9.6 m/s. Commercially available UAVs are capable of ascent speeds of up to 8 m/s. The lower the ascent speed, the higher the resolution of registering observation angles. The time duration of the UAV mission, determined by the battery capacity, limits the workable resolution.
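These figures follow from elementary geometry at the 300 m observation distance:

$$ \Delta h = d \tan(1') = 300\ \text{m} \times \tan\!\left(\tfrac{1}{60}^{\circ}\right) \approx 0.087\ \text{m} \approx 8\ \text{cm} $$
$$ v_{\max} = \Delta h \cdot f_{\text{fps}}: \quad 0.08\ \text{m} \times 24\ \text{fps} \approx 1.9\ \text{m/s}, \qquad 0.08\ \text{m} \times 120\ \text{fps} = 9.6\ \text{m/s} $$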
The required field of view (FOV) of the camera was determined using the arrangement of the PAPI and runway lights, which constitute the reference points for calculating the position of the camera. Preliminary tests showed that about 15 reference points were necessary to achieve an error rate that does not significantly distort the desired accuracy of determining the elevation setting angles of the PAPI lights. Standard cameras have fixed focal length or zoom lenses with FOVs in the range of 4°–84°. The observation of at least 15 reference points—runway lights—at height h1 required a lens with an FOV larger than 58°. This gave a view of up to 10 rows of runway lights.
The processing started with the calibration of the camera mounted on the UAV. The goal was to determine the camera parameters, especially in the case of cameras with zoom lenses. The zoom setup was carried out manually by the UAV mission operator. The intrinsic and extrinsic properties of the camera were evaluated; these enabled us to rectify images for the proper mapping of space relations. Instead of “chess board” calibration, a set of images of lights with known positions was collected, and the camera parameters were calculated. The pinhole calibration algorithm was used, which is based on the pinhole camera model [32] and the lens distortion model [33]. Sufficiently accurate results were obtained when at least 20 lights were registered on a few tens of images taken during a short calibration flight. The setup of the camera cannot be changed during the UAV mission.
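A minimal sketch of this calibration step, assuming OpenCV’s standard pinhole routines and that the light-to-image correspondences have already been collected (the function and variable names are illustrative, not the authors’ implementation):

```cpp
// Sketch: camera calibration from images of lights with known geodesic
// positions, based on OpenCV's pinhole + lens distortion model.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

cv::Mat calibrateFromLights(
    const std::vector<std::vector<cv::Point3f>>& lightWorldPts, // per image
    const std::vector<std::vector<cv::Point2f>>& lightImagePts, // per image
    cv::Size imageSize, cv::Mat& distCoeffs)
{
    // Initial intrinsic guess derived from the correspondences themselves.
    cv::Mat cameraMatrix =
        cv::initCameraMatrix2D(lightWorldPts, lightImagePts, imageSize);
    std::vector<cv::Mat> rvecs, tvecs; // per-image extrinsics
    // Refine intrinsics and distortion; >=20 lights on a few tens of
    // images are reported as sufficient in the text.
    cv::calibrateCamera(lightWorldPts, lightImagePts, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs,
                        cv::CALIB_USE_INTRINSIC_GUESS);
    return cameraMatrix; // contains fx, fy, cx, cy
}
```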
3.2. Extraction and Description of Observed Lights
The video stream from the UAV camera is cut into frames—images—constituting the input of the next processing task. The task, executed by the function olamp, is to extract lights and describe their parameters on the current image. If it is the first image, the light database is initiated. It contains light numbers, their colours, and pixel positions on the image. The task is divided into two operations: light detection and light description.
Light detection in the image is based on morphological operations in grey scale. Morphology was chosen as previous works proved its high effectiveness in extracting objects [34,35]. The raw camera image is in the RGB colour space; it is converted to grey scale to speed up the processing. In order to eliminate noise, the image is “smoothed” using a Gaussian filter. A rectangular filter is then used to remove noise potentially linking neighbouring lights. The size of the filter kernel was selected by taking into account the horizontal pixel distances between the lights on the image. One tenth of the distance set as the width and one fifth set as the height of the kernel proved effective for eliminating noise and preserving light edges. Next, dilation is applied, and the morphological gradient is calculated. The result is binarized using locally calculated thresholds, and a set of objects—lights and contours—is obtained.
The set of contours is filtered; only objects larger than a prescribed size and positioned in the image segment significant for detection are extracted.
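A condensed sketch of this detection pipeline, assuming OpenCV; the kernel sizes and threshold parameters are illustrative, not the exact values used in the study:

```cpp
// Sketch: grey-scale morphological light detection as described above.
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>
#include <algorithm>
#include <vector>

std::vector<std::vector<cv::Point>> detectLights(const cv::Mat& bgrFrame,
                                                 int lightSpacingPx,
                                                 double minContourArea)
{
    cv::Mat grey, smoothed, dilated, gradient, binary;
    cv::cvtColor(bgrFrame, grey, cv::COLOR_BGR2GRAY);
    // Gaussian smoothing, then a rectangular averaging filter sized to
    // ~1/10 of the inter-light distance (width) and ~1/5 (height).
    cv::GaussianBlur(grey, smoothed, cv::Size(3, 3), 0);
    cv::blur(smoothed, smoothed,
             cv::Size(std::max(1, lightSpacingPx / 10),
                      std::max(1, lightSpacingPx / 5)));
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::dilate(smoothed, dilated, kernel);
    cv::morphologyEx(dilated, gradient, cv::MORPH_GRADIENT, kernel);
    // Locally calculated thresholds, then contour extraction.
    cv::adaptiveThreshold(gradient, binary, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY, 31, -5);
    std::vector<std::vector<cv::Point>> contours, lights;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours)       // size filter (see text)
        if (cv::contourArea(c) >= minContourArea)
            lights.push_back(c);
    return lights;
}
```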
Each light is described using the image pixel coordinates of the light centre, its size, and its colour. The size is expressed by the number of pixels in the light contour. The number of overexposed pixels is subtracted from the total number of light pixels limited by a rectangle circumscribed around the contour of the light, as shown in Figure 3. This operation prevents a gross distortion of the light’s colour, as the colour is calculated as the mean value of the light pixels. It is coded as (256R + G) × 256 + B, where R, G, and B are the red, green, and blue colour constituents.
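A minimal sketch of the colour description, assuming a per-channel saturation threshold of 250 for classifying pixels as overexposed (the exact criterion is not given in the text):

```cpp
// Sketch: describe a detected light by its mean colour, excluding
// overexposed pixels inside the circumscribed rectangle.
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>
#include <vector>

long describeLightColour(const cv::Mat& bgrFrame,
                         const std::vector<cv::Point>& contour)
{
    const cv::Rect box = cv::boundingRect(contour);
    const cv::Mat roi = bgrFrame(box);
    long sumB = 0, sumG = 0, sumR = 0, count = 0;
    for (int y = 0; y < roi.rows; ++y)
        for (int x = 0; x < roi.cols; ++x) {
            const cv::Vec3b px = roi.at<cv::Vec3b>(y, x);
            // Skip overexposed pixels so they do not distort the mean.
            if (px[0] >= 250 && px[1] >= 250 && px[2] >= 250) continue;
            sumB += px[0]; sumG += px[1]; sumR += px[2]; ++count;
        }
    if (count == 0) return 0; // fully overexposed light
    const long R = sumR / count, G = sumG / count, B = sumB / count;
    return (256 * R + G) * 256 + B; // colour code used in the text
}
```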
3.3. Tracking of Airport Lights
The next step of executing the range imaging technique SfM is the task of estimating the 3D position of the camera—the tracking of the position. This task requires the evaluation of transform coefficients, which indicate the way the UAV changes its space position during the mission. The coefficients are obtained by solving the set of equations describing changes in light positions in the series of images registered during the mission.
The function alamp executes the task of tracking lights and maintaining the light database for evaluating the light conditions. The tracking starts with the operation of computing the first camera position. The camera position is calculated using geodesic data describing the positions of the detected lights and their image pixel coordinates [36,37]. The position is expressed using pixel coordinates or a chosen coordinate system, e.g., EPSG:217x, related to the geodesic documentation of airport lights. To facilitate the calculations, positions are expressed relative to a defined reference point of the airport.
The tracking task is reduced to matching pairs of light positions in consecutive video frames. The camera of the ascending UAV views each light from a different height; this causes a change in the light’s position on the image. The change amounts to a few pixels and does not exceed the distance to another light. The search for a match is therefore restricted to a neighbourhood that contains only the new position of the paired light.
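A sketch of this nearest-neighbour matching step; the search radius is an assumed parameter set below the inter-light pixel distance:

```cpp
// Sketch: pair a tracked light with the nearest detection in the next
// frame, searching only within the restricted neighbourhood.
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

int matchLight(const cv::Point2f& previousPos,
               const std::vector<cv::Point2f>& detections,
               float searchRadiusPx) // set below the inter-light distance
{
    int best = -1;
    float bestDist = searchRadiusPx;
    for (int i = 0; i < static_cast<int>(detections.size()); ++i) {
        const float d = static_cast<float>(
            std::hypot(detections[i].x - previousPos.x,
                       detections[i].y - previousPos.y));
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best; // -1: the light left the neighbourhood (or the frame)
}
```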
The positions of the lights or the camera are determined using Perspective-n-Point pose computation [38]. The set of Equations (1) is solved to find the mapping of the set of airport lights in the airport space [X, Y, Z] to the set of light positions on the camera image [x, y]:

$$ s\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{1} $$

where
fx, fy, cx, cy—camera calibration parameters;
s—scale mapping of airport coordinates to image space;
rij—elements of the rotation matrix of the camera (converted to the vector rvec);
ti—elements of the translation vector of the camera (tvec).
The number of equations is equal to the number of paired lights. At least 3 pairs are needed to calculate the camera position—rvec, tvec. Increasing the number of lights enhances the robustness of the calculation but requires more computing power.
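In OpenCV terms, this step maps onto the Perspective-n-Point solver; the sketch below is illustrative and uses the iterative solver, which in practice expects at least 4 correspondences even though 3 is the theoretical minimum:

```cpp
// Sketch: camera pose (rvec, tvec of Equation (1)) from paired geodesic
// light positions and their image positions, via cv::solvePnP.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

bool computeCameraPose(const std::vector<cv::Point3f>& lightWorldPts,
                       const std::vector<cv::Point2f>& lightImagePts,
                       const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                       cv::Mat& rvec, cv::Mat& tvec)
{
    // Theoretical minimum is 3 pairs (see text); the iterative solver
    // used here needs at least 4, and ~15 are used for robustness.
    if (lightWorldPts.size() < 4) return false;
    return cv::solvePnP(lightWorldPts, lightImagePts, cameraMatrix,
                        distCoeffs, rvec, tvec,
                        false /*no extrinsic guess*/, cv::SOLVEPNP_ITERATIVE);
}
```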
Once the camera position is determined, it is saved in the camera position table. The consecutive positions are used to calculate the light positions in the following images. The calculated light positions are paired with the geodesic documentation of the airport. Paired lights constitute the database for evaluating the light conditions at the airport. Light descriptions are added to the database until the light moves outside the view of the camera during the UAV flight. Changes in the parameters of the light during observation are collected. These changes are the result of changes in the angle of observation of the camera during the flight.
Figure 4 presents an example of detected and described lights. Blue rectangles indicate detected lights, and white numbering confirms the identification. Violet rectangles contain objects outside the detection area. Blue crosses show the calculated positions of lights using the elaborated camera position. The crosses do not fall exactly at the centres of the lights but are sufficiently near to prevent the mispairing of the lights.
Analysis of the course of the change in the light size related to the position in space of the UAV enables the assessment of the angle of illumination of the light. The maximum light size indicates the direction of illumination and can be compared with the airport documentation. Correct angles of illumination secure the safe operation of the airport runways.
During an observation, the colour of the light may change, indicating the operational properties of the light. PAPI lights provide visual descent guidance and change colour when the angle of observation reaches prescribed values; they help pilots maintain the correct glide slope during approach and landing. The lights are observed by pilots along the descent path. There are 4 light units with different light change angles. The colour of the light changes from red to white. The ratio of white to red lights seen by the pilot indicates whether the airplane is on the right glide slope. At the optimum approach angle, the numbers of white and red lights are equal.
3.4. View Angle Calculation
The course of light position changes paired with the camera positions enables the calculation of the view angle changes. The view angle is determined in the vertical plane (αv), parallel with the axis of the runway, and in the horizontal plane (αh), perpendicular to the runway axis.
Figure 5 illustrates the calculation of the view angles: (xk, yk, zk) is the camera position, whereas (xs, ys, zs) is the light position, all in airport coordinates.
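Assuming airport coordinates oriented so that the y axis runs along the runway (a notational assumption consistent with Figure 5), the two view angles follow directly from the camera and light positions:

$$ \alpha_v = \arctan\frac{z_k - z_s}{y_k - y_s}, \qquad \alpha_h = \arctan\frac{x_k - x_s}{y_k - y_s} $$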
The error of evaluating the view angle is related to the errors of determining the camera position and the light position. Light positions are found in the geodesic documentation of the airport and can be treated as reference positions. The error of the camera position is determined by the camera parameters and observation parameters. The resolution of the camera sensor and the focal length of the lens set the ability to distinguish points on the image.
The space mapping of the camera depends on the field of view (FOV) of the camera lens and on the size of the image sensor. If the image sensor is rectangular, the mapping will be different in the planes. The tests were carried out using a camera sensor with square pixels of a size of 1.41 µm. The sensor was rectangular: 7.53 × 5.64 mm. The camera used in the tests was equipped with a zoom lens with a focal length range of 6.83–119.94 mm. The shortest focal length was used during inspection flights.
The field of view of the camera was

$$ \mathrm{FOV} = 2 \arctan\!\left(\frac{b}{2f}\right), $$

where b—sensor width/height, and f—lens focal length.
In the case of this camera, the ranges of view angles were the following:
Horizontally (3.6–58°);
Vertically (2.7–44°).
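Substituting the sensor width of 7.53 mm and the focal length limits of 6.83 and 119.94 mm into the FOV formula reproduces the horizontal endpoints:

$$ 2\arctan\frac{7.53}{2 \times 6.83} \approx 58^{\circ}, \qquad 2\arctan\frac{7.53}{2 \times 119.94} \approx 3.6^{\circ} $$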
The video stream from the camera can have a 4K (3840 × 2160) or 2K (1920 × 1080) resolution. To retain reliable light detection, the processing area of the video frames is reduced in the vertical plane.
Image pixels map approximately rectangular areas of the observed airport. Table 1 sums up the mapping results for a UAV mission at a height of 10 m above ground.
The mapped width of the pixel does not surpass 1/3 of the mean diameter of lights, whereas the height can be 18 times larger. This means that the registered light intensity may be reduced due to unilluminated surrounding space, which is included in the pixel.
The error of evaluating the vertical view angle dαV is

$$ d\alpha_V = \frac{(y_k - y_s)(dz_k - dz_s) - (z_k - z_s)(dy_k - dy_s)}{(y_k - y_s)^2 + (z_k - z_s)^2} $$

Errors dzk, dzs, and dyk are insignificant. The error relative to the calculated angle is

$$ \frac{d\alpha_V}{\alpha_V} \approx \frac{h\,dy_s}{\left[(y_k - y_s)^2 + h^2\right]\arctan\dfrac{h}{y_k - y_s}} \tag{2} $$

where h—height of the UAV flight.
For UAV flights performed 10 m above ground, the error of evaluating the vertical view angle of the ALS and runway lights is in the range of 0.18–0.70% for a video resolution of 3840 × 2160 and doubles for a resolution of 1920 × 1080. In the case of PAPI lights, the errors are almost 2 times smaller and fall in the range of 0.15–0.32% for a video resolution of 3840 × 2160, similarly doubling for a resolution of 1920 × 1080.
The error analysis proves that the proposed inspection procedure enables the evaluation of light illumination angles satisfying the requirements of PAPI calibration [2].
4. Results
The task of measuring the angles of colour changes in PAPI lights was the focus of this study. The proposed processing procedure was implemented using C/C++ (C++20) and OpenCV (v.4.8.0) libraries. The UAV was equipped with an onboard processing unit aided by a 256-core CUDA-based module. The processing resources proved adequate for obtaining measuring results during the UAV flight. Tests with a much less powerful processing unit (Raspberry Pi 4), for use with other UAVs, also proved successful.
The measuring procedure complies with ICAO regulations Doc 9157, Part 4, 8.3.43–8.3.47 [2]. The course of light changes is recorded during UAV ascent at a distance of 300 m from the set of PAPI lights. Transition zones from red to white were determined, and the colour switch angles were calculated.
Figure 6 shows the sequence of colour changes during the ascent of the UAV. The change in colours happens gradually, and the exact transition moment during the flight is not easy to determine. The change in colours is additionally distorted by overexposure at the centre of the light. This effect arises when the camera is used in low light. The effect of overexposure can be reduced by lowering the sensitivity of the camera sensor. However, lowering the sensitivity reduces the size of the image of the light, which may consequently lead to errors in determining its colour. High sensitivities build up noise, corrupting the process of extracting contours.
The reliable automatic detection of colour change is hindered by colour noise when the camera operates in the RGB colour space. Numerous films recorded during the UAV ascent indicate that the colour transition zones are blurred, which gives ambiguous detections. The conversion of the image content to other colour spaces was investigated [39]. The YUV colour space proved to be the most robust to noise. The chroma component V most distinctly differentiates white and red; it is used to detect the colour change.
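A sketch of this colour cue, assuming OpenCV’s COLOR_BGR2YUV conversion and taking the mean V value over a light’s pixels:

```cpp
// Sketch: mean V (chroma) component of a light's image region; V is high
// for red and close to 128 for white, so its course along the ascent
// exposes the red-to-white transition.
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>
#include <vector>

double meanChromaV(const cv::Mat& bgrLightRoi)
{
    cv::Mat yuv;
    cv::cvtColor(bgrLightRoi, yuv, cv::COLOR_BGR2YUV);
    std::vector<cv::Mat> planes;
    cv::split(yuv, planes);          // planes: Y, U, V
    return cv::mean(planes[2])[0];   // mean V over the region
}
```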
The next step is to analyze the view of the lights at appropriate increments of the UAV ascent in order to obtain the required resolution of determining the transition angle. The increment is determined by the speed of the UAV and the filming speed of the camera. The camera works at 30 fps, and filming 300 m from the objects gives a view angle change of 1′ for every 8 cm of ascent. Keeping the ascent speed lower than 2.6 m/s results in an angle resolution finer than 1′.
Crucial for the calculation of view angles is the precision of determining the position of the camera. Perspective-n-Point pose computation is sensitive to the number of points used in calculations. Tests showed that at least 15 reference points were required to achieve a camera position error enabling the calculation of angles with an error smaller than 5′.
Figure 7 presents an exemplary layout of the reference points. Lights 1–4 are the PAPI lights, whereas lights 5–15 are the neighbouring runway lights.
All flights were carried out with the camera horizontal FOV set to 58° and the lens focal length set to 6.83 mm in order to view at least 15 reference lights; filming was performed at 30 fps at a 3840 × 2160 resolution, which gives an error rate of calculating the transition angle in the range of 0.15–0.32%. Sensitivity is set automatically by the camera and can change in the range of 100–25,600 ISO; this enhances the exposure of the lights.
In order to reduce the impact of test flights on the operation of the airport, all tests were carried out late in the day with no daylight. This approach also reduces the effect of weather conditions such as sunny days and cloudy days. The lack of illumination imposes higher demands on the sensitivity of the camera sensor but eases the light detection problem. Detection is not hindered by objects that are not light sources. The problem of stray light sources remained, and the airport light detection area was carefully limited.
Flights 01–09 were carried out with different ascent speeds due to changing winds, and flights 17–19 had variable camera sensitivity settings. All flights were carried out within the flight capability of the used UAV. The UAV had a protection rating of IP55—it cannot fly in heavy rain, withstands winds up to 12 m/s, and operates in temperatures of −20 to 50 °C.
The ascent speed was kept lower than 2.6 m/s, and with changing wind, it varied, reaching values down to 1 m/s, resulting in different video lengths being recorded during the inspection missions. This gives a varying number of frames for image processing, but the number, in all flights, satisfied the requirements for the correct calculation of the colour transition angles of the lights.
Camera sensitivity changes impact the range of values of the light colour components, but the behaviour of the changes is retained. The ambient light conditions changed from sunset to night as the flight missions started when there was no air traffic.
The PAPI lights were tracked during the ascent of the UAV. Camera positions relative to the PAPI positions give the changing view angle. Analysis of the speed of colour changes during the ascent pinpoints the transition angle: it occurs where the speed reaches its highest value.
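A sketch of this peak-of-derivative rule over the recorded series of view angles and V values (names illustrative):

```cpp
// Sketch: locate the view angle at which the V component changes fastest,
// i.e., the maximum of the first difference of V over the view angle.
#include <cmath>
#include <cstddef>
#include <vector>

double transitionAngle(const std::vector<double>& viewAngleDeg,
                       const std::vector<double>& vComponent)
{
    if (viewAngleDeg.size() < 2 || viewAngleDeg.size() != vComponent.size())
        return 0.0; // not enough samples
    std::size_t best = 1;
    double bestSpeed = 0.0;
    for (std::size_t i = 1; i < vComponent.size(); ++i) {
        double dAngle = viewAngleDeg[i] - viewAngleDeg[i - 1];
        if (dAngle == 0.0) continue;
        double speed = std::fabs((vComponent[i] - vComponent[i - 1]) / dAngle);
        if (speed > bestSpeed) { bestSpeed = speed; best = i; }
    }
    // Midpoint of the steepest step approximates the colour switch angle.
    return 0.5 * (viewAngleDeg[best] + viewAngleDeg[best - 1]);
}
```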
Figure 8 and Figure 9 present examples of graphs of colour changes paired with calculated speed changes, that is, the first derivative of the colour change function.
The graphs show the results of test flights carried out under different conditions. Flight 01 shows a set of PAPI lights in good condition. The transitions are clear; only a small region of view angles of lights 3 and 4 indicates some fault in illumination.
The flight 18 results have much smaller values of the V component. The camera sensitivity was reduced for recording images in order to diminish the influence of local illumination sources impeding filming. This may also simulate unfavourable weather conditions such as clouds. Additionally, the graphs of changes in lights 3 and 4 have corrupted courses. In particular, light 4 is faulty in some way.
Table 2 presents the comparison of transition angles measured during UAV flights with airport-certified PAPI data. The certified data are the results of flight inspections with manned aircraft carried out at the airport by the Polish Air Navigation Services Agency (PANSA).
Observations during tests indicate that changing wind conditions causing the tilting of the UAV give rise to randomly changing errors. The errors of determining colour transition angles during flight speed changes are not related to a particular ascent speed. The measuring errors, especially during calm wind conditions, are stable. Missions were carried out using category 3 UAVs, which are sensitive to wind conditions due to their low weight of less than 25 kg.
Reduction in camera light sensitivity decreases the error of determining the transition angle. The optimization of the settings must take into account the ambient light conditions at the airport.
The mean error of determining the transition angles during the tests does not exceed 0.1 degrees in comparison with certified measurements—see Table 3. Reference values for the airport differ slightly, and in this case, the discrepancy rises to 0.16 degrees. PAPI units 3 and 4 have distorted colour transitions, which introduces extra errors.
Extended statistical analysis is not justified due to the small number of tests carried out. The tests prove that changes in ascent speed and light conditions do not significantly change the error rates, and all results fall within ICAO regulations. The weather and lighting conditions during the tests are common for airport maintenance actions.
No tests were carried out in adverse weather conditions, as UAV flight regulations do not permit missions in such weather due to operator safety concerns. The introduction of the designed PAPI light colour transition angle measurement procedure into measurement practice will require a UAV mission consisting of several ascents in order to collect a statistically significant set of results. The error of evaluating the vertical view angle, i.e., the colour transition angle (Equation (2)), obtained using this procedure satisfies the requirements of the ICAO regulations.
The procedure can be applied to the inspection of ALS and runway lights; in this case, the UAV mission is a horizontal flight along the centre line of the runway starting a few hundred metres from the first crossbar. The light-tracking function is crucial for the inspection. Correct tracking requires a low, stable flight speed, which yields a large number of images and enhances the robustness of the light position calculations.
The goal of ALS and runway light inspection is to identify the positions of broken-down lights or lights with significantly lower illumination values. The accuracy of determining positions can be lower than in the case of PAPI inspection.
Figure 10 illustrates the results of determining light positions in two example missions run in different weather conditions. The light line-up deviates from the reference positions but keeps an unambiguous direction, differing by a small angle from the reference line of positions.
Table 4 gives a summary of the detection results. The example missions, as in the case of PAPI, were carried out at night. All operating lights were successfully detected. The mission results include the registration of illumination changes as the observation angles of the light change during the UAV flight. Additional analysis of the light illumination changes during the observations can be performed to assess the angles of illumination.
5. Discussion
The working hypothesis—that the sequence of images of lights on airfield runways recorded during a UAV flight and the geodesic documentation of the light positions are sufficient for determining light positions—is confirmed. The photogrammetric range imaging technique is suitable for determining airport light positions. The results of the application of the technique in the demanding procedure of assessing the parameters of PAPI lights show that the positioning errors do not hinder the assessment. The errors of determining the colour transition angles of the PAPI units fall within the ranges set by aviation administration documents [2].
The tests show that reliable measurements of PAPI light parameters require a set of measurements in order to alleviate errors due to changing UAV flight and camera parameters. UAVs are sensitive to wind conditions, which cause random deviations in the mission parameters. Mission planning plays an important role in collecting measurement data for obtaining accurate inspection results.
6. Conclusions
The idea of UAV flights after daylight proves advantageous, but a careful setup of the camera sensitivity is needed in order to retain the characteristic colours of lights. Background noise, that is, images of objects that do not make up the airport light system, is eliminated, significantly reducing the effort of extracting lights.
The proposed procedure can be applied to the inspection of ALS and runway lights. The light-tracking operation enables the effective inspection of large sets of lights. The availability of the airport geodesic documentation is important for proper calculations, as its precision conditions the operations of determining camera positions. Although the tracking accuracy may be worse than in the case of PAPI lights, positioning must still enable individual lights to be distinguished, as with PAPI lights.
UAV flight practice shows that the efficient execution of the procedure steps requires experienced operators; a way to avoid this demand is to automate the inspection. The optimization and automation of the inspection is an important goal for future studies.
7. Patents
System for evaluating the lighting parameters of airport lights using an unmanned aerial vehicle (UAV), patent pending P.440481, Patent Office of the Republic of Poland.