1. Introduction
Due to its significance in both civil and military fields, the airport has gained increased attention in recent years [1,2], and methods for detecting and extracting airports have been widely developed, such as methods based on visual saliency [3,4,5,6,7,8] and methods based on deep learning [2,9,10,11]. Meanwhile, runways play a fundamental role among the facilities of an airport and are considered the most important feature in airport detection [3,12]. However, the vector or raster data of runways are difficult to obtain, because these data are managed mainly by government sectors or military institutions. In addition, the overall accuracy of runway detection and extraction has been limited in existing research. Furthermore, with the construction, reconstruction, and expansion of airports around the world, major changes have taken place in airports, such as the construction and relocation of runways, the construction of terminal buildings and aprons, and so on. Although these changes, especially semantic changes of runways, can support decision-making in relevant departments, research on airport runway change analysis has rarely been explored, because the accuracy of some existing change detection methods [13,14] was not very high and these methods mainly focused on changes of general objects, such as forest into farmland, rather than finer changes within a typical object.
On the one hand, the increasing availability of very high-resolution (VHR) optical satellite images provides an opportunity to characterize and identify the location and spatial information of runways in an airport more clearly, but it also brings in more complex background information, including taxiways, aprons, roads, and farmland. Naturally, these background objects, with characteristics similar to those of runways, pose great challenges to runway detection and extraction. This paper divides the research on runway detection and extraction into three categories: methods based on basic characteristics of runways, such as line features [15] and texture features [12,16]; methods based on airport knowledge [17]; and methods based on computer vision and deep learning [18]. Generally, the first type is the most commonly used in runway detection and extraction, because these basic features are most prominent in high-resolution images. However, it may lead to a high false detection/extraction rate due to insufficient knowledge of airports. In recent years, deep learning methods have been widely applied in various fields, such as aircraft detection and runway extraction. Although they have increased the accuracy of runway detection/extraction, further improvements are usually limited by the lack of large and diverse publicly available VHR runway datasets and by the time-consuming manual labeling of runway data samples.
On the other hand, change detection using remote sensing (RS) data exerts great influence in detecting changes on the Earth's surface and has been applied in many fields [19], especially land use/land cover (LULC) change [20,21]. Furthermore, change detection has made great progress, from traditional methods, such as Markov Random Field (MRF) [13], Conditional Random Field (CRF) [22], Level Set [23], Random Forest (RF) [14,24], and others [25], to deep learning methods [26,27]. However, detecting airport runway changes is not easy because: (1) there is a lack of public RS datasets of runway changes; (2) the complex background of airport runways, such as farmland, rivers, terminal buildings, and aprons, makes it difficult to identify runway changes among various changes; and (3) change detection methods may not perform well when the runway ends move but the location and orientation of the runways remain almost unchanged.
Therefore, to obtain accurate results of airport runway changes, the main ideas and contributions of this paper can be summarized in three aspects: (1) building a systematic airport knowledge base to understand the spatial definition, characteristics, and change types of runways; (2) proposing an accurate runway extraction method using multiple techniques and algorithms, such as saliency analysis, line segment detection, grayscale template matching of the chevron markings, and runway edge line grouping; and (3) obtaining semantic information and vector results of runway changes by implementing runway change analysis on the runway extraction results of bi-temporal airport images. Finally, six VHR airport images with different background objects and runway structures were selected to test the effectiveness and accuracy of runway extraction, and two sets of airport images taken at different times were used to validate the final accuracy of our method.
The rest of this paper is organized as follows. In Section 2, we first introduce the airport knowledge base, then describe the proposed methods of runway extraction and runway change analysis in detail, and finally present the datasets and evaluation metrics. The parameters used in the experiments, the experimental results, and comparisons with state-of-the-art methods are presented in Section 3. Key and difficult points of our method, as well as some limitations, are discussed in Section 4. We conclude the paper with some important findings and our future work in Section 5.
3. Results
3.1. Experimental Parameters
Due to the large size of airport images, the salient binary map of the airport was downscaled to approximately 1 m spatial resolution to improve the speed of subsequent runway extraction. To obtain more accurate runway extraction results, the similarity threshold T for grayscale template matching of the chevron markings (Section 2.2.3) and the length thresholds L1 and L2 for line segment detection based on the Probabilistic Hough Transform (Section 2.2.4) needed to be tuned in this study. The image size and parameters used for each airport image are shown in Table 2.
The similarity threshold T determines the effect of chevron marking detection and the direction of line segment detection in subsequent runway boundary extraction. A relatively small value of T can detect more chevron markings, which indicate the position and orientation of runways, but it may also produce more false chevron markings and reduce the accuracy of runway extraction. Conversely, a larger value of T may lead to missed detection of chevron markings and, eventually, missed extraction of runways. Generally, for better template matching of chevron markings, the threshold T was set to no less than 0.4. The length thresholds L1 and L2 both adjust the minimum length of detected line segments and assist line segment connection, together with other fixed thresholds, such as the minimum threshold (100) of collinear points in the accumulator space and the maximum threshold of the gap between two line segments. In some airports, the runway edge marking is not complete and continuous, so the maximum gap thresholds in the two line segment detection steps were set to 75 and 25 m, respectively. Furthermore, the length threshold L1 could be set to 200 m to detect more line segments of runway edges and connect interrupted line segments. However, a smaller value of L1 may also detect line segments of background objects with line features; thus, in general, L1 was set to 400 m. The length threshold L2 aimed to detect more complete runway edge lines, so L2 was set to 800 m, the minimum length of airport runways. In a few cases, such as Images 3, 6, and 7, background objects have salient line and shape features similar to those of runways, so L2 was set to 1200 m, the minimum length of common airport runways.
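The role of the similarity threshold T in grayscale template matching can be sketched as a zero-mean normalized cross-correlation scan over the image, where only windows scoring at least T are kept as candidate chevron marking positions. This is a minimal NumPy sketch under stated assumptions: the exhaustive window scan, the function name, and the toy template are illustrative, not the paper's implementation.

```python
import numpy as np

def ncc_match(image, template, threshold=0.4):
    """Slide `template` over `image` and return (row, col, score) triples whose
    zero-mean normalized cross-correlation score is at least `threshold`."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    matches = []
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = t_norm * np.sqrt((wz ** 2).sum())
            if denom == 0:
                continue  # flat window (or flat template): correlation undefined
            score = (wz * t).sum() / denom
            if score >= threshold:
                matches.append((r, c, score))
    return matches
```

As the paragraph above notes, lowering `threshold` admits more candidate chevron markings at the cost of false matches, while raising it risks missing true ones; a score of 1.0 indicates a window identical to the template up to brightness and contrast.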
3.2. Experimental Results and Comparison with State-of-the-Arts
In this study, all experiments were implemented in Python 3.5.6 on a notebook computer with an Intel Core i7-8550U 1.80 GHz CPU, 8 GB RAM, and the Windows 10 operating system. To validate the performance of the proposed method on a variety of test images, we divided the runway change analysis experiment into two parts, namely Experiments I and II. The former mainly focused on the accuracy and efficiency of the proposed runway extraction method, and the latter attached more importance to the advantages of our runway change analysis method over common change detection methods.
3.2.1. Experiment I
The results of the proposed runway extraction method for Test Data 1–6, comparison results, and ground truth data are illustrated in Figure 10. The quantitative results and the computing time of the proposed method and the state-of-the-art method [15] for each test image are given in Table 3, Table 4 and Table 5, respectively.
From the chevron marking detection results in Figure 10a, most chevron markings for runways are detected accurately, although some are missed, as in Image 6, and some are matched imperfectly, as in Images 4 and 5. The reason for missed detection may be the absence of corresponding grayscale templates of chevron markings, whereas the mismatching is largely due to interference surrounding the chevron markings or geometric distortion in the test image. Fortunately, the proposed runway extraction method is robust, and runway boundaries can still be extracted even if the chevron marking detection results are not fully consistent with the positions of the real chevron markings. Furthermore, thanks to runway boundary extraction based on the interference filter, airport runways are extracted more completely even when chevron markings are missed.
In general, the completeness of our proposed method is very high, although parts of the runway shoulders and stopways are also extracted. In Images 2, 3, and 6, the quality of runway extraction is above 90%, while the correctness of the proposed method for Images 1, 4, and 5 is not as good as that of the aforementioned images. In particular, Images 3 and 5 have the highest and lowest quality, at 92.9% and 78.8%, respectively, as shown in Table 3. The main reason the proposed method underperforms on Images 1 and 5 is the mathematical morphological processing in runway boundary extraction based on the interference filter, which makes the extracted runway boundary wider than the real one. For Image 4, the smallest gap in line segment detection during runway boundary extraction based on chevron marking detection is larger than the spacing between two chevron marking stripes; thus, parts of the stopways painted with chevron markings are extracted.
Compared with the state-of-the-art method [15], which extracts airport runways from high-resolution images based on line-finder level set evolution, the correctness of our method is much higher, and our method is more effective and robust. As can be seen in Figure 7c, many non-runway objects, such as taxiways, boarding aprons, and terminal buildings, are also extracted by [15]. Moreover, except for Images 1, 4, and 6, the completeness of runway extraction in the other images is very low due to the low brightness of runway areas or the great brightness difference between runway areas and their surroundings. Thus, runway extraction using only one or two runway features is impractical and not robust.
Furthermore, as can be seen in Table 5, the computing efficiency of our method is higher than that of the state-of-the-art method [15], even though the test images are very large. The main reason for the poor computing performance of [15] is that the time complexity of methods based on level set evolution is nonlinear, and it is also affected by the image brightness or grayscale, which are distributed uncertainly. In addition, our method is comparatively more automatic, because the method of [15] may require manual assistance to find the runway edge lines that serve as the initial contour curves for level set evolution. Nevertheless, our method can be further improved, because a large amount of computing time is spent on chevron marking detection based on grayscale template matching, which could be replaced by deep learning methods.
Overall, the proposed method performs well despite the challenging environmental conditions. The quantitative evaluation in Table 3 shows that the average completeness of the proposed method for all test images is nearly 100%, while the average correctness and average quality are both nearly 89%. The computing time in Table 5 shows that the overall efficiency is high, although our method can still be improved.
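The completeness, correctness, and quality figures discussed above can be computed pixel-wise from the extracted and ground-truth runway masks. The following is a minimal NumPy sketch assuming the standard definitions used in linear-feature extraction evaluation, i.e., TP/(TP+FN), TP/(TP+FP), and TP/(TP+FP+FN); the function name and mask representation are illustrative assumptions.

```python
import numpy as np

def runway_metrics(extracted, truth):
    """Pixel-wise evaluation of an extracted runway mask against ground truth."""
    extracted = extracted.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(extracted, truth).sum()   # correctly extracted pixels
    fp = np.logical_and(extracted, ~truth).sum()  # falsely extracted pixels
    fn = np.logical_and(~extracted, truth).sum()  # missed runway pixels
    completeness = tp / (tp + fn)   # share of the true runway that was recovered
    correctness = tp / (tp + fp)    # share of the extraction that is truly runway
    quality = tp / (tp + fp + fn)   # combined measure penalizing both error types
    return completeness, correctness, quality
```

Under these definitions, over-extraction (e.g., runway shoulders and stopways) lowers correctness and quality while leaving completeness untouched, which matches the pattern reported in Table 3.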
3.2.2. Experiment II
The results of the proposed runway change analysis method for Images 7 and 8 are illustrated in Figure 11 and Figure 12. To validate the effectiveness of our method, comparison results obtained by the change detection methods based on level set [23], object-based change vector analysis (OCVA), and visual saliency and random forest [24], as well as ground truth data of runway changes obtained by manual vectorization, are also presented in Figure 11 and Figure 12. The quantitative results of the proposed method and the state-of-the-art methods are given in Table 6 and Table 7, respectively. Furthermore, semantic results of runway change were obtained by our method, namely that the number of runways in Image 7 decreased, while the number of runways in Image 8 increased.
As can be seen in Figure 11 and Figure 12, the runway extraction results and runway change analysis results match the real situation, and the overall quality of the proposed runway change analysis method is quite high, as shown in Table 6. By comparison, the change detection results are inaccurate and ambiguous. For example, some false changes, such as the river and parking apron, and missed changes, such as the runway, were detected in Images 7 and 8. The reason might be that universal change detection methods are difficult to apply to all RS images with different ground objects. In particular, change detection for VHR images may need shape features, spatial features, or expert knowledge of objects to obtain more accurate results. Most importantly, the change detection results, regardless of the comparison method used, lack semantic information, namely the increase or decrease of finer objects, such as runways rather than whole airports. In other words, our method performs well not only in runway extraction but also in runway change analysis. The average accuracy of our method reaches 87.7%, while the state-of-the-art methods perform poorly.
4. Discussion
As far as runway extraction is concerned, the complicated background objects and runway structures in the test images impose great challenges. For example, as shown in Images 1–4, many line segments can be detected along the boundary between water and land in Image 1, the taxiways and parking aprons in Images 2 and 4, and the terraced fields in Image 3, which may lead to false runway boundary extraction. Thus, our method extracts runway boundaries largely based on chevron marking detection, which is introduced into runway extraction for the first time and can also be applied to runway detection or identification. Moreover, complex runway structures, as in Images 5 and 6, can easily cause incomplete extraction of runways, because these structures interfere with the shape features of the runway itself. Apart from this, due to differences in the shooting angle of satellite images or other factors, geometric distortion exists in runway markings, which leaves runway edge markings slightly curved or chevron markings inconsistent with their templates, so some runways may not be extracted successfully. Nevertheless, the runway edge marking helps to identify the runway ends in cases where chevron markings are absent or undetected. Therefore, runway boundary extraction based on the interference filter is essential and indispensable in our method, although it raises the risk of false runway extraction. This is why merging runway areas is necessary in runway extraction.
For runway change analysis, the overall results of our method are credible, because the semantic results of runway change analysis are in line with the truth and the overall accuracy of the vector results is over 80%, which is difficult for common change detection methods to achieve. However, one disadvantage of our method is the lower speed of runway change analysis: our method is based on runway extraction, and extracting runways from bi-temporal airport images is time-consuming. This can be further improved by detecting the chevron markings of airport runways with deep learning methods.
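The matching step that turns bi-temporal runway extraction results into semantic change results can be sketched with a simple IOU comparison. This is a simplified illustration assuming one binary mask per extracted runway; the function names and the 0.5 matching threshold are illustrative assumptions, not the values defined in the paper.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two binary masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def runway_change(masks_t1, masks_t2, iou_threshold=0.5):
    """Match runways across two dates by mask IOU; report unchanged pairs,
    runways present only at t1 (removed), and only at t2 (newly built)."""
    matched_t2 = set()
    unchanged, removed = [], []
    for i, m1 in enumerate(masks_t1):
        candidates = [j for j in range(len(masks_t2)) if j not in matched_t2]
        best = max(candidates, key=lambda j: mask_iou(m1, masks_t2[j]), default=None)
        if best is not None and mask_iou(m1, masks_t2[best]) >= iou_threshold:
            matched_t2.add(best)       # same runway found at both dates
            unchanged.append((i, best))
        else:
            removed.append(i)          # semantic result: a runway decreased
    added = [j for j in range(len(masks_t2)) if j not in matched_t2]
    return unchanged, removed, added
```

Because the comparison operates on per-runway extraction results rather than raw pixel differences, the output is directly semantic (runways added or removed), which is the property the common change detection baselines above lack.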
5. Conclusions
In this paper, from the perspective of airport knowledge, we propose a novel airport runway change analysis method to overcome the accuracy limitations of runway extraction caused by the complexity of airport backgrounds and runway structures, as well as the semantic ambiguity and accuracy limitations of runway changes caused by change detection methods. In our method, we first introduce the runway markings and combine the shape features of runways, such as their length/width limits, with the parallel line feature to extract runways. Furthermore, runway change analysis was implemented based on the IOU of the runway extraction results of bi-temporal airport images and on the overlap rate of runway increase or decrease defined by us. The two experimental results demonstrate that the proposed runway extraction method performs well, with an average accuracy of nearly 89%, and that more accurate runway change results with semantic information were obtained by our method compared with traditional change detection methods, such as Level Set, OCVA, and RF. Most importantly, we found that the chevron marking is critical to identifying the spatial position and orientation of runways, and that the runway edge marking and multiple runway features can serve as supplementary information to extract accurate runway boundaries, which are also the foundations of accurate runway change analysis.
In our future work, we will focus on detecting chevron markings by using deep learning methods, such as DRN [33], to improve the speed of runway extraction. Meanwhile, the process of runway boundary extraction based on the interference filter can then be omitted, because the accuracy of chevron marking detection will be higher and runway boundary extraction will be achievable from the detection results of the chevron markings at either runway end. In addition, at present, we can only acquire qualitative results of runway changes with our method; thus, a more accurate runway change analysis method to obtain specific quantitative changes of runways is also under future consideration. Finally, we will expand the airport knowledge base with, for example, features of terminal buildings, which can be combined with object-based image analysis methods [34] to detect damage to airport facilities after a disaster or to conduct pre-disaster and post-disaster change analysis of airport objects.