Article

A Novel Method for Building Contour Extraction Based on CSAR Images

College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(14), 3463; https://doi.org/10.3390/rs15143463
Submission received: 18 May 2023 / Revised: 6 July 2023 / Accepted: 6 July 2023 / Published: 8 July 2023

Abstract:
Circular synthetic aperture radar (CSAR) can obtain more complete scattering characteristics by observing the target from different azimuth angles. Therefore, extracting the complete structure of the target from CSAR images is of great significance for accurate interpretation. At present, artificial target extraction from CSAR images mostly relies on anisotropic scattering features. For special targets such as buildings, as the walls and the ground form dihedral corner structures, there are also obvious strong scattering features such as double-bounce scattering lines in SAR images. Therefore, combining the strong scattering features of buildings at specific aspects with the anisotropic scattering characteristics at different aspects can yield better extraction results, and how to extract these features accurately and efficiently is the key point. Based on this, this paper proposes a novel method for building contour extraction based on CSAR images. For strong scattering features, a fast fuzzy C-means (FCM) clustering algorithm is used for extraction. For anisotropic scattering features, aspect entropy is used to characterize the degree of anisotropy, combined with K-means clustering for extraction. Finally, a more accurate result is obtained by merging the two feature extraction results. To verify the effectiveness and practicability of the proposed method, a large amount of measured data acquired by self-developed airborne L-band and Ku-band CSAR systems was processed. The experiments show that, compared with state-of-the-art algorithms, the proposed method can obtain more accurate results in less time.

1. Introduction

As an active sensor, synthetic aperture radar (SAR) is widely used in many fields due to its all-day, all-weather operation and strong penetrating ability. In the traditional linear SAR (LSAR) mode, on the one hand, terrain relief or occlusion by tall ground objects creates blind areas in imaging detection. On the other hand, due to the limited viewing angle, the scattering characteristics of targets can only be obtained within a specific angular range, and information loss may even occur in complex scenes. All of these bring great difficulties to subsequent radar image interpretation. Different from traditional LSAR, circular SAR (CSAR), as an emerging multi-aspect SAR imaging mode, can realize 360° observation of the target by moving the platform along a circular trajectory and can obtain all-round scattering information of the target [1,2,3,4,5,6], so it can effectively improve interpretation performance.
Buildings are among the most common targets in urban areas, and accurate structural information about buildings is very important for digital city construction, disaster emergency response, and military reconnaissance. Due to the side-looking imaging mechanism of SAR, the scattering of buildings in SAR images can be mainly divided into three categories: layover, double-bounce scattering, and shadow [7]. At present, building extraction is mostly based on a single LSAR image [8,9,10]. However, there are some problems when using images from a single aspect, such as incomplete scattering information or the inability to suppress interference from other ground targets. In addition, when a single image from different aspects is selected for information extraction, the accuracy also differs [11]. Therefore, some scholars proposed processing multiple SAR images and making full use of complementary information from different aspects to achieve more accurate extraction [12,13]. However, these methods usually fuse fixed features without considering the changing features of buildings in different images. In addition, these methods are usually based on the assumption that the adjacent walls of buildings form L-shaped structures, and the extraction effect is good only for buildings with regular shapes.
As mentioned above, CSAR can effectively avoid building scattering information being blocked or interfered with due to its ability to obtain target omnidirectional scattering information, thus greatly improving the ability to extract buildings. However, the complexity of backscattering makes it difficult to extract and integrate information from multi-aspect images. When the observation aspect changes slightly, the information contained in different images also changes, mainly including changes in target scattering intensity or other features and background changes caused by speckles or shadows, so it is difficult to apply uniform descriptors to the targets in multi-aspect images. Therefore, how to extract and utilize changing information is the key to multi-aspect SAR image interpretation.
At present, the extraction methods for artificial targets are mostly based on polarimetric CSAR images and analyze anisotropic scattering characteristics. Li et al. proposed a new anisotropic detection method based on likelihood ratio ranking and constant false alarm rate detection [14]. Xue et al. proposed multi-aperture polarimetric entropy (MAPE) using polarimetric CSAR [15]. Compared with traditional polarimetric entropy, MAPE contains not only the randomness of the polarization but also the changes at different azimuth aspects, so it can not only distinguish anisotropic from isotropic targets but also further distinguish isotropic targets with different polarization randomness. On this basis, Tan et al. used MAPE to realize the extraction of land bridges [16]. However, acquiring polarization information places higher demands on system hardware, and more data need to be processed, so it is time-consuming.
For target interpretation of single-polarization CSAR images, current methods mainly extract a single feature to describe targets. By analyzing the radar cross-section (RCS) curve of the target at different azimuth aspects, Zhao et al. extracted the target azimuth-angle sensitivity [17], which can be used to distinguish targets with different structures. Teng et al. proposed aspect entropy to quantify the anisotropy of targets at different aspects, together with a denoising method based on energy concentration [18]. In addition, Teng et al. also proposed a scattering characteristic analysis method based on a statistical distribution model [19,20]. Firstly, a suitable distribution model is used to fit the SAR image; then the parameters of the distribution models of different sub-aperture images are estimated; finally, the likelihood ratio is used to distinguish anisotropy from isotropy so as to realize man-made target extraction. At the same time, an algorithm for calculating the maximum scattering direction of target pixels was also designed, which is conducive to further interpretation of the target structure. Since aspect entropy is greatly affected by noise, Yue et al. proposed a low-rank matrix decomposition preprocessing method [21] combined with a neighborhood operator to make the result more robust to noise. However, a single feature can only describe the target from a certain aspect, and its potential for target extraction is limited. Therefore, some scholars adopted the fusion of multiple features to achieve more complete target extraction. For example, Yue et al. used statistical distribution and membership to extract the anisotropic scattering features and amplitude features of man-made targets, respectively [22], but the statistical distribution parameters of this method are difficult to estimate, and the sliding window size of the distribution model needs to be selected manually. Liu et al. first extracted multiple feature variances to obtain a comprehensive description of the target change pattern, then obtained finer feature vectors through principal component analysis, and finally used support vector machines to achieve automatic extraction of building areas in complex scenes [23]. However, this method focuses on the extraction of changing areas and cannot suppress the changing clutter near the buildings, so the extracted area boundaries are not accurate.
For objects such as buildings, on the one hand, the walls and the ground form dihedral corner structures, so there are strong responses at some specific aspects; on the other hand, buildings also show anisotropy at different azimuth aspects. Therefore, combining these two characteristics can effectively overcome the limitations of a single feature. However, the existing methods mainly have two problems: first, the computational complexity of the feature extraction is very high, which is not conducive to widespread use; second, the existing fusion methods cannot balance different features well, so the final result is easily dominated by one particular feature, resulting in low extraction accuracy. Therefore, a novel method to extract building contours from CSAR images is proposed in this paper. The method uses two channels to extract the features and merges the results to achieve a better extraction effect; experiments on measured data verify that it improves both accuracy and efficiency.
The content of this paper is arranged as follows: Section 2 mainly introduces CSAR imaging geometry and the main scattering characteristics of buildings in SAR images; Section 3 introduces the processing flow of the proposed method. In Section 4, the proposed method is used to process the measured data acquired by airborne L-band and Ku-band CSAR systems. Finally, Section 5 concludes the paper.

2. Scattering Characteristics

Figure 1 shows the geometric diagram of CSAR imaging, where xoy is the imaging horizontal plane and oz is the height direction. The airborne platform moves in a circular trajectory with radius R on the plane with height H. During the motion, the beam always points to the imaging center region so as to obtain the omnidirectional scattering information of the observed target [24].

2.1. Strong Scattering at the Single Azimuth Aspect

Based on the SAR side-looking imaging mechanism, for objects with a certain height, such as buildings, there are mainly layover, shadow, and double-bounce scattering effects [7]. Among them, layover mainly occurs on the building wall facing the direction of electromagnetic wave incidence; double-bounce scattering generally occurs at the dihedral corner formed by the wall and the ground; and shadow mainly occurs on the side of the building away from the sensor.
To visually display the characteristics of buildings in SAR images, a cube is taken as an example to analyze the main scattering mechanisms of flat-roofed buildings; its profile diagram is shown in Figure 2, where the incidence angle is θ. Region a is the ground scattering. Region c is the layover area formed by the superposition of ground scattering, sensor-facing wall scattering, and part of the roof scattering, which appears as a block or strip with high brightness. Region b is the double-bounce scattering, which appears as a straight line with very high brightness. Region d is the single scattering of the roof. Region e is the shadow area; due to specular reflection, parts of the roof often appear as shadows as well. It is important to note that the order in which these scattering effects appear in the projected image is not unique; for example, when the building is tall, the single scattering of the roof is completely contained within the layover area.
Double-bounce scattering is usually represented as a significant highlighted line in high-resolution SAR images, and the position corresponds to the boundary of the building, so it is one of the most important features for building detection. The following will focus on the analysis of double-bounce scattering.
The double-bounce scattering path can be assumed to consist of three parts, $R_1$, $R_2$, and $R_3$ [25], as shown in Figure 3. Assume that the height of the radar is $H$, the vertical projection of the radar on the ground is $O$, the distance between this projection and the dihedral corner is $R_g$, and the distance between the antenna phase center and the dihedral corner is $R_0$. The incident wave travels along $R_1$ to a scattering center at height $\Delta h$ on the building wall; the angle between the first reflection direction and the specular reflection direction $\theta_e$ is denoted as $\Delta\theta$. After a second reflection from the ground along this direction, the wave returns to the receiving antenna. It is worth noting that the lengths of $R_2$ and $R_3$ are not unique but vary with the scattering direction, as shown in the green shaded area; the distance between the center of the shaded area and the dihedral corner is $L$. Based on this model, we can obtain
$$\theta_e = \arctan\!\left(\frac{H - \Delta h}{R_g}\right),\quad R_1 = \sqrt{(H - \Delta h)^2 + R_g^2},\quad R_2 = \frac{\Delta h}{\sin(\theta_e + \Delta\theta)},$$
$$L = \frac{\Delta h}{\tan(\theta_e + \Delta\theta)},\quad R_3 = \sqrt{(R_g - L)^2 + H^2},\quad R_0 = \sqrt{R_g^2 + H^2}$$
Therefore, the difference between the double-bounce scattering echo path and $2R_0$ is
$$\Delta R = R_1 + R_2 + R_3 - 2R_0$$
This difference can be calculated once the imaging geometry is determined. Usually, the height of the building is far less than the height of the radar, so $\Delta R$ can be neglected, i.e., $R_1 + R_2 + R_3 \approx 2R_0$. Therefore, all the double-bounce scattering energy is projected at the intersection of the wall and the ground, which appears as a highlighted line in the SAR image. In addition, when the image plane coincides with the ground plane of the building, the bright lines obtained from multi-aspect SAR images form a closed rectangle. Figure 4 shows the imaging result of the buildings in an L-band full-aperture image, where the closed rectangle corresponds to the building contour.
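As a sanity check, the path-difference model above can be evaluated numerically. The sketch below is illustrative only (the function name and parameter values are ours; the radar height and ground range are loosely based on the flight parameters reported in Section 4):

```python
import numpy as np

def path_difference(H, Rg, dh, dtheta):
    # Delta R for the double-bounce path; distances in meters, angles in radians
    theta_e = np.arctan((H - dh) / Rg)      # specular reflection direction
    R1 = np.hypot(H - dh, Rg)               # radar -> wall scattering center
    R2 = dh / np.sin(theta_e + dtheta)      # wall -> ground bounce
    L = dh / np.tan(theta_e + dtheta)       # ground-bounce offset from the corner
    R3 = np.hypot(Rg - L, H)                # ground -> radar
    R0 = np.hypot(Rg, H)                    # radar -> dihedral corner
    return R1 + R2 + R3 - 2.0 * R0
```

For a 5 m wall scattering center, a 1.9 km radar altitude, and a roughly 2.3 km ground range, the path difference comes out at the centimeter level, which supports the approximation $R_1 + R_2 + R_3 \approx 2R_0$.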

2.2. Anisotropic Scattering at the Different Azimuth Aspects

In conventional LSAR mode, the target is usually regarded as isotropic due to the limited viewing aspect. However, as the observation aspect increases, the assumption of isotropy is no longer valid. From another point of view, anisotropy can also be extracted as a discriminant feature.
In the real world, most man-made targets, including buildings, are anisotropic. Only part of the structure of buildings can be observed under different observation aspects, as shown in Figure 5. Therefore, the gray level of buildings changes under different azimuth aspects.
CSAR can obtain the scattering feature changes of targets at different aspects through omni-directional observation from 0° to 360°, and the anisotropy degree of different targets is usually different. Therefore, it will be beneficial for target recognition to extract anisotropic scattering features by using CSAR images.
Although anisotropic scattering feature extraction has attracted much attention in recent years, relatively few methods exist [21]. As mentioned above, current anisotropic scattering analysis based on CSAR images is mostly combined with polarization information. For single-polarization SAR images, most methods use statistical distribution models for analysis; that is, as the observation aspect changes, the probability density function (PDF) of an anisotropic target changes, while the PDF of an isotropic target remains basically stable. By establishing binary hypotheses and using the likelihood ratio test, anisotropic or isotropic targets can be extracted by thresholding [19,20,22]. However, such methods have the following main shortcomings: (1) For SAR images of complex scenes, common distribution models do not fit well, resulting in inaccurate extraction results; although more complex models fit better, accurate parameter estimation for them is difficult. (2) Pixel-by-pixel sliding windows are needed to obtain the statistics of different sub-aperture images, which requires a relatively large amount of computation, and the window size needs to be selected manually.
Similar to polarization information, in single-polarization SAR images, amplitude information of targets under different aspects can also be used to extract features. If the relation curve of RCS amplitude varying with aspect is known, the scattering characteristics of the target can also be analyzed, and whether the target is anisotropic or isotropic can be judged. That is, the amplitude of an anisotropic target changes obviously under different aspects, while the amplitude of an isotropic target is basically unchanged.

3. The Proposed Method

As mentioned above, in high-resolution SAR images, buildings not only show strong scattering features such as double-bounce bright lines at specific aspects but also show anisotropy at different azimuth aspects. A single feature usually describes the target from only one aspect, and its ability to extract targets is limited; however, too many features easily cause heavy computation, especially when many sub-apertures need to be processed. Therefore, how to extract these features quickly and accurately is the key to multi-aspect SAR image interpretation. Considering both the accuracy and the efficiency of the algorithm, a novel method for building contour extraction based on CSAR images is proposed. The algorithm first filters all sub-aperture images and then extracts features from the filtered images, considering the strong scattering features of the building at specific aspects and the anisotropic scattering features at different aspects simultaneously; fuzzy C-means clustering (FCM) and aspect entropy are used for the two extractions, respectively, and the results of the two channels are finally fused to achieve better extraction.
Figure 6 shows the processing flow of the proposed method.
Detailed steps are as follows:
  • Divide CSAR echo data to get sub-aperture images. Firstly, CSAR echo data are divided into multiple sub-aperture complex data [2], and the resolution of the sub-aperture azimuth angle should be smaller than the azimuthal span of the building to ensure that the scattering characteristics can be resolved [11]. The azimuth resolution of the sub-aperture image is
    $$\rho_a = \frac{\lambda}{4\sin(\theta/2)}$$
    where $\lambda$ is the wavelength and $\theta$ is the size of the sub-aperture angle.
    Then all the sub-aperture data are imaged by the back projection (BP) algorithm. Since the BP algorithm needs to process all grids one by one and requires a large amount of computation, GPU parallel processing can be adopted to improve imaging efficiency [2].
  • Preprocess. Sub-aperture images directly obtained from echo data usually contain a lot of noise. In order to ensure the accuracy of subsequent extraction, it is necessary to filter each sub-aperture image.
  • Feature extraction. On the one hand, a fast FCM combined with spatial neighborhood information is used to process every sub-aperture image. The membership degree of each pixel to the strong scattering category in different sub-aperture images is obtained, and the strong scattering point of the building at specific aspects is extracted by threshold. On the other hand, aspect entropy is used to calculate the degree of anisotropy of each pixel in all sub-aperture images, and K-means clustering is used to extract the class with lower aspect entropy as potential building pixels.
  • Results fusion. The extraction results of the above single feature are fused, and only the pixels satisfying both strong scattering characteristics at specific aspects and anisotropy at different azimuth aspects are retained as the final extraction results.
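The sub-aperture azimuth resolution of step 1 can be checked with a one-line helper (an illustrative sketch; the function name is ours):

```python
import math

def subaperture_azimuth_resolution(wavelength, theta):
    # rho_a = lambda / (4 sin(theta/2)); theta is the sub-aperture angular span in radians
    return wavelength / (4.0 * math.sin(theta / 2.0))
```

With 84 sub-apertures covering 360°, each sub-aperture spans about 4.3°, giving a sub-aperture azimuth resolution on the order of a meter for an L-band wavelength around 0.2 m (these numbers are illustrative; the paper does not state the wavelength).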
Next, the extraction methods and steps of the two features are introduced in detail.

3.1. A Fast FCM Algorithm

For objects such as buildings, due to the dihedral corner formed by the wall and ground, strong scattering features such as double-bounce scattering bright lines (excluding occlusion) exist in SAR images. In this paper, strong scattering points in sub-aperture images are extracted by the FCM algorithm combined with spatial neighborhood information, and the membership degree of each pixel under different sub-aperture images is used as discrimination information.
As an unsupervised fuzzy clustering method, FCM obtains the membership matrix by minimizing the objective function, and finally obtains the segmentation result according to the maximum membership criterion. The objective function expression is
$$J = \sum_{i=1}^{N}\sum_{k=1}^{c} u_{ki}^{m}\,\|x_i - v_k\|^2$$
where $N$ is the number of pixels, $c$ is the number of clusters, $u_{ki}$ denotes the membership degree of pixel $i$ to cluster $k$ (satisfying $\sum_{k=1}^{c} u_{ki} = 1$), $m$ is the fuzzifier (weighting exponent), $x_i$ is the value of the $i$th pixel, and $v_k$ is the prototype value of the $k$th cluster.
By definition, the FCM algorithm combines fuzzy theory with clustering and can retain original image information as far as possible, so it is widely used in SAR images. However, it does not consider spatial information, and the results are usually greatly affected by noise. Therefore, a large number of improved algorithms have appeared. These algorithms usually introduce spatial neighborhood information into the objective function [22], and the improved objective function is usually shown as follows:
$$J = \sum_{i=1}^{N}\sum_{k=1}^{c} u_{ki}^{m}\,\|x_i - v_k\|^2 + \sum_{i=1}^{N}\sum_{k=1}^{c} G_{ki}$$
where $G_{ki}$ is the fuzzy factor, which is used to control the influence of neighborhood pixels on the central pixel.
Different G k i usually corresponds to different improved algorithms. For example, in FCM_S [26], G k i is defined as
$$G_{ki} = \frac{\alpha}{N_R}\, u_{ki}^{m} \sum_{r \in N_i} \|x_r - v_k\|^2$$
where $\alpha$ is a coefficient controlling the influence of the neighborhood term, $N_i$ is the set of neighbors of pixel $i$, $N_R$ is its cardinality, and $x_r$ denotes a neighbor of $x_i$.
Due to the introduction of G k i , the improved algorithms have better anti-noise performance but also higher computational complexity, which is mainly reflected in the calculation of distance between a large number of neighborhood pixels and the clustering center in G k i . When the number of sub-aperture images is large, the efficiency of the algorithm will not meet the requirements. Therefore, considering the segmentation accuracy and efficiency, this paper adopts a fast FCM algorithm combining spatial neighborhood information, which mainly includes key steps such as morphological reconstruction, histogram clustering, and membership filtering [27].
  • Morphological reconstruction
Among all kinds of improved FCM algorithms that introduce spatial neighborhood information, FCM_S1 and FCM_S2 [28] are relatively fast because they can calculate filtered images in advance, but they are only effective for certain types of noise. Compared with commonly used filtering algorithms, morphological reconstruction can suppress different types of noise and retain edge information better [27], and the running time is short. This paper uses morphological closing reconstruction to achieve filtering, and the expression is
$$R^{C}(f) = R^{\varepsilon}_{R^{\delta}_{f}(\varepsilon(f))}\!\left(\delta\!\left(R^{\delta}_{f}(\varepsilon(f))\right)\right)$$
where $f$ is the original image, $\varepsilon$ denotes erosion, $\delta$ denotes dilation, $R^{\delta}$ and $R^{\varepsilon}$ denote reconstruction by dilation and by erosion, respectively, and $R^{C}$ is the morphological closing reconstruction.
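Morphological closing reconstruction can be sketched from iterated geodesic operations. The helper below is an illustrative implementation (the function names are ours, built on SciPy's gray-scale morphology rather than any particular library routine): it removes small bright and dark speckles while preserving larger structures and their edges.

```python
import numpy as np
from scipy import ndimage

def _reconstruct(marker, mask, combine, morph, size):
    # generic geodesic reconstruction: iterate morph + clip against mask until stable
    prev = marker
    while True:
        cur = combine(morph(prev, size=(size, size)), mask)
        if np.array_equal(cur, prev):
            return cur
        prev = cur

def closing_reconstruction(f, size=3):
    # opening by reconstruction: reconstruction-by-dilation of the eroded image under f
    opened = _reconstruct(ndimage.grey_erosion(f, size=(size, size)), f,
                          np.minimum, ndimage.grey_dilation, size)
    # closing by reconstruction: reconstruction-by-erosion of the dilated result
    return _reconstruct(ndimage.grey_dilation(opened, size=(size, size)), opened,
                        np.maximum, ndimage.grey_erosion, size)
```

On a flat image containing one isolated bright pixel and a 3×3 bright block, this filter removes the single-pixel speckle but returns the block unchanged, which is the edge-preserving behavior the text relies on.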
  • Histogram clustering
In order to improve computing efficiency, enhanced FCM [29] selects clustering on gray histograms. Since the number of gray levels in a histogram is usually much less than the number of pixels, histogram clustering can greatly reduce the calculation time. Similar to enhanced FCM, the objective function of the FCM used in this paper is
$$J = \sum_{l=1}^{q}\sum_{k=1}^{c} \gamma_l\, u_{kl}^{m}\,\|\xi_l - v_k\|^2$$
where $q$ is the number of gray levels in the histogram, $\gamma_l$ is the number of pixels with gray level $l$ (satisfying $\sum_{l=1}^{q}\gamma_l = N$), $u_{kl}$ is the membership degree of pixels with gray level $l$ to cluster $k$, and $\xi$ is the reconstructed image, i.e., $\xi = R^{C}(f)$.
Using the Lagrange multiplier method, the iteration formulas for the membership degrees and cluster centers are obtained as follows:
$$u_{kl} = \frac{\|\xi_l - v_k\|^{-2/(m-1)}}{\sum_{j=1}^{c}\|\xi_l - v_j\|^{-2/(m-1)}}$$
$$v_k = \frac{\sum_{l=1}^{q} \gamma_l\, u_{kl}^{m}\, \xi_l}{\sum_{l=1}^{q} \gamma_l\, u_{kl}^{m}}$$
The membership matrix obtained from histogram clustering is $U = [u_{kl}]_{c \times q}$, and further processing is required to restore the membership matrix of each pixel to the different cluster centers, i.e., $U = [u_{ki}]_{c \times N}$. The specific correspondence between them is as follows:
$$u_{ki} = u_{kl}, \quad \text{if } x_i = \xi_l$$
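The histogram-clustering iteration can be sketched in a few lines of array code. This is a minimal illustration, not the authors' implementation; the deterministic initialization and all names are our assumptions, and it clusters gray levels rather than pixels, which is what makes the method fast:

```python
import numpy as np

def histogram_fcm(image, c=3, m=2.0, n_iter=100, tol=1e-6):
    # FCM on the gray-level histogram; image gray levels assumed integer
    levels, gamma = np.unique(image.astype(np.int64), return_counts=True)  # xi_l, gamma_l
    lv = levels.astype(float)
    v = np.linspace(lv.min(), lv.max(), c)            # simple deterministic init (assumption)
    for _ in range(n_iter):
        d = (lv[None, :] - v[:, None]) ** 2 + 1e-12   # (c, q) squared distances
        u = d ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)             # memberships u_{kl}, columns sum to 1
        w = gamma * u ** m
        v_new = (w @ lv) / w.sum(axis=1)              # histogram-weighted prototype update
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    # restore pixel-wise memberships: u_{ki} = u_{kl} where x_i = xi_l
    idx = np.searchsorted(levels, image.astype(np.int64))
    U = u[:, idx.ravel()].reshape((c,) + image.shape)
    return U, v
```

Because the inner loop runs over the $q$ gray levels instead of the $N$ pixels, its cost is independent of image size except for the final mapping step.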
  • Membership filtering
It can be seen from the above analysis that most of the improved algorithms achieve better noise resistance by introducing spatial neighborhood information into the objective function. However, this also results in a large amount of computation. If spatial neighborhood information can be introduced without changing the form of the objective function, efficiency will be greatly improved. Relevant literature has proved that membership filtering is similar to introducing local spatial neighborhood information [27], and membership filtering no longer needs to calculate the distance between pixels in the local neighborhood and the clustering centers, thus greatly reducing the computational complexity. Therefore, this paper introduces spatial neighborhood information through membership filtering and finally adopts the expression as follows:
$$U' = \operatorname{med}(U)$$
where $\operatorname{med}(\cdot)$ denotes median filtering.
In order to further speed up the algorithm, only the last membership matrix is filtered.
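Membership filtering can be sketched as a per-class median filter followed by renormalization so that memberships still sum to one at each pixel (an illustrative helper, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def membership_filter(U, size=3):
    # U: (c, H, W) membership maps; median-filter each class map, then renormalize
    Uf = np.stack([ndimage.median_filter(U[k], size=size) for k in range(U.shape[0])])
    s = Uf.sum(axis=0, keepdims=True)
    return Uf / np.maximum(s, 1e-12)
```

A single noisy membership value surrounded by consistent neighbors is replaced by the local median, which mimics the effect of spatial neighborhood information without touching the objective function.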
Based on the above analysis, this paper uses the fast FCM algorithm combined with spatial neighborhood information to quickly segment all sub-aperture images and obtain the membership degree of each pixel in the strong scattering class.

3.2. Aspect Entropy

Since the scattering characteristics of the target are aspect dependent, multi-aspect observation will be beneficial to analyze anisotropic scattering characteristics. It can be seen from Section 2.2 that anisotropic scattering features of single-polarization SAR images are extracted mainly by using the changes of distribution models under different aspects, and there are still many problems when using these methods.
In order to quantify the degree of anisotropy of a pixel under different sub-aperture images, a natural association is to introduce the concept of entropy. Entropy was first used to define disorder in physics and was subsequently generalized to other fields. For example, Shannon defined information entropy to describe the uncertainty of information sources. In electromagnetics, polarimetric entropy is used to define the randomness of polarization. When the polarimetric entropy is low, it can be considered that there is only one dominant scattering mechanism; otherwise, it is considered that the scattering mechanism tends to be random. Just as the polarimetric entropy is defined to describe the scattering randomness of the target under different polarization modes, the scattering randomness of the target under different aspects can also be defined, i.e., aspect entropy [18].
Suppose $I(i,j,k)$ represents the gray value of the pixel at coordinate $(i,j)$ in the $k$th sub-aperture image. The pseudo-probability of scattering in the $k$th sub-aperture image is calculated as follows:
$$P(i,j,k) = \frac{I(i,j,k)}{\sum_{k'=1}^{n} I(i,j,k')}$$
where $n$ is the number of sub-aperture images and $k = 1, 2, \ldots, n$.
Then, the aspect entropy is defined as
$$H_a(i,j) = -\sum_{k=1}^{n} P(i,j,k)\,\log_n P(i,j,k)$$
According to the definition, the aspect entropy is low when the scattering energy is concentrated in a few aspects. For anisotropic targets, the scattering is usually strong only at some azimuth aspects, so the aspect entropy is relatively low; conversely, for isotropic targets, the aspect entropy is usually higher. Therefore, this paper uses K-means to divide the aspect entropy feature into two categories and extracts the category with lower aspect entropy as potential building pixels. The calculation of aspect entropy is very simple, and the angle-dimension information in multi-aspect SAR images can be extracted quickly by this definition; compared with the existing likelihood ratio detection methods [19,20,22], it is more suitable for practical application. Since aspect entropy is calculated directly from the gray levels of the image, it is strongly affected by noise. To achieve more accurate extraction, it is therefore necessary to filter the sub-aperture images first. Considering both accuracy and efficiency, this paper again selects morphological closing reconstruction.
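The two formulas above translate directly into array code. The sketch below (our naming) computes the aspect entropy map for a stack of sub-aperture images; the $\log_n$ base normalizes $H_a$ to $[0, 1]$, with the convention $0 \log 0 = 0$:

```python
import numpy as np

def aspect_entropy(stack):
    # stack: (n, H, W) sub-aperture amplitude images; returns H_a(i, j) in [0, 1]
    n = stack.shape[0]
    total = stack.sum(axis=0, keepdims=True)
    p = stack / np.maximum(total, 1e-12)                    # pseudo-probabilities P(i, j, k)
    with np.errstate(divide="ignore", invalid="ignore"):
        logp = np.where(p > 0, np.log(p) / np.log(n), 0.0)  # log base n; 0*log(0) := 0
    return -(p * logp).sum(axis=0)
```

A pixel with identical amplitude in all sub-apertures (isotropic) yields $H_a = 1$, while a pixel whose energy is concentrated in a single sub-aperture (strongly anisotropic) yields $H_a = 0$.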
Through the above analysis, the proposed algorithm considers inherent strong scattering features of buildings in every sub-aperture and anisotropic scattering features in different sub-apertures and adopts a simple and fast method to realize the extraction of the two features accurately, so it can achieve more complete building extraction and solve the problem of high false alarms in the existing methods.

4. Experimental Results and Analysis

4.1. Introduction of Measured Data

In order to verify the effectiveness and practicability of the proposed method, the CSAR measured data of L-band and Ku-band were processed, which were obtained by the airborne CSAR systems independently developed by the National University of Defense Technology. The experimental data were collected in 2020, and the experimental location was Weinan City, Shaanxi Province. In order to illustrate the universality of the proposed method, three scenarios containing different types of buildings are selected.
Taking Scene 1 as an example, the processing steps and result analysis of the method are explained in detail. The flight altitude is 1.9 km, the flight radius is 2.346 km, the band is L-band, the polarization is HH, and the resolution is 0.5 m × 0.5 m. In Figure 7a, the red curve is the flight trajectory of the SAR system, and the yellow rectangle is the selected area. Figure 7b,c shows the optical image and CSAR image of this region, respectively. The number of sub-aperture images is 84.

4.2. Strong Scattering Feature Extraction

For strong scattering features, the FCM algorithm is used for rapid extraction. Since this paper focuses on the extraction of building contours, corresponding to the bright lines of double-bounce scattering, the number of categories is set to 3.
In order to more intuitively show the difference between buildings and non-buildings, two pixels are randomly selected. The corresponding positions of non-building (land) pixel A and building pixel B in an optical image are shown in Figure 8, and their amplitude curves and membership curves belonging to strong scattering in different sub-aperture images are given in Figure 9, where the orange curve represents a building pixel and the blue curve represents a non-building pixel.
As can be seen from Figure 9, the membership curve is more discriminative than the amplitude curve: it retains the variation pattern of the amplitude data but saturates at the strong scattering aspects, so it is less sensitive to amplitude fluctuations there.
Finally, by setting a threshold on the membership degree, pixels with a high membership degree in the strong scattering class are retained. Experiments show that a threshold of 0.7 gives good extraction results; the extraction result of the strong scattering feature is shown in Figure 10.
As can be seen from the result, due to the dihedral corners formed by the walls and the ground, building edges appear as bright lines in SAR images. By extracting strong scattering points at specific aspects through FCM, the building contours are basically extracted. In addition, due to the presence of trees and other vegetation in the scene, their trunks and the ground also form dihedral corners, which also show strong scattering characteristics, so they are shown as false alarms in the result.

4.3. Anisotropic Scattering Feature Extraction

For anisotropic scattering features, aspect entropy is used for quantification and extraction. The algorithm first uses morphological closing reconstruction to filter all sub-aperture images, then calculates the aspect entropy by definition, and finally uses K-means clustering to select the class with the lower aspect entropy. The size of the reconstruction operator is 3, and the result is shown in Figure 11.
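The aspect-entropy computation can be sketched as below, assuming the common definition from the aspect-entropy literature [18]: the sub-aperture amplitudes of each pixel are normalized into pseudo-probabilities, and the Shannon entropy is normalized by log K so that isotropic pixels score near 1 and anisotropic pixels score low. The paper's exact normalization may differ, and the toy stacks are hypothetical.

```python
import numpy as np

def aspect_entropy(stack, eps=1e-12):
    """Per-pixel aspect entropy from a (K, H, W) stack of sub-aperture
    amplitude images (assumed definition):
        p_k = A_k / sum_k A_k,   H = -sum_k p_k log p_k / log K."""
    K = stack.shape[0]
    p = stack / (stack.sum(axis=0, keepdims=True) + eps)
    return -(p * np.log(p + eps)).sum(axis=0) / np.log(K)

K = 84                                  # number of sub-apertures in Scene 1
iso = np.ones((K, 4, 4))                # isotropic pixel: constant amplitude
aniso = np.full((K, 4, 4), 0.01)
aniso[3] = 10.0                         # strong return at one aspect only
H_iso = aspect_entropy(iso)             # close to 1 everywhere
H_aniso = aspect_entropy(aniso)         # much lower: energy concentrated in one aspect
```

A low-entropy class selected by K-means then corresponds to the anisotropic (building-like) pixels.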
To show the effect of the filtering more intuitively, the same pixels marked in Figure 8 are selected, and the curves of their gray values before and after filtering across the sub-aperture images are given in Figure 12.
According to Figure 12, the morphological closing reconstruction can suppress noise while preserving the scattering characteristics of the target. Through closing reconstruction, the strong scatterings of pixels at certain aspects are preserved, while the weak scatterings at other aspects are suppressed to some extent.
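A minimal sketch of morphological closing by reconstruction, assuming the textbook formulation (marker = grayscale dilation of the image, then iterated geodesic erosion constrained from below by the original image). The structuring-element size 3 follows the text; the toy image is illustrative.

```python
import numpy as np
from scipy import ndimage

def closing_by_reconstruction(img, size=3):
    """Grayscale closing by reconstruction (sketch). Geodesic erosion
    iterates 'erode, then take the pointwise max with the original'
    until stable, so small dark noise pits are filled while genuine
    bright structures keep their shape."""
    marker = ndimage.grey_dilation(img, size=(size, size))
    prev = None
    recon = marker
    while prev is None or not np.array_equal(recon, prev):
        prev = recon
        recon = np.maximum(ndimage.grey_erosion(recon, size=(size, size)), img)
    return recon

img = np.full((9, 9), 5.0)   # flat background
img[4, 4] = 0.0              # isolated dark noise pit
img[2, :] = 9.0              # bright line (target response)
out = closing_by_reconstruction(img)
```

On the toy image the dark pit is filled back to the background level while the bright target line survives unchanged, matching the behavior observed in Figure 12.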
Based on the above analysis, it can be seen that by measuring the change in the gray value of each pixel across the sub-apertures, aspect entropy can extract anisotropic pixels. However, since it relies only on the gray-value variation, it produces many false alarms. For example, for the strong interference near the building in the lower right corner of Scene 1, the gray value also varies anisotropically with aspect, as shown in Figure 13. Therefore, aspect entropy cannot suppress these pixels, resulting in a high false alarm rate in the final result.

4.4. Multi-Feature Extraction Results Fusion

As can be seen from the analysis of the above results, for strong scattering features, the gray value of the vegetation in the upper right corner of Scene 1 is also high, which produces false alarms. For anisotropic scattering features, the extraction results are easily contaminated by the pixels adjacent to the building contours because the calculation is based only on amplitude fluctuation. A single feature therefore describes the target from only one perspective, and its results often contain many false alarms. However, existing feature fusion methods cannot balance the weights of different features well [22]; the final result is easily dominated by a single feature, which degrades the detection performance. To solve this problem, a new fusion method is proposed based on a comprehensive analysis of the above two features. By fusing the features, only pixels that exhibit strong scattering at specific aspects and anisotropy across aspects are retained. The final fusion result is shown in Figure 14a.
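In the simplest reading, this fusion retains the intersection of the two binary feature masks; a minimal sketch (mask values hypothetical, and the paper's rule may involve additional steps) is:

```python
import numpy as np

# hypothetical binary masks from the two feature extractors
strong = np.array([[1, 1, 0],
                   [0, 1, 0],
                   [1, 0, 0]], dtype=bool)   # FCM strong-scattering pixels
aniso  = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 0]], dtype=bool)   # low-aspect-entropy pixels

# keep only pixels flagged by BOTH features: strong scattering at
# specific aspects AND anisotropy across aspects
fused = strong & aniso
```

Pixels flagged by only one feature (vegetation with strong scattering, or fluctuating clutter with high anisotropy) are dropped, which is exactly how the fusion suppresses the false alarms of each single feature.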
In order to further illustrate the advantages of the proposed method, the differences in extraction effects between the proposed method and existing methods are analyzed below. The methods in [19,20,22] are selected as comparisons, and the results are shown in Figure 14b–d.
It can be seen from Figure 14 that the proposed method extracts the building contours completely and reflects the real structures effectively; compared with the extraction result of a single feature, the fused result has fewer false alarms. The red ellipse in Figure 14a marks an artificial metal fence; because it also exhibits strong scattering and anisotropy, it appears as a false alarm in the result, but its structure is clearly different from that of a building and can be removed by post-processing. For the methods in [19,20], since only anisotropic scattering features are used, the extraction results are similar to the aspect entropy results in this paper: adjacent buildings are connected and difficult to distinguish, the strong interference near the buildings cannot be suppressed, and there are many false alarms, so the building contours cannot be extracted. The method in [22] extracts the buildings better; however, due to the amplitude weighting factors in its fusion mode, vegetation pixels with strong scattering are also detected even though they are isotropic, so its false alarm rate is higher.
In addition to the above comparison methods, there are many other building extraction methods based on multi-aspect SAR images. These methods usually utilize complementary information from only a few specific aspects and therefore have the following deficiencies. On the one hand, such methods place requirements on the angle interval between sub-aperture images and are only valid for common regular rectangular or parallelogram structures [30]; for interconnected buildings with irregular contours, the complex structures cannot be accurately extracted. As shown in Figure 15, the angle difference between the two selected sub-aperture images is about 180°, and only by using these two specific sub-aperture images can the complete contour of the complex structure in the red ellipses be extracted. On the other hand, the final extraction effect of such methods depends entirely on the integrity of the target scattering features in the selected images; when the features of the target are not obvious, as for the building in the green ellipses, extraction cannot be realized. In summary, the performance of these methods depends on the quality of the selected sub-aperture images, and manually selecting different images inevitably introduces differences in extraction accuracy, so a detailed comparative analysis of this kind of method is not carried out.
In order to further quantify the advantages of the proposed method, a comparative analysis is carried out below using three indices: detection rate, false alarm rate, and accuracy [10]. Their calculation formulas are as follows:
DR = TP / (FN + TP)
FAR = FP / (FP + TP)
AC = (TP + TN) / (FP + TP + FN + TN)
where TP (true positive) denotes a real building detected as a building; FP (false positive) denotes a non-building detected as a building; TN (true negative) denotes a real non-building detected as a non-building; and FN (false negative) denotes a building detected as a non-building.
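These three indices follow directly from the confusion counts of the binary building masks; a short sketch with a hypothetical prediction/ground-truth pair:

```python
import numpy as np

def region_metrics(pred, truth):
    """Detection rate, false-alarm rate, and accuracy from binary
    building masks, following the formulas above."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    dr = tp / (fn + tp)
    far = fp / (fp + tp)
    ac = (tp + tn) / (fp + tp + fn + tn)
    return dr, far, ac

# hypothetical 10x10 masks: 24 true building pixels, 16 detected,
# plus 1 false alarm
truth = np.zeros((10, 10), dtype=bool); truth[2:6, 2:8] = True
pred = np.zeros_like(truth);            pred[2:6, 2:6] = True
pred[0, 0] = True
dr, far, ac = region_metrics(pred, truth)
```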
Through morphological processing of the extracted results, the building areas are obtained, as shown in Figure 16. Finally, the calculated indicators are shown in Table 1.
From the comparison results, it can be seen that the proposed method suppresses different types of interference better because two features are considered at the same time, and the extracted areas correspond well to the real areas of the buildings. In contrast, the methods in [19,20] can neither accurately extract the edge structures of the interconnected buildings nor suppress the strong interference near them, and the method in [22] also fails to suppress the strong scattering interference. Therefore, the areas extracted by these methods differ considerably from the real areas, and the geometric information cannot be accurately extracted.
In order to verify the potential of the proposed method in building contour information extraction, taking Scene 1 as an example, the building contour information can be obtained through skeleton extraction, skeleton tracking, and least squares fitting. The building and contour marks are shown in Figure 17, and the real geometry information and the measurement results are shown in Table 2.
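The final fitting step can be sketched as follows, assuming skeleton extraction and tracking have already yielded the pixel coordinates of one wall (the points below are synthetic). The fit here uses total least squares via SVD, with the wall length taken as the extent of the points projected onto the fitted direction; the paper's least squares fitting may differ in detail.

```python
import numpy as np

def fit_wall_length(points):
    """Fit a straight line to the skeleton points of one wall by total
    least squares (principal direction via SVD) and measure the wall
    length as the extent of the projections onto that direction."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                 # dominant direction of the point cloud
    proj = centered @ direction
    return proj.max() - proj.min()

# synthetic skeleton points along a 46 m wall (0.5 m pixels -> 92 px),
# with small perpendicular jitter
t = np.linspace(0, 92, 93)
pts = np.stack([t, 0.05 * np.sin(t)], axis=1)
length_px = fit_wall_length(pts)
length_m = length_px * 0.5            # scale by the 0.5 m pixel spacing
```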
It can be seen that, unlike the method in [11], which directly uses the full-aperture image to extract the length and width of buildings, the sub-aperture images at different aspects can also achieve good information extraction. Among the buildings, the wall on the left side of building B2 is partly missed because its strong scattering feature is not obvious, so the length extraction error is large. Because of the strong interference near the wall of building B6, the extracted bright line is wider, so the width extraction error of building B6 is also large. Through calculation, the average errors of length extraction and width extraction are 4.71% and 5.98%, respectively.
In order to further illustrate the applicability of the proposed method, another two scenarios are tested, in which Scene 2 is Ku-band and Scene 3 is L-band. Since the extraction process is exactly the same as in Scene 1, only the corresponding results are given below.
As can be seen from Figure 18, the buildings in Scene 2 are large, dense factory buildings. Because the low vegetation is isotropic, it can be distinguished from the anisotropic buildings. As shown in Figure 18f, the proposed method can detect not only the contours of the buildings but also the roof boundaries of connected buildings, further improving the interpretation of the target structure. However, the results of [19,20] cannot suppress the interference near the buildings and contain many false alarms, as shown in Figure 18g,h. The result of the method in [22] also contains vegetation false alarms, and the outlines of the buildings are not very clear, as shown in Figure 18i. The final region extraction accuracy is shown in Table 3. The geometry information is shown in Figure 19, and the geometric information extraction results are shown in Table 4. It can be found that the proposed method achieves accurate extraction of geometric information from large buildings, with a relative error within 5%.
Different from the low-rise buildings in Scene 1 and Scene 2 (about 6 m high), the buildings in Scene 3 are residential buildings with more floors (about 20 m high). As for the result in Figure 20c, there are false alarms in the red ellipse. In Figure 20d, since the buildings are tall, phenomena such as layover and top-bottom inversion are more obvious in the SAR images, resulting in different imaging positions of the buildings at different aspects, so more false alarms appear in the result. The red ellipses mark part of the false alarms caused by the special structures of the roofs. These false alarms can be easily suppressed by the fusion method proposed in this paper, as shown in Figure 20e. In [19,20], the gray changes caused by layover cannot be suppressed, so there are many false alarms, as shown in Figure 20f,g. Although the method in [22] can detect the buildings well, there are many other false alarms between the buildings, which are hard to remove in subsequent processing. Since accurate geometric information cannot be obtained for Scene 3, only the region extraction accuracy is given, as shown in Table 5. It can be found that, compared with existing methods, the proposed method again achieves more accurate extraction.
Through the analysis of the experimental results of L-band and Ku-band measured data, it can be found that the proposed method can achieve better extraction effects than the existing methods and is universal to buildings of different heights and types. In terms of building geometric information extraction, different from directly processing a full aperture image, the proposed method can also achieve high precision length and width information extraction by using sub-aperture images.

4.5. Algorithm Complexity Analysis

In order to evaluate the potential of the proposed method in practical SAR processing, its computational complexity is analyzed further. Since the proposed method consists of two parts, the complexity of each part is analyzed separately and compared with that of the method in [22]. The analyses are shown in Table 6.
Finally, the calculation time of the proposed method and the method in [22] for different scenarios is analyzed. The results are shown in Table 7. Obviously, the proposed method is significantly faster.

5. Conclusions

Aiming at the problems that a single feature cannot describe a target well and that existing multi-feature methods have high computational complexity, a novel method for building contour extraction based on CSAR images is proposed. By using a fast FCM algorithm to extract the strong scattering characteristics at specific aspects and using aspect entropy to extract the anisotropic scattering characteristics across aspects, the proposed method achieves better building contour extraction. At the same time, different from signal-level processing methods, the proposed method operates entirely at the image level, so it has great potential in practical applications. Compared with traditional methods, it suppresses different types of false alarms well and extracts more accurate contour structures in less time, which is conducive to accurate geometric information extraction. Finally, by processing independently acquired L-band and Ku-band measured data, it is demonstrated that the proposed method not only performs well on different types of buildings but is also easy to apply in practice.

Author Contributions

Conceptualization, J.Z., D.A. and L.C.; methodology, J.Z.; software, J.Z.; validation, J.Z.; formal analysis, J.Z.; investigation, J.Z.; resources, D.A. and L.C.; data curation, D.A. and L.C.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z., D.A. and L.C.; visualization, J.Z.; supervision, D.A. and L.C.; project administration, D.A. and L.C.; funding acquisition, D.A. and L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation for Distinguished Young Scholars of Hunan Province under Grant 2022JJ10062, and the National Natural Science Foundation of China under Grants 62271492, 62101566, and 62101562.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, L.; An, D.; Huang, X. A Backprojection-Based Imaging for Circular Synthetic Aperture Radar. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3547–3555. [Google Scholar] [CrossRef]
  2. Chen, L.; An, D.; Huang, X. Resolution Analysis of Circular Synthetic Aperture Radar Noncoherent Imaging. IEEE Trans. Instrum. Meas. 2020, 69, 231–240. [Google Scholar] [CrossRef]
  3. Feng, D.; An, D.; Chen, L.; Huang, X. Holographic SAR Tomography 3-D Reconstruction Based on Iterative Adaptive Approach and Generalized Likelihood Ratio Test. IEEE Trans. Geosci. Remote Sens. 2021, 59, 305–315. [Google Scholar] [CrossRef]
  4. Feng, S.; Lin, Y.; Wang, Y.; Yang, Y.; Shen, W.; Teng, F.; Hong, W. DEM Generation with a Scale Factor Using Multi-Aspect SAR Imagery Applying Radargrammetry. Remote Sens. 2020, 12, 556. [Google Scholar] [CrossRef] [Green Version]
  5. Yue, X.; Teng, F.; Lin, Y.; Hong, W. Target Anisotropic Scattering Deduction Model Using Multi-Aspect SAR Data. ISPRS J. Photogramm. Remote Sens. 2023, 195, 153–168. [Google Scholar] [CrossRef]
  6. Feng, S.; Lin, Y.; Wang, Y.; Teng, F.; Hong, W. 3D Point Cloud Reconstruction Using Inversely Mapping and Voting from Single Pass CSAR Images. Remote Sens. 2021, 13, 3534. [Google Scholar] [CrossRef]
  7. Guida, R.; Iodice, A.; Riccio, D. Height Retrieval of Isolated Buildings from Single High-Resolution SAR Images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2967–2979. [Google Scholar] [CrossRef]
  8. Liu, B.; Tang, K.; Liang, J. A Bottom-Up/Top-Down Hybrid Algorithm for Model-Based Building Detection in Single Very High Resolution SAR Image. IEEE Geosci. Remote Sens. Lett. 2017, 14, 926–930. [Google Scholar] [CrossRef]
  9. Zou, B.; Li, W.; Zhang, L. Built-Up Area Extraction Using High-Resolution SAR Images Based on Spectral Reconfiguration. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1391–1395. [Google Scholar] [CrossRef]
  10. Li, X.; Su, J.; Yang, L. Building Detection in SAR Images Based on Bi-Dimensional Empirical Mode Decomposition Algorithm. IEEE Geosci. Remote Sens. Lett. 2020, 17, 641–645. [Google Scholar] [CrossRef]
  11. Li, Y.; Chen, L.; An, D.; Huang, X.; Feng, D. A Novel Method for Extracting Geometric Parameter Information of Buildings Based on CSAR Images. Int. J. Remote Sens. 2022, 43, 4117–4133. [Google Scholar] [CrossRef]
  12. Thiele, A.; Cadario, E.; Schulz, K.; Thonnessen, U.; Soergel, U. Building Recognition from Multi-Aspect High-Resolution InSAR Data in Urban Areas. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3583–3593. [Google Scholar] [CrossRef]
  13. Xu, F.; Jin, Y.-Q. Automatic Reconstruction of Building Objects from Multiaspect Meter-Resolution SAR Images. IEEE Trans. Geosci. Remote Sens. 2007, 45, 2336–2353. [Google Scholar] [CrossRef]
  14. Li, Y.; Yin, Q.; Lin, Y.; Hong, W. Anisotropy Scattering Detection from Multiaspect Signatures of Circular Polarimetric SAR. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1575–1579. [Google Scholar] [CrossRef]
  15. Xue, F.; Lin, Y.; Hong, W.; Yin, Q.; Zhang, B.; Shen, W.; Zhao, Y. Analysis of Azimuthal Variations Using Multi-Aperture Polarimetric Entropy with Circular SAR Images. Remote Sens. 2018, 10, 123. [Google Scholar] [CrossRef] [Green Version]
  16. Tan, X.; An, D.; Chen, L.; Luo, Y.; Zhou, Z.; Zhao, D. An Effective Method of Bridge Detection Based on Polarimetric CSAR. In Proceedings of the 2020 21st International Radar Symposium (IRS), Warsaw, Poland, 5 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 131–134. [Google Scholar]
  17. Zhao, Y.; Lin, Y.; Wang, Y.P.; Hong, W.; Yu, L. Target Multi-Aspect Scattering Sensitivity Feature Extraction Based on Circular-SAR. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2722–2725. [Google Scholar]
  18. Teng, F.; Hong, W.; Lin, Y. Aspect Entropy Extraction Using Circular SAR Data and Scattering Anisotropy Analysis. Sensors 2019, 19, 346. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Teng, F.; Hong, W.; Lin, Y.; Han, B.; Wang, Y.; Shen, W.; Feng, S. An Anisotropic Scattering Analysis Method Based on Likelihood Ratio Using Circular Sar Data. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 477–480. [Google Scholar]
  20. Teng, F.; Lin, Y.; Wang, Y.; Shen, W.; Feng, S.; Hong, W. An Anisotropic Scattering Analysis Method Based on the Statistical Properties of Multi-Angular SAR Images. Remote Sens. 2020, 12, 2152. [Google Scholar] [CrossRef]
  21. Yue, X.; Lin, Y.; Teng, F.; Feng, S.; Hong, W. Multi-Angular Sar Scattering Anisotropy Analysis Based on Low-Rank Matrix Decomposition. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 3420–3423. [Google Scholar]
  22. Yue, X.; Teng, F.; Lin, Y.; Hong, W. A Man-Made Target Extraction Method Based on Scattering Characteristics Using Multiaspect SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11699–11712. [Google Scholar] [CrossRef]
  23. Liu, Q.; Li, Q.; Yu, W.; Hong, W. Automatic Building Detection for Multi-Aspect SAR Images Based on the Variation Features. Remote Sens. 2022, 14, 1409. [Google Scholar] [CrossRef]
  24. Luo, Y.; An, D.; Wang, W.; Chen, L.; Huang, X. Local Road Area Extraction in CSAR Imagery Exploiting Improved Curvilinear Structure Detector. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5227615. [Google Scholar] [CrossRef]
  25. Chen, L.; An, D.; Huang, X.; Zhou, Z. A 3D Reconstruction Strategy of Vehicle Outline Based on Single-Pass Single-Polarization CSAR Data. IEEE Trans. Image Process. 2017, 26, 5545–5554. [Google Scholar] [CrossRef] [PubMed]
  26. Ahmed, M.N.; Yamany, S.M.; Mohamed, N.; Farag, A.A.; Moriarty, T. A Modified Fuzzy C-Means Algorithm for Bias Field Estimation and Segmentation of MRI Data. IEEE Trans. Med. Imaging 2002, 21, 193–199. [Google Scholar] [CrossRef] [PubMed]
  27. Lei, T.; Jia, X.; Zhang, Y.; He, L.; Meng, H.; Nandi, A.K. Significantly Fast and Robust Fuzzy C-Means Clustering Algorithm Based on Morphological Reconstruction and Membership Filtering. IEEE Trans. Fuzzy Syst. 2018, 26, 3027–3041. [Google Scholar] [CrossRef] [Green Version]
  28. Chen, S.; Zhang, D. Robust Image Segmentation Using FCM With Spatial Constraints Based on New Kernel-Induced Distance Measure. IEEE Trans. Syst. Man Cybern. B 2004, 34, 1907–1916. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Szilagyi, L.; Benyo, Z.; Szilagyi, S.M.; Adam, H.S. MR Brain Image Segmentation Using an Enhanced Fuzzy C-Means Algorithm. In Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE Cat. No. 03CH37439), Cancun, Mexico, 17–21 September 2003; IEEE: Piscataway, NJ, USA, 2003; pp. 724–726. [Google Scholar]
  30. Zhang, F.; Liu, L.; Shao, Y. Building Footprint Extraction Using Dual-Aspect High-Resolution Synthetic Aperture Radar Images in Urban Areas. J. Appl. Remote Sens. 2012, 6, 063599. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of CSAR imaging geometry.
Figure 2. Schematic diagram of scattering characteristics and projection of flat-roofed buildings. (Different gray areas at the bottom of the image represent amplitude).
Figure 3. Double-bounce scattering of buildings.
Figure 4. Building targets and imaging results at full aperture. (a) Optical image; (b) CSAR image.
Figure 5. Buildings under different sub-aperture images. (a) The 8th sub-aperture; (b) the 18th sub-aperture; (c) the 10th sub-aperture.
Figure 6. Flowchart of the proposed method.
Figure 7. Images of Scene 1. (a) Flight trajectory and selected area; (b) optical image; (c) CSAR image.
Figure 8. The positions of the selected pixels. (Pixel A represents non-building, pixel B represents building.).
Figure 9. Amplitude and membership curves of the selected pixels. (a) Amplitude; (b) membership.
Figure 10. Extraction result of strong scattering feature.
Figure 11. Filtered aspect entropy and extraction result. (a) Aspect entropy feature; (b) anisotropy extraction result.
Figure 12. Comparison of gray values before and after denoising of different pixels. (a) The building pixel; (b) non-building pixel.
Figure 13. Images of different sub-aperture images. (a) The 8th sub-aperture; (b) the 10th sub-aperture; (c) the 18th sub-aperture.
Figure 14. Extraction results of Scene 1 by different methods. (a) The proposed method; (b) the method in [19]; (c) the method in [20]; (d) the method in [22].
Figure 15. Images of different sub-aperture images. (a) Optical image; (b) 14th sub-aperture; (c) 54th sub-aperture.
Figure 16. Building areas extracted by different methods. (a) The proposed method; (b) the method in [19]; (c) the method in [20]; (d) the method in [22]; (e) ground truth.
Figure 17. Building geometry information in the result. (B1–B6 represent the building number.).
Figure 18. Ku-band extraction results in Scene 2. (a) Optical image 1 (Google map); (b) optical image 2 (UAV aerial photography); (c) CSAR image; (d) strong scattering extraction result; (e) anisotropic extraction result; (f) final result; (g) the result in [19]; (h) the result in [20]; (i) the result in [22].
Figure 19. Building geometry information in the result. (L1 and L2 represent the lengths; W1 and W2 represent the widths.)
Figure 20. L-band extraction results in Scene 3. (a) Optical image; (b) CSAR image; (c) strong scattering extraction result; (d) anisotropic extraction result; (e) final result; (f) the result in [19]; (g) the result in [20]; (h) the result in [22].
Table 1. Region extraction accuracy of Scene 1.
Method | DR (%) | FAR (%) | AC (%)
The method in [19] | 94.57 | 55.83 | 95.78
The method in [20] | 91.27 | 62.72 | 94.52
The method in [22] | 96.49 | 58.91 | 95.21
The proposed method | 92.41 | 28.13 | 98.52
Table 2. Extraction results of building information.
Buildings | Information | True Value (m) | Measured (m) | Relative Error (%)
B1 | length | 46 | 45 | 2.17
B1 | width | 15 | 14.5 | 3.33
B2 | length | 39 | 35.5 | 8.97
B2 | width | 16 | 15 | 6.25
B3 | length | 33 | 30.5 | 7.58
B3 | width | 16 | 16 | 0
B4 | length | 62 | 59.5 | 4.03
B4 | width | 23 | 21 | 8.7
B5 | length | 73 | 74.06 | 1.45
B5 | width | 20 | 19.3 | 3.5
B6 | length | 85 | 88.44 | 4.05
B6 | width | 33 | 37.65 | 14.09
Table 3. Region extraction accuracy of Scene 2.
Method | DR (%) | FAR (%) | AC (%)
The method in [19] | 99.97 | 25.22 | 87.88
The method in [20] | 99.99 | 26.36 | 87.14
The method in [22] | 99.79 | 32.43 | 82.72
The proposed method | 99.36 | 9.48 | 96.03
Table 4. Extraction results of building information.
Information | True Value (m) | Measured (m) | Relative Error (%)
L1 | 103 | 98.1 | 4.76
W1 | 43 | 40.87 | 4.95
L2 | 223 | 220.45 | 1.14
W2 | 99 | 96.19 | 2.84
Table 5. Region extraction accuracy of Scene 3.
Method | DR (%) | FAR (%) | AC (%)
The method in [19] | 93.13 | 47.82 | 82.98
The method in [20] | 90.34 | 48.40 | 82.59
The method in [22] | 92.22 | 47.39 | 83.24
The proposed method | 93.17 | 27.36 | 92.26
Table 6. Comparison of computational complexity.
Method | Strong Scattering Feature | Anisotropic Scattering Feature
The method in [22] | O(K·N·w² + K·N·c·t_FCM) | O(K·N·w²·t_EM)
The proposed method | O(K·N·w² + K·q·c·t_FCM) | O(K·N)
where K is the number of sub-apertures, N is the number of pixels in each sub-aperture image, w is the size of the sliding window, q is the number of histogram gray levels (q ≪ N), c is the number of clusters, t_FCM is the number of FCM iterations, and t_EM is the number of EM iterations.
Table 7. Comparison of calculation time. (Unit: s).
Method | Scene 1 | Scene 2 | Scene 3
The method in [22] | 1448 | 1747 | 176
The proposed method | 89 | 127 | 29

Share and Cite

MDPI and ACS Style

Zhao, J.; An, D.; Chen, L. A Novel Method for Building Contour Extraction Based on CSAR Images. Remote Sens. 2023, 15, 3463. https://doi.org/10.3390/rs15143463

