Article

Building Damage Detection Based on OPCE Matching Algorithm Using a Single Post-Event PolSAR Data

1 Institute of Remote Sensing and Geographical Information System, School of Earth and Space Sciences, Peking University, Beijing 100871, China
2 National Satellite Meteorological Center, China Meteorological Administration, Beijing 100081, China
3 Air Force Research Institute, Beijing 100085, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(6), 1146; https://doi.org/10.3390/rs13061146
Submission received: 10 February 2021 / Revised: 27 February 2021 / Accepted: 1 March 2021 / Published: 17 March 2021

Abstract
Synthetic aperture radar (SAR) is an effective tool for detecting building damage. At present, more and more studies detect building damage using a single post-event fully polarimetric SAR (PolSAR) image, because it permits faster and more convenient damage detection. However, the presence of non-buildings and obliquely-oriented buildings in disaster areas makes it challenging to obtain accurate detection results using only post-event PolSAR data. To solve these problems, a new method is proposed in this work to detect completely collapsed buildings using a single post-event fully polarimetric SAR image. The proposed method makes two improvements to building damage detection. First, it provides a more effective solution for non-building area removal in post-event PolSAR images. By selecting and combining three competitive polarization features, the proposed solution can remove most non-building areas effectively, including mountain vegetation and farmland areas, which are easily confused with collapsed buildings. Second, it significantly improves the classification performance for collapsed and standing buildings. A new polarization feature was created specifically for the classification of obliquely-oriented and collapsed buildings by developing the optimization of polarimetric contrast enhancement (OPCE) matching algorithm. Using this developed feature combined with texture features, the proposed method effectively distinguished collapsed from obliquely-oriented buildings, while also identifying the affected collapsed buildings in error-prone areas. Experiments were implemented on three PolSAR datasets obtained in fully polarimetric mode: Radarsat-2 PolSAR data from the 2010 Yushu earthquake in China (resolution: 12 m, scale of the study area: 50 km²); ALOS PALSAR PolSAR data from the 2011 Tohoku tsunami in Japan (resolution: 23.14 m, scale of the study area: 113 km²); and ALOS-2 PolSAR data from the 2016 Kumamoto earthquake in Japan (resolution: 5.1 m, scale of the study area: 5 km²). Through the experiments, the proposed method was proven to obtain more than 90% accuracy for built-up area extraction in post-event PolSAR data. The achieved detection accuracies of building damage were 82.3%, 97.4%, and 78.5% in the Yushu, Ishinomaki, and Mashiki town study sites, respectively.

1. Introduction

Destructive earthquakes and tsunamis often lead to serious casualties and loss of property [1]. After these disasters, fast and effective disaster monitoring and damage detection are essential to reduce casualties and losses [2]. Building damage detection, which directly relates to human life and economic losses, is crucial to emergency rescue [3]. Ground surveying provides the most accurate results for building damage detection, but it is time-consuming and dangerous. Alternatively, remote sensing is an excellent tool for building damage detection because it can provide a quick response and allows monitoring of large areas after a disaster [4].
Many remote sensing technologies are used for building damage detection after disasters, such as optical, light detection and ranging (LiDAR), and synthetic aperture radar (SAR) [5]. LiDAR can obtain three-dimensional information of disaster areas and is a useful tool for building damage detection [6]. However, LiDAR datasets are not always available [5]. Optical images provide an intuitive view of the observed area and are easy to interpret, and various optical-based studies for building damage detection have been proposed. The related studies range from methods based on multi-temporal optical images [7] to methods based on a single-temporal optical image [8], from methods based on a single optical platform to methods based on multiple optical platforms [9], from pixel-based methods [7] to object-based methods [8], and from methods using machine learning [10] to methods utilizing deep learning [11]. Optical-based methods have been studied widely and can obtain accurate detection results of building damage. However, optical remote sensing greatly depends on sun illumination for imaging and is easily affected by atmospheric conditions, such as cloud coverage, which limits its application as an emergency tool directly following a disaster [5,12].
As an active remote sensing technology, SAR can work during day and night, and in poor weather conditions. Due to these advantages, SAR is more suitable for emergency rescue immediately following a disaster [12]. Many SAR-based methods for building damage detection have been proposed. Among these, the change detection-based method using both pre- and post-event SAR images is the most widely studied. According to the information used to construct the indicators of change detection, these studies can be classified into intensity change detection [13,14], coherence change detection [15,16] and polarimetry-based change detection methods [17,18,19]. Because it is more difficult to obtain both pre- and post-event fully polarimetric SAR (PolSAR) images, change detection methods based on intensity and coherence information are studied more widely than polarimetry-based methods. Furthermore, these two parameters are sometimes combined to conduct building damage detection [20,21]. Due to the development of high-spatial-resolution (HR) and very-high-spatial-resolution (VHR) SAR images, an increasing number of change detection methods have been proposed to detect building damage at the individual building level [22,23,24,25,26]. In addition, deep learning-based methods for building change detection have been proposed using VHR SAR images [25,27]. Change detection-based methods for building damage detection have been studied adequately and used in many cases of emergency observation. However, suitable pre-event SAR images are not always available, and the collection of pre-event SAR images is time-consuming. To detect building damage more quickly and conveniently, including in the absence of pre-event SAR images, developing methods that use only post-event SAR images is important and necessary.
PolSAR makes it possible to detect building damage accurately using only post-event SAR images because it can acquire abundant scattering information of the target. Several studies have been presented for detecting building damage using a single post-event PolSAR image. In 2009, Guo et al. [28] proved that the circular polarization correlation coefficient ρ, the anisotropy A, and the double-bounce scattering component of the Yamaguchi four-component scattering model exhibited a high correlation with collapsed buildings. They used these three features and the maximum likelihood classifier to map the distribution of collapsed buildings. Their work showed that building damage assessment using only post-event PolSAR images is both possible and effective. However, because the work did not first remove non-building areas, many non-building pixels were misclassified as collapsed buildings, which significantly influenced the detection accuracy of actual collapsed buildings. Therefore, in 2012, Li et al. [29] used the entropy H and the average scattering mechanism α to first remove bare soil, and then used the circular polarization correlation coefficient ρ to extract collapsed buildings. The detection accuracy of collapsed buildings was clearly improved due to the removal of non-building areas, which proved the importance of non-building area removal. In 2013, Zhao et al. [30] improved the work of Li et al.: they used the H/α-Wishart classification method to remove non-building areas, and used the normalized circular polarization correlation coefficient (NCCC) together with the homogeneity (Hom) texture feature to detect collapsed buildings. Shi et al. [31] and Sun et al. [32] used more texture features to detect building damage, and concluded that texture features were useful for classifying collapsed and standing buildings. Zhai et al. in 2019 [1] used the texture features of the PolSAR image after optimization of polarimetric contrast enhancement (OPCE) to detect building damage and also obtained reliable results. To combine the advantages of polarization features and texture features, multi-feature-based methods for building damage detection have received increasing attention. For these methods, machine learning algorithms such as random forest (RF) and support vector machine (SVM) are usually chosen as the classifier. For instance, in Shi's work [31], 40 polarimetric features, 138 texture features, and three interferometric features were stacked into a high-dimensional feature cube and input into the RF classifier to conduct the building damage assessment. In 2017, Bai et al. [33] employed the SVM classification algorithm to carry out a building damage assessment based on 91 features using a post-event dual-polarimetric SAR image. These works showed the effectiveness of machine learning algorithms for integrating multiple features to detect building damage. In addition to supervised methods, unsupervised methods have also been proposed for building damage detection using a single post-event PolSAR image. For example, in 2018, Ji et al. [4] proposed an automatic-threshold unsupervised method for building damage assessment using the circular polarization correlation coefficient ρ and the double-bounce scattering power parameters after polarization orientation angle (POA) compensation. However, deep learning algorithms have rarely been applied to this topic. This is mainly because it is difficult to obtain the large number of samples needed to train deep learning algorithms from post-event PolSAR images, as PolSAR data are scarcer than other SAR data, especially in the context of disaster relief.
The above-mentioned research highlights that it is important to perform building damage detection using only post-event PolSAR images. However, there are still some problems that need to be addressed to improve the accuracy of building damage detection. The first problem is that some non-building areas, especially mountain vegetation and farmland areas, cannot be easily distinguished from built-up areas, which thus causes overestimation of building damage. The second problem is that obliquely-oriented buildings, which have an undamaged structure but an orientation that is oblique to the satellite flight path, are usually confused with collapsed buildings. This problem significantly influences the accuracy of building damage detection. Moreover, highly damaged urban areas with a small number of typical standing buildings, which have an undamaged structure and an orientation parallel to or perpendicular to the satellite flight path, can be easily identified as slightly damaged areas because the typical standing buildings influence the scattering characteristics of these areas.
To solve these problems and improve detection accuracy, in this research we propose a new method for building damage detection using a single post-event PolSAR image. The proposed method adopts a two-step classification strategy. In the first step, through the analyses, more competitive classification features were selected and a new built-up area extraction method was developed to address the misclassification problem between non-building areas and collapsed buildings. In the second step, a new polarization feature was created by developing the OPCE matching algorithm to specifically address the classification problem between obliquely-oriented and collapsed buildings. A new multi-feature-based classification method was then developed by combining the created feature and eight gray level co-occurrence matrix (GLCM) texture features to simultaneously address the misidentification problem of some seriously damaged urban areas. In this study, a damaged or collapsed building refers to a building that has completely collapsed following a disaster. The experiments were carried out on three PolSAR datasets: Radarsat-2 PolSAR data of Yushu County after the 2010 Yushu earthquake (resolution: 12 m, scale of the study area: 50 km²); ALOS PALSAR PolSAR data (abbreviated to ALOS-1 PolSAR data hereafter) of Ishinomaki city after the 2011 Tohoku tsunami (resolution: 23.14 m, scale of the study area: 113 km²); and ALOS-2 PolSAR data of Mashiki town after the 2016 Kumamoto earthquake (resolution: 5.1 m, scale of the study area: 5 km²). These PolSAR data were obtained in fully polarimetric mode. Due to the scattering reciprocity of monostatic backscattering, we used the information of the HH, VV, and HV components of these PolSAR data in this work. The experimental results show that the proposed method can effectively remove non-building areas in post-event PolSAR data and reduce the misclassification between obliquely-oriented buildings and collapsed buildings. In addition, it can simultaneously ameliorate the underestimation of building damage in particular areas subject to significant damage.

2. Study Areas and Data Sets

In this paper, to adequately analyze the performance of the features and evaluate the applicability of the proposed method, three study sites were chosen for analyses and experiments: Yushu County in China, Ishinomaki City in Japan, and Mashiki town in the Kumamoto area of Japan. The detailed information of the three study sites and the parameters of the data sets are given below. Reference maps, which were produced by interpreting optical images from Google Earth and referencing the report of the field survey, are also shown below. Due to the resolution of the PolSAR data, it is difficult to assess the damage extent at the single building level; therefore, these reference maps are at the block or grid level.

2.1. Yushu County in China

On 14 April 2010, an earthquake with a magnitude of 7.1 struck Yushu County in Qinghai Province, China. The epicenter was located at 33.1°N, 96.7°E. The earthquake caused the deaths of more than 2690 people, and a large number of buildings collapsed. On 21 April 2010, one week after the Yushu earthquake, the Radarsat-2 satellite acquired post-earthquake PolSAR data of Yushu County. The PolSAR data was obtained in fully polarimetric mode. The coverage of this PolSAR data is shown by the red rectangle in Figure 1a. We chose the urban area of Yushu County as one of our study sites (yellow rectangle in Figure 1a). The PolSAR data had an azimuth spatial resolution of approximately 8 m, a range spatial resolution of approximately 12 m, and an angle of incidence of approximately 21°. To ensure the azimuth and range pixels were of a comparable size, three-look multi-look processing was first conducted on the PolSAR data. Figure 1b shows the Pauli RGB image of the Radarsat-2 PolSAR data after multi-look processing.
To provide a reference for accuracy verification, a block-level reference map of building damage in the urban area of Yushu County was produced, as shown in Figure 2. It was interpreted according to the related reference maps [34,35] and a 0.5 m high-resolution optical image acquired on 6 May 2010. The division into blocks was based on the similarity of building damage and the road network information. In the reference map, each block was assigned one of three damage levels: slight damage (less than one-third of the buildings in the block collapsed); serious damage (more than half of the buildings collapsed); and moderate damage (more than one-third but less than half of the buildings collapsed).

2.2. Ishinomaki City in Japan

On 11 March 2011, a strong earthquake occurred in the Pacific Ocean in northeastern Japan and caused a large tsunami. The earthquake and tsunami caused devastating damage to Iwate, Miyagi, and Fukushima in northeastern Japan. One month after the earthquake, on 8 April 2011, the ALOS PALSAR sensor acquired PolSAR data of Ishinomaki city, Miyagi Prefecture, Japan. The PolSAR data was obtained in fully polarimetric mode. The azimuth and range resolution of the data were 4.45 m and 23.14 m, respectively, and the incident angle was approximately 23.83°. To ensure the azimuth and range pixel sizes were comparable, we performed eight-look multi-look processing on the PolSAR data. Figure 3 shows the Pauli RGB image of the PolSAR data after the multi-look processing, and the main study area, namely, the coastal area of Ishinomaki city, is shown in the red box in Figure 3.
For the Ishinomaki study site, we also produced a block-level reference map of building damage. The urban area was first divided into 59 blocks according to the road network and the similarity of building damage. Then, based on the ground-truth map interpreted by Tohoku University and The University of Tokyo (Figure 4) [36], we counted the pixels of the “washed away” and the “surviving” categories for each block. Thus, the preliminary block-level reference map was obtained: for one block, if the “washed away” pixels were less than 30% of the sum of “washed away” and “surviving” pixels, it was interpreted as slight damage; if the “washed away” pixels were more than 50% of the sum of “washed away” and “surviving” pixels, it was interpreted as serious damage; others were interpreted as moderate damage. Finally, referring to the reference maps of Ishinomaki city from other papers [4,35], we adjusted the preliminary result to remove mistakes and obtain the final reference map, as shown in Figure 5.

2.3. Mashiki Town in the Kumamoto Area of Japan

In April 2016, a series of earthquakes occurred in Kumamoto, Kyushu Island, Japan. The foreshock (epicenter at 32.73°N, 130.80°E) occurred on 14 April with a magnitude of 6.2, and the main shock (epicenter at 32.75°N, 130.79°E) occurred on 16 April with a magnitude of 7.0. Mashiki town in the Kumamoto area was one of the areas most seriously affected by the intensive ground shaking, with more than 7000 buildings damaged [19]. Five days after the main shock, on 21 April 2016, the ALOS-2 satellite acquired PolSAR data of Mashiki town. The PolSAR data was obtained in fully polarimetric mode. The nominal azimuth and ground-range resolutions of the data were 4.3 m and 5.1 m, respectively, and the incident angle was approximately 30.8°. Figure 6 shows the Pauli RGB image of the PolSAR data in Mashiki town.
After the 2016 Kumamoto earthquake, the Architectural Institute of Japan carried out a field survey. They investigated the damage situation of buildings and classified them according to Okada's damage levels [37]. Based on the investigation, they produced a series of grid-level damage maps with a grid size of 57 m × 57 m. These damage maps were included in the quick report of the field survey on the building damage caused by the 2016 Kumamoto earthquake [38], which is available from the website of the National Institute for Land Infrastructure Management (NILIM). Figure 5.2-2 in the quick report [38] shows the five-grade grid-level collapse rate (CR) map, where the CR was defined as the number of completely collapsed buildings relative to the total number of buildings in each grid cell. In principle, this figure could serve as our reference map.
However, due to the limitation of the resolution, it is difficult to classify building damage into five grades accurately using space-borne PolSAR data. Therefore, we generated a three-grade grid-level CR map from the five-grade CR map in the quick report [38]. Specifically, we merged the first two grades of the five-grade CR map as slight damage in the three-grade CR map (CR ≥ 0 and CR ≤ 25%); we merged the last two grades of the five-grade CR map as serious damage (CR > 50%); and the third grade of the five-grade CR map was retained as moderate damage (CR > 25% and CR ≤ 50%). This three-grade grid-level CR map was used as our reference map to evaluate the performance of the proposed method in the Mashiki study site, as shown in Figure 7.

3. Methods

The framework of the proposed method of building damage detection is shown in Figure 8. The proposed method uses a two-step classification strategy to detect building damage.
The first step is non-building area removal. In this part, a random forest (RF)-based non-building and built-up area classification method is proposed using three effective polarization features. After pre-processing, three polarization features are calculated, and the RF-based classification is conducted to obtain the binary classification result of non-building areas and built-up areas. With the mask processing, the built-up areas are retained for the following step. The details and analyses are outlined in Section 3.1.
The second step is the classification of collapsed and standing buildings. In this part, based on the pre-processed PolSAR data, a new feature, the maximal power contrast (MaxC) feature, is calculated using the proposed OPCE matching algorithm. In addition, eight GLCM texture features are calculated. These nine features are input into the RF classifier, and the pixels located in the built-up areas obtained in the first step are classified into collapsed and standing buildings to obtain the building damage detection result. The details and analyses are presented in Section 3.2.

3.1. Non-Building Area Removal

The classification of non-building areas and built-up areas is important for accurately identifying collapsed buildings because it can effectively reduce the possibility that non-building areas are misclassified as collapsed buildings. Previous research used the entropy H and the average scattering mechanism α, or the surface scattering component of the Yamaguchi four-component decomposition with rotation [39], to classify non-building areas and built-up areas [1,29,30]. However, these features are only effective for removing part of the non-building areas, and some non-building areas, such as mountain vegetation and farmland, cannot be easily distinguished from built-up areas. Therefore, in this section, we selected more effective classification features and developed a new classification method for non-building area removal.

3.1.1. The Selection of Classification Features

In our previous work [34], we found that the π/4 double-bounce scattering component of the Pauli decomposition (abbreviated as the Pauli π/4 feature hereafter) had good ability to separate non-building areas from built-up areas, as shown in Figure 9. Furthermore, using only the Pauli π/4 feature to classify non-building and built-up areas resulted in 89.63% overall accuracy and 96% detection rates of built-up areas in the Yushu study site [34]. These results show the potential of the Pauli π/4 feature for non-building area removal.
However, because this previous work was only conducted in the Yushu study site and there was almost no vegetation in the PolSAR data of the Yushu study site, the results only indicated that the Pauli π/4 feature had a good ability to distinguish built-up areas from non-vegetated non-building areas, such as water (river), roads, and bare soil. Whether the Pauli π/4 feature is also suitable for distinguishing built-up areas from vegetation areas needs more exploration.
Therefore, in this study, two further study sites, the Ishinomaki study site after the 2011 Tohoku earthquake and the Mashiki town study site after the 2016 Kumamoto earthquake, where more abundant non-building types exist, were also introduced. Based on these new study sites, we further explored the ability of the Pauli π/4 feature.
The Pauli π/4 feature is one of the components of Pauli decomposition. For PolSAR data, when applying the Pauli decomposition, the scattering matrix S can be expressed as [40]:
$$\mathbf{S} = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix} = \frac{a}{\sqrt{2}}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \frac{b}{\sqrt{2}}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} + \frac{c}{\sqrt{2}}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + \frac{d}{\sqrt{2}}\begin{bmatrix} 0 & -j \\ j & 0 \end{bmatrix}, \tag{1}$$
where each basis matrix on the right side of the equal sign corresponds to an elementary scattering mechanism, and a, b, c, d are given by:
$$a = \frac{S_{HH} + S_{VV}}{\sqrt{2}}, \quad b = \frac{S_{HH} - S_{VV}}{\sqrt{2}}, \quad c = \frac{S_{HV} + S_{VH}}{\sqrt{2}}, \quad d = \frac{j\left(S_{HV} - S_{VH}\right)}{\sqrt{2}}. \tag{2}$$
Because the third basis matrix in Equation (1) is associated with diplane scattering (double- or even-bounce scattering) from corners with a relative orientation of π/4, the complex coefficient c in Equation (2) is defined as the π/4 double-bounce scattering component of the Pauli decomposition [40] (the Pauli π/4 feature). According to Equation (2), the Pauli π/4 feature is mainly associated with cross-polarization scattering. The power P_c of the Pauli π/4 feature can be expressed as:
$$P_c = 10 \lg\left(\left|c\right|^2\right). \tag{3}$$
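As a concrete illustration, the Pauli π/4 power can be computed per pixel directly from the scattering matrix channels. The following is a minimal NumPy sketch, not the authors' code; the variable names and the small epsilon guard against log(0) are our own assumptions:

```python
import numpy as np

def pauli_pi4_power(s_hh, s_hv, s_vh, s_vv):
    """Power of the Pauli pi/4 double-bounce component, Equations (2)-(3).

    s_hh, s_hv, s_vh, s_vv: complex ndarrays holding the scattering
    matrix elements for each pixel. Returns P_c in dB.
    """
    c = (s_hv + s_vh) / np.sqrt(2.0)                 # pi/4 component c of Equation (2)
    return 10.0 * np.log10(np.abs(c) ** 2 + 1e-12)   # P_c = 10 lg|c|^2
```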
In the Ishinomaki and Mashiki town study sites, the non-building areas mainly included water, roads, bare soil, mountain vegetation, and farmland. To further analyze the ability of the Pauli π/4 feature, we chose samples for built-up areas and each kind of non-building area in the two study sites, and drew the probability density function (pdf) of these samples in the Pauli π/4 feature. These samples were selected by visually interpreting the Google Earth images, as shown in Figure 10. The pdfs in the two study sites are shown in Figure 11 (note that ‘built-up area’ samples include both standing building and collapsed building samples).
In Figure 11, the greater the overlap of the pdfs, the more difficult it is to classify these kinds of objects using the current feature, and vice versa [41]. The pdfs of water, road, and bare soil samples are clearly separated from the pdf of built-up area samples, which again proves that the Pauli π/4 feature has a good ability to separate built-up areas from non-vegetated non-building areas. However, there is some overlap between the pdf of farmland samples and that of built-up area samples. In addition, the pdf of mountain vegetation samples almost completely overlaps with the pdf of built-up area samples. These results indicate that the Pauli π/4 feature is limited in distinguishing built-up areas from vegetated non-building areas.
Therefore, to develop a non-building removal method with good performance in both low vegetation and abundant vegetation areas, adding features that are sensitive to vegetation areas is necessary.
The radar vegetation index (RVI) is sensitive to vegetation areas, and has been used for the recognition of vegetation in numerous studies. In this study, we introduced the RVI to help address the problem outlined above. The RVI is a polarization parameter that can measure the randomness of scattering and can reflect the health of vegetation. It can be expressed as [42]:
$$\mathrm{RVI} = \frac{4\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3}, \quad 0 \le \mathrm{RVI} \le \frac{4}{3}, \tag{4}$$
where λ_i, i = 1, 2, 3 are the eigenvalues of the Cloude–Pottier decomposition [43].
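For reference, the RVI can be obtained from an eigen-decomposition of the per-pixel coherency matrix. The sketch below assumes the multi-looked T3 matrices are stored as a (..., 3, 3) Hermitian array; it is an illustrative implementation, not the authors' code:

```python
import numpy as np

def radar_vegetation_index(t3):
    """RVI from the eigenvalues of the coherency matrix T3, Equation (4)."""
    # Eigenvalues of a Hermitian matrix are real; eigvalsh returns them in
    # ascending order, so reverse to get lambda_1 >= lambda_2 >= lambda_3.
    lam = np.linalg.eigvalsh(t3)[..., ::-1]
    return 4.0 * lam[..., 2] / lam.sum(axis=-1)
```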
The pdfs of mountain vegetation, farmland, and built-up area samples for the RVI are shown in Figure 12. It can be seen that, with the RVI feature, mountain vegetation and built-up areas can be effectively separated. However, the pdf of farmland samples still severely overlaps with that of built-up area samples. Therefore, the introduction of the RVI solves the misclassification between mountain vegetation and built-up areas well, but does not provide effective help for distinguishing farmland areas from built-up areas.
To further address the problem of the removal of farmland areas, we introduced the intensity component of the Shannon entropy (SEI) feature. Shannon entropy (SE) was introduced by Morio et al. [44,45] as a sum of two contributions, SEI and SEP. SEI is the intensity contribution that depends on the total backscattered power, and is given by [40]:
$$\mathrm{SE}_I = 3 \log\left(\frac{\pi e \,\mathrm{Tr}\left(\mathbf{T}_3\right)}{3}\right), \tag{5}$$
where T_3 is the coherency matrix and Tr is the matrix trace operator.
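SEI only needs the trace of the coherency matrix, so it is cheap to compute alongside the RVI. A minimal sketch under the same (..., 3, 3) array assumption as above:

```python
import numpy as np

def shannon_entropy_intensity(t3):
    """Intensity contribution of the Shannon entropy (SEI), Equation (5)."""
    trace = np.trace(t3, axis1=-2, axis2=-1).real  # Tr(T3) is real for Hermitian T3
    return 3.0 * np.log(np.pi * np.e * trace / 3.0)
```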
We drew the pdfs of the SEI of the farmland and built-up area samples in both the Ishinomaki and Mashiki town study sites, as shown in Figure 13. For the SEI feature, the pdf of farmland samples is separate from the pdf of built-up area samples, which indicates that the SEI is well suited to separating farmland areas from built-up areas.

3.1.2. Non-Building Area Removal Procedure

According to the above analysis, it can be noted that the Pauli π/4 feature, RVI feature, and SEI feature are sensitive to different types of non-building areas. If these features can be used together in a suitable way, most kinds of non-building areas could be accurately removed. In this study, we used the RF classifier to combine these three features and perform the classification of non-building areas and built-up areas. RF is a supervised ensemble learning classification algorithm that is constructed from a series of decision trees that are generated based on random subsamples of training data and random subsets of input features [46]. RF is highly robust and can effectively suppress overfitting caused by noise and erroneous samples [47]. In this study, using the RF classifier, non-building area removal can be conducted with the following three steps.
First, pre-processing is applied to the original post-event PolSAR data and the Pauli π/4 feature, the RVI feature, and the SEI feature are calculated.
Then, the Pauli π/4 feature, the RVI feature, and the SEI feature are input into the RF classifier, and, using the training samples, the pixels in each study site are classified into six classes: water, roads, bare soil, mountain vegetation, farmland, and built-up areas.
Based on the RF classification results, a class-merging process is then implemented to obtain a binary classification result of non-building and built-up areas. Specifically, the classes of water, roads, bare soil, mountain vegetation, and farmland are merged into the category of non-building area, and built-up areas are regarded as the category of built-up area. Next, using mask processing, the category of non-building area in the binary classification result is removed, and the category of built-up area is retained. The retained built-up area is then used in the following step.
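The three steps above can be sketched with scikit-learn's random forest standing in for the imageRF implementation used in the experiments (Section 4); the class encoding, function name, and array shapes below are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_built_up_mask(pauli_pi4, rvi, sei, train_labels):
    """Steps 2 and 3 of the non-building area removal procedure.

    pauli_pi4, rvi, sei: (H, W) feature images from step 1.
    train_labels: (H, W) ints; 0 = unlabeled, 1-5 = water/road/bare soil/
                  mountain vegetation/farmland, 6 = built-up area.
    """
    x = np.stack([pauli_pi4, rvi, sei], axis=-1).reshape(-1, 3)
    y = train_labels.ravel()

    rf = RandomForestClassifier(n_estimators=100)  # 100 trees, as in Section 4
    rf.fit(x[y > 0], y[y > 0])
    predicted = rf.predict(x).reshape(train_labels.shape)

    # Class merging: labels 1-5 become "non-building"; only class 6 is kept.
    return predicted == 6
```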

3.2. Collapsed and Standing Building Classification

After removing the non-building areas, the most important task in building damage detection is the classification of collapsed and standing buildings. Due to the limitations of the resolution of space-borne PolSAR data, in this study a collapsed building refers to a building whose structure is completely damaged or missing after a disaster, as shown in Figure 14a. A standing building refers to a building that retains its structure and remains standing after a disaster. Furthermore, standing buildings can be divided into orthogonally-oriented standing buildings (orthogonally-oriented buildings) and obliquely-oriented standing buildings (obliquely-oriented buildings) according to the arrangement of the building. The former refers to buildings whose structure remains standing after a disaster and whose orientation is approximately parallel or perpendicular to the satellite flight path, as shown in Figure 14b; the latter refers to buildings whose structure remains standing after a disaster but whose orientation is at an angle to the satellite flight path, as shown in Figure 14c.
For the classification of collapsed and standing buildings, two problems influence the accuracy. The first is the misclassification between obliquely-oriented buildings and collapsed buildings. In previous research, features such as the circular polarization correlation coefficient ρ, the double-bounce scattering component P_d of the Yamaguchi four-component decomposition with rotation [39], and the total power (Span) were proven to be able to distinguish standing buildings from collapsed buildings. However, these features can usually only distinguish orthogonally-oriented buildings from collapsed buildings; they do not perform well for the classification of obliquely-oriented buildings and collapsed buildings. To illustrate this, we took the Yushu study site as an example and drew the pdfs of orthogonally-oriented building, obliquely-oriented building, and collapsed building samples for these features. The samples used for drawing the pdfs are shown in Figure 20a, and the pdfs are shown in Figure 15, Figure 16 and Figure 17. It can be seen that the pdfs of orthogonally-oriented building samples are separated from the pdfs of collapsed building samples for these features, whereas the pdfs of obliquely-oriented building samples overlap with the pdfs of collapsed building samples. To address this problem, it is necessary to construct a new feature that can not only effectively distinguish orthogonally-oriented buildings from collapsed buildings, but can also distinguish obliquely-oriented buildings from collapsed buildings. Therefore, we developed the OPCE matching algorithm and generated a new polarization feature, called the MaxC feature, which can effectively distinguish collapsed buildings from both obliquely-oriented and orthogonally-oriented buildings. Details are provided in Section 3.2.1.
The second problem for the classification of collapsed and standing buildings is the misclassification of collapsed buildings in "special" seriously damaged areas. These special seriously damaged areas usually contain a few typical orthogonally-oriented buildings whose structure is retained after the disaster and whose arrangement direction is almost completely parallel or perpendicular to the satellite flight path, as shown by the red circles in Figure 18. These typical orthogonally-oriented buildings usually have a strong double-bounce scattering characteristic and affect the scattering characteristics of the surrounding pixels, in turn affecting the surrounding collapsed buildings and causing them to be easily identified as standing buildings. To address this problem, we introduced texture features to add information on the spatial distribution to the classification. To combine the polarization and texture features, the RF classification algorithm was used, and a multi-feature-based classification method was developed. The details are provided in Section 3.2.2.

3.2.1. The OPCE Matching Algorithm and the Feature MaxC

For target detection using PolSAR images, OPCE is an effective method to discriminate the desired target from the background using the power image [48]. The traditional OPCE algorithm aims to choose the optimal polarization states to enhance the power ratio between the desired target and the background clutter [49,50,51,52,53].
Let P_Target and P_Clutter denote the received power of the desired target samples and of the background clutter samples, respectively; let K_Target and K_Clutter denote the corresponding Kennaugh matrices; and let g = [1 g_1 g_2 g_3]^T and h = [1 h_1 h_2 h_3]^T denote the Stokes vectors of the transmitting and receiving polarization states of the radar antennas. The traditional OPCE algorithm can then be expressed as:
$$\text{maximize}\;\frac{P_{Target}}{P_{Clutter}} = \underset{\mathbf{g},\,\mathbf{h}}{\text{maximize}}\;\frac{\mathbf{h}^T \mathbf{K}_{Target}\,\mathbf{g}}{\mathbf{h}^T \mathbf{K}_{Clutter}\,\mathbf{g}}, \quad \text{subject to}\;\begin{cases} g_1^2 + g_2^2 + g_3^2 = 1 \\ h_1^2 + h_2^2 + h_3^2 = 1 \end{cases}, \tag{6}$$
where ^T denotes the matrix transpose [53,54].
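The paper does not prescribe a particular solver for Equation (6). The sketch below finds the optimal contrast numerically: the unit-norm constraints are enforced implicitly by parameterizing each Stokes vector with orientation and ellipticity angles, and a derivative-free optimizer is run from several random starts to guard against local maxima of the non-convex ratio. This is an illustrative approximation under those assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

def stokes(psi, chi):
    """Unit Stokes vector [1, g1, g2, g3] from orientation psi and ellipticity chi."""
    return np.array([1.0,
                     np.cos(2 * psi) * np.cos(2 * chi),
                     np.sin(2 * psi) * np.cos(2 * chi),
                     np.sin(2 * chi)])

def opce(k_target, k_clutter, n_starts=8, seed=0):
    """Maximal power contrast of Equation (6) for 4x4 Kennaugh matrices."""
    rng = np.random.default_rng(seed)

    def neg_contrast(p):
        g, h = stokes(p[0], p[1]), stokes(p[2], p[3])
        den = h @ k_clutter @ g
        return np.inf if den <= 1e-12 else -(h @ k_target @ g) / den

    best = -np.inf
    for _ in range(n_starts):  # random restarts against local maxima
        p0 = rng.uniform([-np.pi / 2, -np.pi / 4] * 2, [np.pi / 2, np.pi / 4] * 2)
        res = minimize(neg_contrast, p0, method="Nelder-Mead")
        best = max(best, -res.fun)
    return best
```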
Using the traditional OPCE algorithm, we can obtain a pair of optimal polarization states, g_m and h_m. Theoretically, by applying g_m and h_m to each pixel in the PolSAR image, the power of the target is enhanced and the power of the background is weakened. The traditional OPCE algorithm performs well in ship detection research [55]; however, in the classification of collapsed and standing buildings, it does not perform well, as our previous work shows [56]. We think the main reason for this is that the environment of urban areas is more complicated than that of the sea. In urban areas, standing buildings, which are regarded as background clutter in the task of collapsed building detection, usually have different shapes and arrangement directions, which results in different scattering characteristics. Therefore, it is difficult to identify a pair of optimal polarization states g_m and h_m that are suitable for all pixels in urban areas.
To solve this problem, we imported the idea of template matching to the traditional OPCE algorithm and proposed the OPCE matching algorithm. In the OPCE matching algorithm, the desired target samples are first selected as the target template. Then, the maximal power contrast between each pixel and the target template is calculated using the traditional OPCE algorithm. In this way, each pixel obtains a new feature value, which indicates the maximum of the contrast between this pixel’s power and the target template’s power. In this study, we set collapsed buildings as the desired target sample. Algorithm 1 summarizes the OPCE matching algorithm.
Algorithm 1 OPCE matching algorithm
Input: PolSAR image L and target sample set CB
Output: Feature MaxC
 1: K̄_CB ← GetAverageKennaughMatrix(CB)
 2: for i ← 0 to M do
 3:   for j ← 0 to N do
 4:     K_x(i,j) ← GetKennaughMatrix(x(i,j))
 5:     MaxC_x_CB(i,j) ← OPCE(K_x(i,j), K̄_CB)
 6:     MaxC_CB_x(i,j) ← OPCE(K̄_CB, K_x(i,j))
 7:     MaxC(i,j) ← GetMax(MaxC_x_CB(i,j), MaxC_CB_x(i,j))
 8:   end for
 9: end for
10: return MaxC
In Algorithm 1, CB denotes the sample set of collapsed buildings, and K̄_CB denotes the average Kennaugh matrix of the sample set CB. For a given PolSAR image L, M denotes the number of rows of the image, N the number of columns, i the row index, and j the column index; x(i,j) represents an arbitrary pixel, and K_x(i,j) is the Kennaugh matrix of x(i,j). MaxC_x_CB denotes the maximal power contrast between pixel x(i,j) and the target template, as shown in Equation (7), and MaxC_CB_x denotes the maximal power contrast between the target template and pixel x(i,j), as shown in Equation (8). MaxC represents the final maximal power contrast, which is also the output of the OPCE matching algorithm.
$$\mathrm{MaxC}_{x\_CB}(i,j) = \underset{\mathbf{g},\,\mathbf{h}}{\text{maximize}}\;\frac{\mathbf{h}^T \mathbf{K}_x(i,j)\,\mathbf{g}}{\mathbf{h}^T \overline{\mathbf{K}_{CB}}\,\mathbf{g}}, \quad \text{subject to}\;\begin{cases} g_1^2 + g_2^2 + g_3^2 = 1 \\ h_1^2 + h_2^2 + h_3^2 = 1 \end{cases}, \tag{7}$$
$$\mathrm{MaxC}_{CB\_x}(i,j) = \underset{\mathbf{g},\,\mathbf{h}}{\text{maximize}}\;\frac{\mathbf{h}^T \overline{\mathbf{K}_{CB}}\,\mathbf{g}}{\mathbf{h}^T \mathbf{K}_x(i,j)\,\mathbf{g}}, \quad \text{subject to}\;\begin{cases} g_1^2 + g_2^2 + g_3^2 = 1 \\ h_1^2 + h_2^2 + h_3^2 = 1 \end{cases}. \tag{8}$$
The schematic diagram of the OPCE matching algorithm is shown in Figure 19.
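Given the opce() contrast solver sketched above, Algorithm 1 reduces to a double loop over the image. The array shapes and function names below are assumptions for illustration:

```python
import numpy as np

def opce_matching(kennaugh, cb_samples):
    """MaxC feature image via the OPCE matching algorithm (Algorithm 1).

    kennaugh:   (M, N, 4, 4) per-pixel Kennaugh matrices of the PolSAR image.
    cb_samples: (K, 4, 4) Kennaugh matrices of collapsed-building samples.
    Relies on the opce() sketch given after Equation (6).
    """
    k_cb = cb_samples.mean(axis=0)  # average template matrix (line 1)
    rows, cols = kennaugh.shape[:2]
    max_c = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            k_x = kennaugh[i, j]
            # Both directions of the contrast, Equations (7) and (8);
            # the larger value is kept (lines 5-7 of Algorithm 1).
            max_c[i, j] = max(opce(k_x, k_cb), opce(k_cb, k_x))
    return max_c
```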
By implementing the OPCE matching algorithm, we can obtain a new feature image, i.e., the MaxC feature image. Theoretically, in this image, pixels belonging to the collapsed building category have a lower value of MaxC than pixels belonging to the orthogonally-oriented or obliquely-oriented standing building categories because similar objects usually have a smaller contrast.
We implemented the OPCE matching algorithm in Yushu, Ishinomaki, and Mashiki town study sites. For each study site, only a few collapsed building samples are required to implement the OPCE matching algorithm, as shown by the yellow rectangles in Figure 20a,c,e. The results of the MaxC features in the three study sites are shown in Figure 20b,d,f.
From Figure 20b,d,f, it can be seen that collapsed buildings indeed have lower MaxC values than orthogonally-oriented or obliquely-oriented buildings, which is consistent with the theoretical analysis. In the same manner as for the analysis of the ρ, P_d, and Span features, we also drew the pdfs of orthogonally-oriented building, obliquely-oriented building, and collapsed building samples for the MaxC feature in the Yushu study site, as shown in Figure 21. Compared with the pdfs of the ρ, P_d, and Span features in Figure 15, Figure 16 and Figure 17, for the MaxC feature the orthogonally-oriented building and obliquely-oriented building pdfs can both be clearly separated from the collapsed building pdf.
To adequately verify the above, we also drew the pdfs of the feature MaxC for the other two study sites, as shown in Figure 22 and Figure 23. The samples used to draw Figure 22 and Figure 23 are shown in Figure 20c,e, respectively (collapsed building samples—red patches, orthogonally-oriented building samples—blue patches, obliquely-oriented building samples—green patches).
It can be observed that the MaxC feature also separates collapsed buildings well from both orthogonally-oriented and obliquely-oriented buildings in the Ishinomaki and Mashiki town study sites. To quantitatively measure the ability of the MaxC feature to distinguish obliquely-oriented buildings from collapsed buildings, the Jeffreys–Matusita (J–M) distance was introduced to calculate the separability. The J–M distance is a widely used index for selecting and comparing features, and can effectively evaluate the ability of a feature to recognize a target [57]. Its value ranges from 0 to 2; the higher the value, the stronger the distinguishability of the two targets. The J–M distance can be expressed as:
$$J = 2\left(1 - e^{-B}\right), \qquad B = \frac{1}{8}\left(m_1 - m_2\right)^2 \frac{2}{\delta_1^2 + \delta_2^2} + \frac{1}{2}\ln\left(\frac{\delta_1^2 + \delta_2^2}{2\delta_1\delta_2}\right), \tag{9}$$
where J is the J–M distance of the feature; m_i, i = 1, 2 are the means of the feature values of the two targets; and δ_i, i = 1, 2 are the corresponding standard deviations.
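Under the Gaussian assumption implicit in Equation (9), the J–M distance between two sample sets of a feature can be computed directly from their means and standard deviations; a minimal sketch:

```python
import numpy as np

def jm_distance(samples_1, samples_2):
    """Jeffreys-Matusita distance between two 1-D feature sample sets, Equation (9)."""
    m1, m2 = np.mean(samples_1), np.mean(samples_2)
    d1, d2 = np.std(samples_1), np.std(samples_2)
    b = (0.25 * (m1 - m2) ** 2 / (d1 ** 2 + d2 ** 2)
         + 0.5 * np.log((d1 ** 2 + d2 ** 2) / (2.0 * d1 * d2)))
    return 2.0 * (1.0 - np.exp(-b))
```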
Table 1 shows the J–M distance between collapsed and obliquely-oriented buildings for the ρ, P_d, Span, and MaxC features in the three study sites. The samples used for calculating the J–M distance are the same as those used for drawing the pdfs. It can be observed that the J–M distance for the MaxC feature is significantly higher than for the other features, by a factor of about five. This further proves that the MaxC feature has a better ability to distinguish collapsed buildings from obliquely-oriented buildings.

3.2.2. Multi-Feature-Based Collapsed and Standing Building Classification

In Section 3.2.1, we proved that the MaxC feature has a strong ability to distinguish collapsed buildings from obliquely-oriented buildings, which can be used to solve the first problem in the classification of collapsed and standing buildings. Taking the Yushu study site as an example, we conducted a classification of collapsed and standing buildings using only the MaxC feature. The result is shown in Figure 24a. It is notable that most of the collapsed buildings are correctly distinguished from standing buildings, and almost all obliquely-oriented buildings, highlighted by the blue rectangle in Figure 24a, are correctly classified as standing buildings. These results indicate the advantage of the MaxC feature.
However, we can also note that using the MaxC feature alone is not enough. The areas highlighted by the yellow rectangles in Figure 24a were in reality seriously damaged; nevertheless, the MaxC feature could not effectively identify the collapsed buildings in these areas. This is the second problem discussed at the beginning of Section 3.2: the misclassification of collapsed buildings in some special seriously damaged areas. The main reason for this problem is that typical orthogonally-oriented buildings affect the scattering characteristics of the collapsed buildings in these special areas. As a result, the affected collapsed buildings have scattering characteristics similar to those of standing buildings rather than of normal collapsed buildings. Therefore, distinguishing between these affected collapsed buildings and standing buildings is a challenge when using polarization features; many polarization features show a weakness in this respect, as shown in Table 2. Consequently, using polarization information alone to classify collapsed and standing buildings is not sufficient, and it is necessary to introduce other features and perform a multi-feature-based classification.
In this section, texture features are introduced to add spatial information to address this problem. GLCM [58] is a traditional and widely used method to extract texture features. Generally, there are eight second-order statistical GLCM texture features, namely, mean (Mean), variance (Var), homogeneity (Hom), contrast (Con), dissimilarity (Dis), entropy (Entr), second moment (SeM), and correlation (Cor). Many studies have used GLCM texture features to identify earthquake or tsunami-induced building damage, and have proved that these texture features have good performance in classifying collapsed and standing buildings [31,59,60]. In this study, the GLCM texture features were used to distinguish affected collapsed buildings from standing buildings. To quantitatively analyze the performance, we calculated the J–M distances of eight GLCM texture features between affected collapsed building samples and standing building samples. The results are shown in Table 3.
It is obvious that the J–M distances between affected collapsed buildings and standing buildings for the GLCM texture features are larger than those for the polarization features. For the GLCM texture features, the J–M distances between affected collapsed buildings and standing buildings range from 0.46 to 0.68, around two to three times higher than those of the polarization features. This shows that the GLCM texture features have the potential to solve the problem of the misclassification between affected collapsed buildings and standing buildings. To prove this, we used the MaxC feature and the eight GLCM texture features together and conducted the classification again for the Yushu study site. The classification result of collapsed and standing buildings is shown in Figure 24b. Compared with Figure 24a, it can be observed that the misclassifications in the two yellow boxes were significantly corrected by introducing these texture features. In addition, the noise in the classification result was reduced as a result of combining multiple features.
In this study, the multi-feature-based classification method was carried out as follows: First, the MaxC feature and eight GLCM texture features were calculated based on the pre-processed PolSAR data. Then, these were stacked into a feature cube and input into the RF classifier. The classification was performed only for built-up areas, which were obtained by the non-building removal process outlined in Section 3.1.
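To make the texture computation concrete, the sketch below derives the eight GLCM statistics for a single window using scikit-image; the quantization level is our assumption. In practice the window slides over the image, yielding one texture band per statistic, which is then stacked with the MaxC feature for the RF classifier:

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_features(window, levels=32, distance=1, angle=np.pi / 4):
    """Eight second-order GLCM statistics for one quantized image window.

    window: 2-D uint8 array with values in [0, levels); distance=1 and
    angle=45 deg match the settings used for Yushu and Ishinomaki.
    """
    glcm = graycomatrix(window, [distance], [angle],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]  # normalized co-occurrence matrix
    i, j = np.indices(p.shape)

    mean = np.sum(i * p)
    var = np.sum((i - mean) ** 2 * p)
    hom = np.sum(p / (1.0 + (i - j) ** 2))
    con = np.sum((i - j) ** 2 * p)
    dis = np.sum(np.abs(i - j) * p)
    entr = -np.sum(p[p > 0] * np.log(p[p > 0]))
    sem = np.sum(p ** 2)  # second (angular) moment
    cor = np.sum((i - mean) * (j - mean) * p) / var if var > 0 else 0.0
    return np.array([mean, var, hom, con, dis, entr, sem, cor])
```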

4. Results

As mentioned in Section 3, the proposed method for building damage detection adopts a two-step classification strategy. The first step is the separation of non-building areas and built-up areas, through which we remove most of the interference from non-building objects. In the second step, we focus on the classification of collapsed and standing buildings in built-up areas. For both steps, we propose new features and new methods to conduct the corresponding classification. To clearly show the performance of each step of the proposed method, we analyzed the results of non-building area removal and the results of building damage detection separately. In the experiments, the RF classification algorithm was implemented using the "imageRF" module in the EnMAP-Box toolbox [61], where the number of decision trees was set to 100.

4.1. Results of Non-Building Area Removal

Figure 25c, Figure 26c and Figure 27c show the results of non-building area removal by the proposed method in the Yushu, Ishinomaki, and Mashiki town study sites. We compared our results with the results of the H/α method [29] and the H/α-Wishart classification method [30]. The results of the H/α method in the three study sites are shown in Figure 25a, Figure 26a and Figure 27a, and the results of the H/α-Wishart classification method are shown in Figure 25b, Figure 26b and Figure 27b. In all figures, the base map is the Pauli RGB image of the PolSAR data in the corresponding study site, and the blue areas show the built-up areas retained after removing non-building areas with each method.
From the figures, it is notable that the H/α method results in serious misclassification. Many non-building areas, especially mountain vegetation and farmland areas, could not be distinguished from the built-up areas by this method. The situation is worst in the Mashiki town study site because the urban area there is completely surrounded by farmland. Compared to the H/α method, the H/α-Wishart classification method results in less misclassification. Many mountain vegetation and farmland areas are removed by this method. However, roads and rivers could not be removed well, especially when comparing Figure 25b,c. In addition, some of the built-up areas are removed incorrectly by this method, which can be observed clearly in the coastal area of Ishinomaki city in Figure 26b.
Compared to the above two methods, the proposed method has the most reliable results. Not only can it correct the misclassification in mountain vegetation, farmland, road, and river areas, but it can also avoid affecting built-up areas. As shown in Figure 25c, Figure 26c and Figure 27c, most of the mountain vegetation areas and farmland areas are effectively removed, resulting in clear outlines of these urban areas. In addition, roads and rivers are effectively removed, as shown by the north–south wide river in the east of Yushu County in Figure 25c, the east–west narrow river in the north of Yushu County in Figure 25c, the northeast–southwest road in the Ishinomaki city in Figure 26c, and the east–west narrow road in Mashiki town in Figure 27c. In addition, the built-up areas in the three study sites are almost completely retained.
For further comparison, we conducted quantitative analysis for these methods. A confusion matrix was calculated based on test samples for each study site. To generate the test samples, we first selected primary samples for built-up area and each kind of non-building area by visually interpreting the Google Earth images. Then, we merged all kinds of non-building area primary samples as one primary sample set. Finally, we generated an equal number of random samples for built-up area and non-building areas from the built-up area primary sample and non-building area primary sample sets, and used these random samples as test samples. In Yushu, there were 300 non-building area test samples and 300 built-up area test samples; in Ishinomaki, there were 1000 non-building area test samples and 1000 built-up area test samples; and in Mashiki town, there were 500 non-building area test samples and 500 built-up area test samples.
The confusion matrix is shown in Table 4. As can be seen from the table, the overall accuracy (OA) of the proposed method was over 90% for the classification of built-up and non-building areas in all three study sites, and up to 98% in the Mashiki town study site. Compared with the other two methods, the OA of built-up and non-building area classification improves by more than 10% using the proposed method. These results indicate the effectiveness of the proposed solution for built-up area extraction.
In addition, because our aim was to detect collapsed buildings in the following step, it was also important to evaluate the extent to which the non-building area removal method misclassifies collapsed building pixels as non-buildings. Therefore, we further calculated the error rate of the three methods. The error rate is defined as the probability that collapsed building test samples are misclassified as non-building areas. We generated 300, 500, and 400 collapsed building test samples for Yushu, Ishinomaki, and Mashiki town, respectively. The evaluation results are shown in Table 5, where it can be seen that the method proposed in this research has the least misclassification of collapsed building pixels.
Through the above analyses, it can be observed that the proposed method improves the overall accuracy of non-building area removal to over 90%, while also ensuring that the misclassification of collapsed buildings is less than 4%, which proves that the proposed method is more effective.

4.2. Results of Building Damage Detection

After removing the non-building areas, the pixels in built-up areas were classified to the collapsed and standing buildings categories using the multi-feature-based classification method mentioned in Section 3.2 to obtain the final result of building damage detection.
In the experiments, the step width for the calculation of the GLCM texture features was set to 1 empirically. The window size and direction for the calculation of the GLCM texture features were set to 9 × 9 and 45° in the Yushu and Ishinomaki study sites, and to 5 × 5 and 90° in the Mashiki town study site, according to the analyses in Section 5.1. The numbers of training samples for the classification in the three study sites are shown in Table 6.
The detection results for the three study sites are shown in Figure 28, where the red parts are the collapsed buildings, the green parts are the standing buildings, and the white parts are the non-building areas. It can be seen that the main collapsed areas, shown with yellow circles in Figure 28, are all correctly detected in the three study sites, even for the areas that contain the affected collapsed buildings (shown with pink rectangles in Figure 28a). The obliquely-oriented buildings, shown with the blue rectangles in Figure 28, are almost all correctly classified as standing buildings in the three study sites. These results demonstrate the ability of the proposed method to distinguish between collapsed and standing buildings.
To quantitatively evaluate the performance of the proposed method, the detection results were compared with the reference maps. To unify the form of the detection results and the reference maps, we converted the detection results of the Yushu and Ishinomaki study sites into block-level damage maps, and the detection result of the Mashiki town study site into a grid-level damage map. To implement the conversion, the building block collapse rate (BBCR) was calculated [30,35]. For each block or grid, the BBCR is defined as the ratio between the number of collapsed building pixels and the total number of building pixels:
$$\mathrm{BBCR}_j = \frac{T_{CB_j}}{T_{CB_j} + T_{SB_j}}, \tag{10}$$
where BBCR_j denotes the BBCR of the j-th building block or grid, T_CB_j denotes the number of collapsed building pixels in the j-th block or grid, and T_SB_j denotes the number of standing building pixels in the j-th block or grid.
After calculating the BBCR, the blocks or grids were divided into three damage levels (slight, moderate, and serious damage) according to their BBCR values, similar to the generation of the reference maps. Specifically, the block-level damage maps in the Yushu and Ishinomaki study sites were generated using Formula (11), and the grid-level damage map in the Mashiki town study site was generated using Formula (12):
$$\begin{cases} \mathrm{BBCR}_j \le 0.3 & \Rightarrow \mathrm{Block}_j \in \text{Slight damage} \\ 0.3 < \mathrm{BBCR}_j \le 0.5 & \Rightarrow \mathrm{Block}_j \in \text{Moderate damage} \\ \mathrm{BBCR}_j > 0.5 & \Rightarrow \mathrm{Block}_j \in \text{Serious damage} \end{cases} \tag{11}$$
$$\begin{cases} \mathrm{BBCR}_j \le 0.25 & \Rightarrow \mathrm{Grid}_j \in \text{Slight damage} \\ 0.25 < \mathrm{BBCR}_j \le 0.5 & \Rightarrow \mathrm{Grid}_j \in \text{Moderate damage} \\ \mathrm{BBCR}_j > 0.5 & \Rightarrow \mathrm{Grid}_j \in \text{Serious damage} \end{cases} \tag{12}$$
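The conversion from pixel counts to damage grades is then a direct application of Equation (10) and the thresholds of Formulas (11) and (12); for instance:

```python
def damage_level(collapsed_pixels, standing_pixels, block_level=True):
    """Damage grade of one block or grid from its BBCR, Equation (10).

    block_level=True applies the block thresholds of Formula (11);
    block_level=False applies the grid thresholds of Formula (12).
    """
    bbcr = collapsed_pixels / (collapsed_pixels + standing_pixels)
    slight_threshold = 0.3 if block_level else 0.25
    if bbcr <= slight_threshold:
        return "slight damage"
    if bbcr <= 0.5:
        return "moderate damage"
    return "serious damage"
```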
The building damage detection results of the H/α/ρ method [29] and the PWMF method [1] were also converted into damage maps and compared with the results of the proposed method.
The accuracy evaluation and comparison results for the Yushu study site are shown in Figure 29 and Table 7. It is notable that the block-level damage map from the proposed method is the most similar to the reference map. Compared with the damage maps from the other two methods, almost all obliquely-oriented standing building areas in the north-east of Yushu County were correctly assessed as slight damage by the proposed method. Moreover, the proposed method correctly assessed the special seriously damaged areas as serious damage, as shown in the southern part of Yushu County. Table 7 shows that the proposed method obtained the highest detection rate for each damage level, and the overall accuracy of building damage detection was up to 82.3%.
To verify the applicability of the proposed method, the comparison and accuracy evaluation were also carried out in the Ishinomaki and Mashiki town study sites, as shown in Figure 30 and Figure 31, and Table 8 and Table 9. The results show that the proposed method assessed the obliquely-oriented standing building areas correctly in both the Ishinomaki and Mashiki town study sites, and obtained more accurate detection results than the other two methods.
The figures and tables show that the proposed method is effective and reliable. The overall accuracy of building damage detection was more than 78% in all three study sites, and up to 97% in the Ishinomaki study site. Compared with the other methods, the proposed method effectively reduced the misclassification of slight damage areas while also obtaining the highest detection rate in serious damage areas. Overall, these results prove the effectiveness of the proposed method. However, it is also notable that the detection rate of moderate damage areas is not very high; this appears to be a common problem for all the methods. In our opinion, the main cause is that comparable numbers of collapsed and standing buildings intermingle in moderate damage areas, so these areas lack dominant characteristics and their pixels are difficult to classify correctly. This problem will be one of the focuses of our future research.

5. Discussion

5.1. The Selection of Parameters for the Calculation of Texture Features

5.1.1. Window Size

Texture features are significantly influenced by the size of the sliding window. An undersized window is susceptible to speckle noise, whereas an oversized window is likely to smooth out detailed information, so a suitable window size is important for the calculation of texture features. When choosing the window size, both the image resolution and the complexity of the land cover distribution in the study area are important factors. To choose a suitable window size for each study site, we conducted comparative experiments in all study sites: for each site, the window size was varied from 3 × 3 to 17 × 17, the GLCM texture features were calculated under each window size, building damage detection was performed with these features, and the overall accuracies were compared. The comparison results are shown in Figure 32.
It can be observed that building damage detection performed best with a 9 × 9 window in the Yushu and Ishinomaki study sites. In our opinion, this is mainly because the PolSAR data in these two study sites have a similar resolution after multi-look processing, and the sizes of the blocks used to evaluate accuracy are also similar. In the Mashiki town study site, the overall accuracy of building damage detection varied irregularly with window size. We think the main reason is that the grid used to evaluate the accuracy of the detection result in Mashiki town is relatively small, which makes the overall accuracy very sensitive to small changes in the detection results. Compared with the other window sizes, the 5 × 5 window gave a better result in the Mashiki town study site. In addition, a small window size is more suitable for retaining detailed information there because the scale of this study site is significantly smaller than that of the other two. Therefore, in our experiments, the window size was set to 9 × 9 pixels for the Yushu and Ishinomaki study sites and 5 × 5 pixels for the Mashiki town study site.
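For reference, the texture computation behind these comparisons can be sketched with scikit-image as follows. This is a minimal single-window illustration under our own assumptions (32 gray levels after quantization, one distance, one direction), not the authors' implementation; the per-pixel sliding-window loop over the whole image is omitted:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window: np.ndarray, direction_deg: float = 45.0,
                  step: int = 1, levels: int = 32) -> dict:
    """Eight GLCM texture features of one sliding window (e.g. 9 x 9).

    `window` must be an integer-valued patch quantized to [0, levels);
    `step` is the step width (1 in our experiments) and `direction_deg`
    the co-occurrence direction (0, 45, 90, or 135 degrees).
    """
    glcm = graycomatrix(window, distances=[step],
                        angles=[np.deg2rad(direction_deg)],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                 # normalized co-occurrence matrix
    i, _ = np.indices(p.shape)
    mu = np.sum(i * p)                   # GLCM mean
    feats = {
        "mean": mu,
        "variance": np.sum((i - mu) ** 2 * p),
        "entropy": -np.sum(p[p > 0] * np.log(p[p > 0])),
        "second_moment": np.sum(p ** 2),  # angular second moment
    }
    for name in ("homogeneity", "contrast", "dissimilarity", "correlation"):
        feats[name] = graycoprops(glcm, name)[0, 0]
    return feats

# Toy usage: a quantized 9 x 9 patch of random gray levels
patch = np.random.randint(0, 32, size=(9, 9), dtype=np.uint8)
print(glcm_features(patch))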

5.1.2. Direction

Direction is also an important parameter for the calculation of GLCM texture features. Generally, four directions are used: 0°, 45°, 90°, and 135°. To choose the most suitable direction for each study site, we conducted a comparative experiment with different directions of texture features. The results are shown in Figure 33.
The comparison shows that the calculation direction of the texture features has little effect on the detection results. In all three study sites, the difference in the overall accuracies of building damage detection was less than 7% as the calculation direction ranged from 0° to 135°. Therefore, the choice of calculation direction can generally be ignored when applying our method to rapid building damage detection. In this study, to obtain the best results in each study site, we set the calculation direction to 45° in the Yushu and Ishinomaki study sites and 90° in the Mashiki town study site, according to Figure 33. Our preliminary view is that the best direction for calculating the texture features in a study area may be related to the flight direction of the radar and to the dominant arrangement direction of the buildings in that area; future studies and analyses will focus on this aspect.
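The direction comparison itself amounts to a small parameter sweep. In the sketch below, run_damage_detection is a hypothetical, caller-supplied stand-in for the full detection pipeline (feature extraction, RF classification, and evaluation) that returns the overall accuracy for one direction setting:

def best_direction(run_damage_detection, polsar_scene, reference_map):
    """Sweep the four GLCM directions and keep the one with the best OA.

    `run_damage_detection` is a caller-supplied function (a hypothetical
    placeholder for the full pipeline) returning the overall accuracy
    obtained with one co-occurrence direction.
    """
    oa_by_direction = {
        d: run_damage_detection(polsar_scene, reference_map, direction_deg=d)
        for d in (0.0, 45.0, 90.0, 135.0)
    }
    best = max(oa_by_direction, key=oa_by_direction.get)
    return best, oa_by_direction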

5.2. The Comparison of Two Evaluation Methods

Accuracy evaluation is important for verifying the effectiveness of the proposed method. A suitable evaluation method can reflect the differences between the experimental results and the reference maps objectively and accurately. In previous research, two evaluation methods have frequently been used for block/grid-level damage maps. The first is the block-count-based evaluation method, which uses the whole block or grid as the unit to count the proportions of correct and incorrect classifications and generate the confusion matrix, as shown in Figure 34a. The other is the pixel-count-based evaluation method, which also compares the experimental and reference maps block by block (or grid by grid), but generates the confusion matrix by counting the number of pixels contained in the correctly/incorrectly classified blocks (or grids) rather than the number of blocks, as shown in Figure 34b. Compared with the block-count-based method, the pixel-count-based method is more impartial because the pixel size is fixed, so the result is not influenced by the size of the blocks; the block-count-based method, however, is easier to implement.
Many studies have used the block-count-based method to evaluate experimental results [4,30] because of its simplicity. However, we argue that the block-count-based method should only be used when all blocks in a study site have a uniform size; otherwise, the differing block sizes introduce an obvious evaluation error. To illustrate this, we compared the performance of the two evaluation methods in the three study sites. The results are shown in Table 10.
It can be observed that in the Yushu and Ishinomaki study sites, where the experimental results are at block level and the blocks have different sizes, the evaluation results obtained by the block-count-based method differed significantly from those obtained by the pixel-count-based method; the difference in overall accuracy was up to 10%. As mentioned in Section 2, the blocks in these two study sites were generated according to road information and the similarity of building damage, so their sizes are non-uniform. In the Mashiki town study site, the evaluation results obtained by the two methods were the same, mainly because the experimental and reference damage maps there are at grid level and all grids have a uniform size across the whole study site. Therefore, the block-count-based method performs similarly to the pixel-count-based method only when the blocks in a study site have a uniform size. In other cases, we recommend using the pixel-count-based method to evaluate the accuracy.
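The difference between the two strategies reduces to whether a correctly classified block counts once or is weighted by its pixel area. A minimal sketch with hypothetical inputs (our own illustration) shows why a large misclassified block pulls the pixel-count OA well below the block-count OA:

import numpy as np

def overall_accuracies(pred_levels, ref_levels, block_sizes):
    """Compare the two evaluation strategies on block-level damage maps.

    pred_levels / ref_levels: damage level per block (integer codes);
    block_sizes: number of pixels contained in each block.
    """
    pred = np.asarray(pred_levels)
    ref = np.asarray(ref_levels)
    sizes = np.asarray(block_sizes)
    correct = pred == ref
    oa_block = correct.mean()                      # each block counts once
    oa_pixel = sizes[correct].sum() / sizes.sum()  # blocks weighted by size
    return oa_block, oa_pixel

# Toy example: the third (largest) block is misclassified, so the
# pixel-count OA drops far below the block-count OA.
print(overall_accuracies([0, 1, 2, 2], [0, 1, 1, 2], [100, 120, 600, 80]))
# -> (0.75, 0.333...)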

6. Conclusions

As an active remote sensing technology, SAR has shown strong potential for building damage detection because it can provide a quick response and large-area monitoring after a disaster. Many SAR-based change detection methods have been proposed for building damage detection. These methods require pre-event SAR images with the same imaging geometry and an appropriate baseline relative to the post-event SAR images. However, for unpredictable natural disasters, suitable pre-event SAR images are not always available, and many methods for building damage detection using single post-event PolSAR data have therefore been developed. Due to the absence of prior information before the disaster, new problems arise when only post-event PolSAR data are used, for example, the misclassification between non-building areas and collapsed buildings, and the confusion between obliquely-oriented buildings and collapsed buildings. To address these problems and improve detection accuracy, a new method for building damage detection using only post-event PolSAR data was proposed in this study. Through a series of comparisons, evaluations, and analyses, the proposed method was proven to effectively address these problems and significantly improve the accuracy of building damage detection.
Experiments and analyses were implemented on Radarsat-2 data from the Yushu earthquake, ALOS-1 PALSAR data from the 2011 Tohoku tsunami, and ALOS-2 data from the 2016 Kumamoto earthquake. The proposed method adopts a two-step classification strategy to detect building damage. In the first step, it provides a new solution for built-up area extraction in post-disaster PolSAR images. The analyses show that the π/4 double-bounce scattering component of the Pauli decomposition, the radar vegetation index, and the intensity component of the Shannon entropy have a strong ability to distinguish different kinds of non-building areas from built-up areas. Based on this, a three-feature-based supervised classification method was developed to remove non-building areas and extract built-up areas in post-disaster PolSAR images. The experiments in the three study sites show that the proposed method can improve the accuracy of built-up area extraction to over 90%, which is around 10% higher than the other methods. In the second step, by developing the OPCE matching algorithm, the proposed method created a new polarization feature, MaxC. The analyses show that MaxC can distinguish not only orthogonally-oriented but also obliquely-oriented standing buildings from collapsed buildings. The J–M distance between collapsed and obliquely-oriented buildings in the MaxC feature is more than five times higher than in the circular polarization correlation coefficient ρ, the double-bounce scattering component of the Yamaguchi decomposition P_d, and the total power (Span). Using the MaxC feature together with eight GLCM texture features, a multi-feature-based RF classification method was developed in the second step to classify the collapsed and standing buildings in built-up areas. The experimental results show that the overall accuracy of the proposed method for building damage detection was 82.3%, 97.4%, and 78.5% in the Yushu, Ishinomaki, and Mashiki town study sites, respectively, even with relatively low-resolution space-borne PolSAR images. Compared with the H-α-ρ and PWMF methods, the detection accuracy for slightly damaged, moderately damaged, and seriously damaged areas improved by about 20%, 8%, and 10%, respectively, using the proposed method.
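As a note on reproducing the separability analysis: the J–M distances reported in this study follow the complex Wishart formulation of [57]. The sketch below implements only the simpler univariate Gaussian form of the J–M distance (our own simplification, not the paper's exact criterion), which can serve as a rough stand-in when comparing scalar features such as MaxC or Span:

import numpy as np

def jm_distance_gaussian(x: np.ndarray, y: np.ndarray) -> float:
    """J-M distance between two feature-sample sets, assuming each class
    is univariate Gaussian (a simplification of the Wishart-based
    criterion in [57]). The result lies in [0, 2]."""
    m1, m2 = x.mean(), y.mean()
    v1, v2 = x.var(ddof=1), y.var(ddof=1)
    v = (v1 + v2) / 2.0
    # Bhattacharyya distance for two 1-D Gaussian distributions
    b = (m1 - m2) ** 2 / (8.0 * v) + 0.5 * np.log(v / np.sqrt(v1 * v2))
    return float(2.0 * (1.0 - np.exp(-b)))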
In addition, two evaluation methods for block/grid-level building damage detection results were compared in this study. The experimental results revealed that when the blocks in a study site do not have a uniform size, the two methods produce significantly different evaluation results, and the results from the pixel-count-based evaluation method are more reliable. Therefore, in most cases, we recommend using the pixel-count-based evaluation method to evaluate the result of building damage detection. In the future, improving the extraction accuracy of moderately damaged areas will be further studied. In addition, due to the difficulty of obtaining samples, we will also pay more attention to unsupervised methods for building damage detection using single post-event PolSAR data.

Author Contributions

Conceived and designed the study, Q.Z., Y.N. and H.Z.; methodology, Q.Z. and Y.N.; source code, Y.N. and Q.W.; investigation, data processing, analysis, validation, visualization, writing—original draft preparation, Y.N.; writing—review and editing, Q.Z. and Y.N.; supervision, Q.Z.; project administration, Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (NSFC), grant number 41571337.

Data Availability Statement

Restrictions apply to the availability of these data. The ALOS PALSAR PolSAR data from the 2011 Tohoku tsunami and the ALOS-2 PALSAR-2 PolSAR data from the 2016 Kumamoto earthquake were obtained from JAXA and are available at https://auig2.jaxa.jp/openam/UI/Login?goto=http%3A%2F%2Fal2mwb01%3A80%2Fips%2Fhome%3Flanguage%3Den_U with the permission of JAXA. The Radarsat-2 PolSAR data from the 2010 Yushu earthquake were obtained from the research group of the China Earthquake Administration led by Professor Jingfa Zhang, in cooperation with an 863 Program research project.

Acknowledgments

We are grateful to Jian Jiao from Peking University for his guidance on this work and his help with project administration. We are also grateful to the research group of the China Earthquake Administration led by Jingfa Zhang for sharing the Radarsat-2 data in cooperation with an 863 Program research project. The ALOS-1 PALSAR data of Ishinomaki and the ALOS-2 PALSAR-2 data of Mashiki town were provided by JAXA under the ALOS-2 RA-6 Research Project (No. 3319). Our thanks are also extended to Jian Yang from Tsinghua University and Zezhong Wang from Peking University for their academic communication and good suggestions. We would also like to thank ESA-ESRIN and I.E.T.R-UMR CNRS 6164 for providing the PolSARpro software used in this work. Finally, we express our gratitude to the anonymous reviewers for their advice on improving the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhai, W.; Huang, C.L.; Pei, W.S. Building Damage Assessment Based on the Fusion of Multiple Texture Features Using a Single Post-Earthquake PolSAR Image. Remote Sens. 2019, 11, 897.
  2. Ji, Y.Q.; Sumantyo, J.T.S.; Chua, M.Y.; Waqar, M.M. Earthquake/Tsunami Damage Level Mapping of Urban Areas Using Full Polarimetric SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2296–2309.
  3. Chen, S.W.; Wang, X.S.; Xiao, S.P. Urban Damage Level Mapping Based on Co-Polarization Coherence Pattern Using Multitemporal Polarimetric SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2657–2667.
  4. Ji, Y.Q.; Sumantyo, J.T.S.; Chua, M.Y.; Waqar, M.M. Earthquake/Tsunami Damage Assessment for Urban Areas Using Post-Event PolSAR Data. Remote Sens. 2018, 10, 1088.
  5. Kalantar, B.; Ueda, N.; Al-Najjar, H.A.H.; Halin, A.A. Assessment of Convolutional Neural Network Architectures for Earthquake-Induced Building Damage Detection Based on Pre- and Post-Event Orthophoto Images. Remote Sens. 2020, 12, 3529.
  6. Moya, L.; Yamazaki, F.; Liu, W.; Yamada, M. Detection of Collapsed Buildings from Lidar Data Due to the 2016 Kumamoto Earthquake in Japan. Nat. Hazards Earth Syst. Sci. 2018, 18, 65–78.
  7. Janalipour, M.; Mohammadzadeh, A. Building Damage Detection Using Object-Based Image Analysis and ANFIS from High-Resolution Image (Case Study: BAM Earthquake, Iran). IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1937–1945.
  8. Kaya, G.T.; Musaoglu, N.; Ersoy, O.K. Damage Assessment of 2010 Haiti Earthquake with Post-Earthquake Satellite Image by Support Vector Selection and Adaptation. Photogramm. Eng. Remote Sens. 2011, 77, 1025–1035.
  9. Nex, F.; Duarte, D.; Tonolo, F.G.; Kerle, N. Structural Building Damage Detection with Deep Learning: Assessment of a State-of-the-Art CNN in Operational Conditions. Remote Sens. 2019, 11, 2765.
  10. Rupnik, E.; Nex, F.; Toschi, I.; Remondino, F. Contextual Classification Using Photometry and Elevation Data for Damage Detection After an Earthquake Event. Eur. J. Remote Sens. 2018, 51, 543–557.
  11. Nex, F.; Duarte, D.; Steenbeek, A.; Kerle, N. Towards Real-Time Building Damage Mapping with Low-Cost UAV Solutions. Remote Sens. 2019, 11, 287.
  12. Ge, P.L.; Gokon, H.; Meguro, K. A Review on Synthetic Aperture Radar-Based Building Damage Assessment in Disasters. Remote Sens. Environ. 2020, 240, 111693.
  13. Matsuoka, M.; Yamazaki, F. Use of Satellite SAR Intensity Imagery for Detecting Building Areas Damaged Due to Earthquakes. Earthq. Spectra 2004, 20, 975–994.
  14. Matsuoka, M.; Nojima, N. Building Damage Estimation by Integration of Seismic Intensity Information and Satellite L-band SAR Imagery. Remote Sens. 2010, 2, 2111–2126.
  15. Liu, W.; Yamazaki, F. Extraction of Collapsed Buildings in the 2016 Kumamoto Earthquake Using Multi-Temporal PALSAR-2 Data. J. Disaster Res. 2017, 12, 241–250.
  16. Hoffmann, J. Mapping Damage During the Bam (Iran) Earthquake Using Interferometric Coherence. Int. J. Remote Sens. 2007, 28, 1199–1216.
  17. Chen, S.W.; Sato, M. Tsunami Damage Investigation of Built-Up Areas Using Multitemporal Spaceborne Full Polarimetric SAR Images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1985–1997.
  18. Yamaguchi, Y. Disaster Monitoring by Fully Polarimetric SAR Data Acquired with ALOS-PALSAR. Proc. IEEE 2012, 100, 2851–2860.
  19. Park, S.-E.; Jung, Y.T. Detection of Earthquake-Induced Building Damages Using Polarimetric SAR Data. Remote Sens. 2020, 12, 137.
  20. Arciniegas, G.A.; Bijker, W.; Kerle, N.; Tolpekin, V.A. Coherence- and Amplitude-Based Analysis of Seismogenic Damage in Bam, Iran, Using ENVISAT ASAR Data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1571–1581.
  21. Zhang, X.D.; Liu, W.X.; He, S.G. Urban Change Detection in TerraSAR Image Using the Difference Method and SAR Coherence Coefficient. J. Eng. Sci. Technol. Rev. 2018, 11, 18–23.
  22. Bai, Y.B.; Adriano, B.; Mas, E.; Koshimura, S. Machine Learning Based Building Damage Mapping from the ALOS-2/PALSAR-2 SAR Imagery: Case Study of 2016 Kumamoto Earthquake. J. Disaster Res. 2017, 12, 646–655.
  23. Natsuaki, R.; Nagai, H.; Tomii, N.; Tadono, T. Sensitivity and Limitation in Damage Detection for Individual Buildings Using InSAR Coherence—A Case Study in 2016 Kumamoto Earthquakes. Remote Sens. 2018, 10, 245.
  24. Ge, P.L.; Gokon, H.; Meguro, K. Building Damage Assessment Using Intensity SAR Data with Different Incidence Angles and Longtime Interval. J. Disaster Res. 2019, 14, 456–465.
  25. Saha, S.; Bovolo, F.; Bruzzone, L. Building Change Detection in VHR SAR Images via Unsupervised Deep Transcoding. IEEE Trans. Geosci. Remote Sens. 2020, 1–13.
  26. Liu, W.; Yamazaki, F.; Gokon, H.; Koshimura, S.I. Extraction of Tsunami-Flooded Areas and Damaged Buildings in the 2011 Tohoku-oki Earthquake from TerraSAR-X Intensity Images. Earthq. Spectra 2013, 29, S183–S200.
  27. Saha, S.; Bovolo, F.; Bruzzone, L. Destroyed-Buildings Detection from VHR SAR Images Using Deep Features. In Proceedings of the Image and Signal Processing for Remote Sensing XXIV, Berlin, Germany, 10–12 September 2018.
  28. Guo, H.D. Study of Detecting Method with Advanced Airborne and Spaceborne Synthetic Aperture Radar Data for Collapsed Urban Buildings from the Wenchuan Earthquake. J. Appl. Remote Sens. 2009, 3, 031695.
  29. Li, X.W.; Guo, H.D.; Zhang, L.; Chen, X.; Liang, L. A New Approach to Collapsed Building Extraction Using RADARSAT-2 Polarimetric SAR Imagery. IEEE Geosci. Remote Sens. Lett. 2012, 9, 677–681.
  30. Zhao, L.L.; Yang, J.; Li, P.X.; Zhang, L.P.; Shi, L.; Lang, F.K. Damage Assessment in Urban Areas Using Post-Earthquake Airborne PolSAR Imagery. Int. J. Remote Sens. 2013, 34, 8952–8966.
  31. Shi, L.; Sun, W.D.; Yang, J.; Li, P.X.; Lu, L.J. Building Collapse Assessment by the Use of Postearthquake Chinese VHR Airborne SAR. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2021–2025.
  32. Sun, W.D.; Shi, L.; Yang, J.; Li, P.X. Building Collapse Assessment in Urban Areas Using Texture Information from Postevent SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3792–3808.
  33. Bai, Y.B.; Adriano, B.; Mas, E.; Koshimura, S. Building Damage Assessment in the 2015 Gorkha, Nepal, Earthquake Using Only Post-Event Dual Polarization Synthetic Aperture Radar Imagery. Earthq. Spectra 2017, 33, S185–S195.
  34. Chen, Q.H.; Nie, Y.L.; Li, L.L.; Liu, X.G. Buildings Damage Assessment Using Texture Features of Polarization Decomposition Components. J. Remote Sens. 2017, 21, 955–965. (In Chinese)
  35. Li, L.L.; Liu, X.G.; Chen, Q.H.; Yang, S. Building Damage Assessment from PolSAR Data Using Texture Parameters of Statistical Model. Comput. Geosci. 2018, 113, 115–126.
  36. Gokon, H.; Koshimura, S. Mapping of Building Damage of the 2011 Tohoku Earthquake Tsunami in Miyagi Prefecture. Coast. Eng. J. 2012, 54, 1250006-1–1250006-12.
  37. Okada, S.; Takai, N. Classifications of Structural Types and Damage Patterns of Buildings for Earthquake Field Investigation. J. Struct. Constr. Eng. (Trans. AIJ) 1999, 64, 65–72.
  38. Quick Report of the Field Survey on the Building Damage by the 2016 Kumamoto Earthquake; Technical Note No. 929. Available online: http://www.nilim.go.jp/lab/bcg/siryou/tnn/tnn0929.htm (accessed on 30 September 2019). (In Japanese)
  39. Yamaguchi, Y.; Sato, A.; Boerner, W.; Sato, R.; Yamada, H. Four-Component Scattering Power Decomposition with Rotation of Coherency Matrix. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2251–2258.
  40. Lee, J.S.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications; CRC Press: Boca Raton, FL, USA, 2009; pp. 214–215.
  41. Tong, S.W.; Liu, X.G.; Chen, Q.H.; Zhang, Z.J.; Xie, G.Q. Multi-Feature Based Ocean Oil Spill Detection for Polarimetric SAR Data Using Random Forest and the Self-Similarity Parameter. Remote Sens. 2019, 11, 451.
  42. Van Zyl, J.J. Application of Cloude's Target Decomposition Theorem to Polarimetric Imaging Radar Data. In Radar Polarimetry; SPIE: San Diego, CA, USA, 1992; pp. 184–191.
  43. Cloude, S.R.; Pottier, E. A Review of Target Decomposition Theorems in Radar Polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518.
  44. Refregier, P.; Morio, J. Shannon Entropy of Partially Polarized and Partially Coherent Light with Gaussian Fluctuations. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2006, 23, 3036–3044.
  45. Morio, J.; Refregier, P.; Goudail, F.; Dubois-Fernandez, P.C.; Dupuis, X. Information Theory-Based Approach for Contrast Analysis in Polarimetric and/or Interferometric SAR Images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2185–2196.
  46. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
  47. Houborg, R.; McCabe, M.F. A Hybrid Training Approach for Leaf Area Index Estimation via Cubist and Random Forests Machine-Learning. ISPRS J. Photogramm. Remote Sens. 2018, 135, 173–188.
  48. Yang, J.; Dong, G.; Peng, Y.; Yamaguchi, Y.; Yamada, H. Generalized Optimization of Polarimetric Contrast Enhancement. IEEE Geosci. Remote Sens. Lett. 2004, 1, 171–174.
  49. Ioannidis, G.A. Optimum Antenna Polarizations for Target Discrimination in Clutter. IEEE Trans. Antennas Propag. 1979, 27, 357–363.
  50. Kostinski, A.B.; Boerner, W.M. On the Polarimetric Contrast Optimization. IEEE Trans. Antennas Propag. 1987, 35, 988–991.
  51. Swartz, A.A.; Yueh, H.A.; Kong, J.A.; Novak, L.M.; Shin, R.T. Optimal Polarizations for Achieving Maximum Contrast in Radar Images. J. Geophys. Res. Solid Earth 1988, 93, 15252–15260.
  52. Mott, H.; Boerner, W.M. Polarimetric Contrast Enhancement Coefficients for Perfecting High-Resolution POL-SAR/SAL Image Feature Extraction. In Wideband Interferometric Sensing and Imaging Polarimetry; SPIE: San Diego, CA, USA, 1997; pp. 106–117.
  53. Yang, J.; Yamaguchi, Y.; Boerner, W.M.; Lin, S. Numerical Methods for Solving the Optimal Problem of Contrast Enhancement. IEEE Trans. Geosci. Remote Sens. 2000, 38, 965–971.
  54. Yin, J.J.; Zhou, Z.S.; Wooil, M.M.; Jin, R.; Caccetta, P.A. The Use of a Modified GOPCE Method for Forest and Nonforest Discrimination. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1076–1080.
  55. Yang, J.; Cui, Y. A Novel Method for Ship Detection in Polarimetric SAR Images Using GOPCE. In Proceedings of the IET International Radar Conference 2009, Guilin, China, 20–22 April 2009.
  56. Zhang, H.Z.; Wang, Q.; Zeng, Q.M.; Jiao, J. A Novel Approach to Building Collapse Detection from Post-Seismic Polarimetric SAR Imagery by Using Optimization of Polarimetric Contrast Enhancement. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS 2015), Milan, Italy, 26–31 July 2015; pp. 3270–3273.
  57. Dabboor, M.; Howell, S.; Shokr, M.; Yackel, J. The Jeffries–Matusita Distance for the Case of Complex Wishart Distribution as a Separability Criterion for Fully Polarimetric SAR Data. Int. J. Remote Sens. 2014, 35, 6859–6873.
  58. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
  59. Gong, L.; Wang, C.; Wu, F.; Zhang, J.; Zhang, H.; Li, Q. Earthquake-Induced Building Damage Detection with Post-Event Sub-Meter VHR TerraSAR-X Staring Spotlight Imagery. Remote Sens. 2016, 8, 887.
  60. Wu, F.; Gong, L.; Wang, C.; Zhang, H.; Zhang, B.; Xie, L. Signature Analysis of Building Damage with TerraSAR-X New Staring SpotLight Mode Data. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1696–1700.
  61. Van der Linden, S.; Rabe, A.; Held, M.; Jakimow, B.; Leitao, P.J.; Okujeni, A.; Schwieder, M.; Suess, S.; Hostert, P. The EnMAP-Box: A Toolbox and Application Programming Interface for EnMAP Data Processing. Remote Sens. 2015, 7, 11249–11266.
Figure 1. The basic information of the Radarsat-2 fully polarimetric synthetic aperture radar (PolSAR) data in the Yushu study site: (a) Google Earth image, showing the coverage of the PolSAR data (red rectangle) and the location of Yushu County (yellow rectangle); (b) the Pauli RGB image of the PolSAR data.
Figure 2. The block-level reference map of building damage in the Yushu study site.
Figure 3. The Pauli RGB image of the ALOS-1 PolSAR data in Ishinomaki city.
Figure 4. The ground-truth map interpreted by Tohoku University and the University of Tokyo [36].
Figure 5. The block-level reference map of building damage in the Ishinomaki study site.
Figure 6. The Pauli RGB image of the ALOS-2 PolSAR data in Mashiki town.
Figure 7. The three-grade grid-level reference map of building damage in Mashiki town.
Figure 8. Flowchart of the proposed building damage detection method. PolSAR, polarimetric synthetic aperture radar; OPCE, optimization of polarimetric contrast enhancement; GLCM, gray level co-occurrence matrix; RVI, radar vegetation index; SEI, intensity component of the Shannon entropy.
Figure 9. Scatter diagrams of the π/4 double-bounce scattering component of the Pauli decomposition for the non-building samples and built-up area samples in the Yushu study site.
Figure 10. Built-up area samples (red) and non-building area samples (water samples, blue; road samples, purple; bare soil samples, brown; mountain vegetation samples, green; farmland samples, orange) in different study sites: (a) Ishinomaki study site; (b) Mashiki town study site.
Figure 11. The probability density function (pdf) of the Pauli π/4 feature of six kinds of samples in different study sites: (a) Ishinomaki study site; (b) Mashiki town study site.
Figure 12. The pdf of the radar vegetation index (RVI) of mountain vegetation, farmland, and built-up area samples in different study sites: (a) Ishinomaki study site; (b) Mashiki town study site.
Figure 13. The pdf of the intensity component of the Shannon entropy (SEI) of farmland and built-up area samples in different study sites: (a) Ishinomaki study site; (b) Mashiki town study site.
Figure 14. Optical images (data source: Google Earth) of: (a) collapsed buildings; (b) orthogonally-oriented buildings; (c) obliquely-oriented buildings.
Figure 15. The pdfs of the circular polarization correlation coefficient ρ feature: (a) pdfs of collapsed buildings and orthogonally-oriented buildings; (b) pdfs of collapsed buildings and obliquely-oriented buildings.
Figure 16. The pdfs of the double-bounce scattering component of the Yamaguchi decomposition (P_d) feature: (a) pdfs of collapsed buildings and orthogonally-oriented buildings; (b) pdfs of collapsed buildings and obliquely-oriented buildings.
Figure 17. The pdfs of the total power (Span) feature: (a) pdfs of collapsed buildings and orthogonally-oriented buildings; (b) pdfs of collapsed buildings and obliquely-oriented buildings.
Figure 18. The optical images (data source: Google Earth) of the special seriously damaged areas with a few typical orthogonally-oriented buildings (red circles).
Figure 19. The schematic diagram of the OPCE matching algorithm.
Figure 20. The results of the MaxC feature and the samples in three study sites: (a) the samples in the Yushu study site; (b) the result of the MaxC feature in the Yushu study site; (c) the samples in the Ishinomaki study site; (d) the result of the MaxC feature in the Ishinomaki study site; (e) the samples in the Mashiki town study site; (f) the result of the MaxC feature in the Mashiki town study site. In subfigures (a), (c), and (e), yellow rectangles show the target sample sets used for the OPCE matching algorithm; red patches show the collapsed building samples, green patches the obliquely-oriented building samples, and blue patches the orthogonally-oriented building samples used to draw the corresponding pdfs in Figures 15–17 and Figures 21–23.
Figure 21. The pdfs of the MaxC feature in the Yushu study site: (a) pdfs of collapsed buildings and orthogonally-oriented buildings; (b) pdfs of collapsed buildings and obliquely-oriented buildings.
Figure 22. The pdfs of the MaxC feature in the Ishinomaki study site: (a) pdfs of collapsed buildings and orthogonally-oriented buildings; (b) pdfs of collapsed buildings and obliquely-oriented buildings.
Figure 23. The pdfs of the MaxC feature in the Mashiki town study site: (a) pdfs of collapsed buildings and orthogonally-oriented buildings; (b) pdfs of collapsed buildings and obliquely-oriented buildings.
Figure 24. The classification results of collapsed and standing buildings in the Yushu study site: (a) using only the MaxC feature; (b) using the MaxC feature and eight GLCM texture features together.
Figure 25. The results of non-building area removal in Yushu with different methods: (a) the H-α method; (b) the H-α-Wishart classification method; (c) the proposed method.
Figure 26. The results of non-building area removal in Ishinomaki with different methods: (a) the H-α method; (b) the H-α-Wishart classification method; (c) the proposed method.
Figure 27. The results of non-building area removal in Mashiki town with different methods: (a) the H-α method; (b) the H-α-Wishart classification method; (c) the proposed method.
Figure 28. The detection results of the proposed method in different study sites: (a) the detection result of the Radarsat-2 PolSAR data in the Yushu study site; (b) the detection result of the ALOS-1 PolSAR data in the Ishinomaki study site; (c) the detection result of the ALOS-2 PolSAR data in the Mashiki town study site.
Figure 29. Block-level reference map and damage maps in Yushu: (a) reference map; (b) damage map from the H-α-ρ method; (c) damage map from the PWMF method; (d) damage map from the proposed method.
Figure 30. Block-level reference map and damage maps in Ishinomaki: (a) reference map; (b) damage map from the H-α-ρ method; (c) damage map from the PWMF method; (d) damage map from the proposed method.
Figure 31. Grid-level reference map and damage maps in Mashiki town: (a) reference map; (b) damage map from the H-α-ρ method; (c) damage map from the PWMF method; (d) damage map from the proposed method.
Figure 32. Overall accuracy (%) of building damage detection when using different window sizes to calculate texture features: (a) the results in the Yushu study site; (b) the results in the Ishinomaki study site; (c) the results in the Mashiki town study site.
Figure 33. Overall accuracy (%) of building damage detection when using different directions to calculate texture features: (a) the results in the Yushu study site; (b) the results in the Ishinomaki study site; (c) the results in the Mashiki town study site.
Figure 34. Diagram of two evaluation methods: (a) block-count-based evaluation method; (b) pixel-count-based evaluation method.
Table 1. Jeffreys–Matusita (J–M) distance between obliquely-oriented buildings and collapsed buildings in different features in three study sites.

Study Site              | ρ     | P_d   | Span  | MaxC
Yushu study site        | 0.034 | 0.029 | 0.253 | 1.088
Ishinomaki study site   | 0.266 | 0.154 | 0.156 | 0.963
Mashiki town study site | 0.009 | 0.103 | 0.057 | 0.736
Table 2. J–M distance between affected collapsed buildings and standing buildings in different polarization features.

Study Site       | ρ     | P_d   | Span  | MaxC
Yushu study site | 0.199 | 0.159 | 0.260 | 0.211
Table 3. J–M distance between affected collapsed buildings and standing buildings in different GLCM texture features.

Study Site       | Span  | MaxC  | Mean  | Var 1 | Hom 2 | Con 3 | Dis 4 | Entr 5 | SeM 6 | Cor 7
Yushu study site | 0.260 | 0.211 | 0.638 | 0.507 | 0.550 | 0.486 | 0.537 | 0.640  | 0.681 | 0.467

1 Var, variance; 2 Hom, homogeneity; 3 Con, contrast; 4 Dis, dissimilarity; 5 Entr, entropy; 6 SeM, second moment; 7 Cor, correlation.
Table 4. Confusion matrix of built-up and non-building area classification in the three study sites with different methods (for each method, the columns give the number of pixels classified as non-building area and built-up area).

Yushu study site:
Ground Truth      | H-α Method          | H-α-Wishart Method  | Proposed Method
                  | Non-Bldg | Built-Up | Non-Bldg | Built-Up | Non-Bldg | Built-Up
Non-building area | 91       | 209      | 184      | 116      | 264      | 36
Built-up area     | 16       | 284      | 8        | 292      | 11       | 289
Prod. accu. 2     | 30.0%    | 94.7%    | 61.3%    | 97.3%    | 88.0%    | 96.3%
OA 1              | 62.5%               | 79.3%               | 92.2%

Ishinomaki study site:
Ground Truth      | H-α Method          | H-α-Wishart Method  | Proposed Method
                  | Non-Bldg | Built-Up | Non-Bldg | Built-Up | Non-Bldg | Built-Up
Non-building area | 508      | 492      | 828      | 172      | 848      | 152
Built-up area     | 59       | 941      | 121      | 879      | 32       | 968
Prod. accu.       | 50.8%    | 94.1%    | 82.8%    | 87.9%    | 84.8%    | 96.8%
OA                | 72.5%               | 85.4%               | 90.8%

Mashiki town study site:
Ground Truth      | H-α Method          | H-α-Wishart Method  | Proposed Method
                  | Non-Bldg | Built-Up | Non-Bldg | Built-Up | Non-Bldg | Built-Up
Non-building area | 52       | 448      | 461      | 39       | 499      | 1
Built-up area     | 14       | 486      | 30       | 470      | 19       | 481
Prod. accu.       | 10.4%    | 97.2%    | 92.2%    | 94.0%    | 99.8%    | 96.2%
OA                | 53.8%               | 93.1%               | 98.0%

1 OA, overall accuracy; 2 Prod. accu., producer's accuracy.
Table 5. Error rates of different methods in three study sites (for each method, the columns give the number of collapsed-building pixels classified as non-building area and built-up area).

Ground Truth                | H-α Method          | H-α-Wishart Method  | Proposed Method
                            | Non-Bldg | Built-Up | Non-Bldg | Built-Up | Non-Bldg | Built-Up
Yushu: collapsed buildings  | 10       | 290      | 16       | 284      | 5        | 295
  Error rate                | 3.4%                | 5.3%                | 1.7%
Ishinomaki: collapsed bldgs | 44       | 456      | 98       | 402      | 6        | 494
  Error rate                | 8.8%                | 19.6%               | 1.2%
Mashiki: collapsed bldgs    | 22       | 378      | 42       | 358      | 14       | 386
  Error rate                | 5.5%                | 10.5%               | 3.5%
Table 6. The number of samples for multi-feature-based collapsed and standing building classification in three study sites.

Study Sites | Collapsed Building Samples (Pixels) | Standing Building Samples (Pixels) | Total (Pixels)
Yushu       | 741                                 | 648                                | 1389
Ishinomaki  | 620                                 | 924                                | 1544
Mashiki     | 537                                 | 560                                | 1097
Table 7. Accuracy evaluation results of damage maps from different methods in the Yushu study site.

Method          | Slight Damage (%) | Moderate Damage (%) | Serious Damage (%) | OA 1 (%)
H-α-ρ method    | 20.3              | 28.4                | 86.0               | 49.0
PWMF method     | 66.3              | 14.0                | 76.8               | 57.3
Proposed method | 86.0              | 56.2                | 97.2               | 82.3

Columns 2–4 give the detection rates of the different damage levels. 1 OA, overall accuracy of the building damage detection.
Table 8. Accuracy evaluation results of damage maps from different methods in Ishinomaki.

Method          | Slight Damage (%) | Moderate Damage (%) | Serious Damage (%) | OA 1 (%)
H-α-ρ method    | 61.4              | 0.0                 | 92.2               | 63.2
PWMF method     | 64.6              | 10.3                | 54.3               | 62.7
Proposed method | 100.0             | 26.0                | 86.3               | 97.4

Columns 2–4 give the detection rates of the different damage levels. 1 OA, overall accuracy of the building damage detection.
Table 9. Accuracy evaluation results of damage maps from different methods in Mashiki town.

Method          | Slight Damage (%) | Moderate Damage (%) | Serious Damage (%) | OA 1 (%)
H-α-ρ method    | 78.0              | 18.8                | 26.8               | 63.8
PWMF method     | 73.9              | 15.3                | 22.9               | 59.8
Proposed method | 88.3              | 35.7                | 64.8               | 78.5

Columns 2–4 give the detection rates of the different damage levels. 1 OA, overall accuracy of the building damage detection.
Table 10. The comparison of two evaluation methods in three study sites (rows: reference; columns: experimental results).

Yushu study site:
Reference | Block-Count-Based Evaluation | Pixel-Count-Based Evaluation
          | Slight | Moderate | Serious  | Slight | Moderate | Serious
Slight    | 28     | 2        | 0        | 9330   | 266      | 0
Moderate  | 11     | 9        | 4        | 2116   | 4006     | 1002
Serious   | 0      | 6        | 22       | 0      | 1588     | 9778
OA        | 72.0%                        | 82.3%

Ishinomaki study site:
Reference | Block-Count-Based Evaluation | Pixel-Count-Based Evaluation
          | Slight | Moderate | Serious  | Slight | Moderate | Serious
Slight    | 43     | 0        | 0        | 41,666 | 10       | 0
Moderate  | 4      | 1        | 0        | 6205   | 2183     | 0
Serious   | 3      | 2        | 6        | 1265   | 4845     | 38,465
OA        | 84.7%                        | 97.4%

Mashiki town study site:
Reference | Block-Count-Based Evaluation | Pixel-Count-Based Evaluation
          | Slight | Moderate | Serious  | Slight | Moderate | Serious
Slight    | 272    | 31       | 5        | 9259   | 1064     | 168
Moderate  | 13     | 19       | 22       | 442    | 651      | 733
Serious   | 8      | 10       | 34       | 285    | 352      | 1173
OA        | 78.5%                        | 78.5%

OA, overall accuracy.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
