Figure 1.
xBD images and labels of undamaged buildings, with ground-truth labels shown in red and predicted labels in green. (a) Building labels in the predisaster image. (b) Identification results for the predisaster image. (c) Labels of undamaged buildings in the postdisaster image. (d) Identification results for the postdisaster image.
Figure 2.
Distribution of the disaster locations.
Figure 3.
Subfigures (a–c) show the predisaster images from Palu-TM, Santarosa-WF and Joplin-TD, respectively; subfigures (d–f) show the corresponding postdisaster images.
Figure 4.
The different levels of damage. The images in the first, second, and third rows are from Moore-TD, Santarosa-WF and Harvey-HC, respectively.
Figure 5.
Subfigures (a–c) show the postdisaster images from Palu-TM, Santarosa-WF and Joplin-TD, respectively; subfigures (d–f) show the corresponding labels. Green indicates no damage; yellow, minor damage; orange, major damage; and red, destroyed.
Figure 6.
Examples of problems in the xBD dataset.
Figure 7.
Postdisaster images of parts of the areas impacted by Moore-TD, Mexico-EQ and Nepal-FD.
Figure 8.
The architecture of U-NASNetMobile.
Figure 9.
The IoU under different evaluation modes over 14 disaster events.
Figure 10.
Subfigures (a,b) show the ground truth and the identification results, respectively.
Figure 11.
Average IoU for each disaster event.
Figure 12.
The three transfer learning methods used in this paper.
Figure 13.
The increasing rate of the IoU for each disaster.
Figure 14.
Subfigures (a–c) show the ground truth, the result before fine-tuning, and the result after fine-tuning, respectively.
Figure 15.
Structure of the CycleGAN. The two one-way GANs share the two generators, and each has its own discriminator.
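The cycle-consistency constraint implied by this structure can be illustrated with a toy sketch: one generator maps domain X to domain Y, the other maps back, and the cycle loss penalizes any failure to reconstruct the input. The linear maps below merely stand in for the real convolutional generators; their coefficients are illustrative assumptions, not part of the paper.

```python
# Toy illustration of CycleGAN's cycle-consistency loss (cf. Figure 15).
# G and F stand in for the two shared generators; here F exactly inverts G,
# so the cycle loss is zero. Real generators only approximate this.

def G(x):
    # "Generator" X -> Y (e.g. source-domain image -> target style).
    return 2.0 * x + 1.0

def F(y):
    # "Generator" Y -> X (the inverse direction).
    return (y - 1.0) / 2.0

def cycle_loss(xs):
    """Mean L1 cycle-consistency: F(G(x)) should reconstruct x."""
    return sum(abs(F(G(x)) - x) for x in xs) / len(xs)

print(cycle_loss([0.0, 1.0, 2.5]))  # 0.0 -- F exactly inverts G here
```

In the full model this term is added to the two adversarial losses from the discriminators; it is what lets the translation be learned without paired images.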
Figure 16.
The increasing rate of the IoU for each disaster.
Figure 17.
Harvey-HC images and identification results before and after image translation. (a) Image before translation, (b) Image after translation, (c) Image from DREAM-B, (d) Labels, (e) Identification results before translation, and (f) Identification results after translation.
Figure 18.
The structure of domain adversarial training.
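A common way to realise the adversarial signal in a structure like Figure 18 is a gradient reversal layer (GRL); the sketch below assumes that mechanism (the caption does not restate the paper's exact implementation, and the lambda value is an illustrative assumption). In the forward pass the GRL is the identity; in the backward pass it negates and scales the gradient from the domain classifier, pushing the shared feature extractor toward domain-confusing features.

```python
# Minimal sketch of a gradient reversal layer (GRL) for domain adversarial
# training. Forward: identity. Backward: multiply the gradient flowing back
# from the domain classifier by -LAMBDA before it reaches the extractor.

LAMBDA = 1.0  # illustrative trade-off weight

def grl_forward(features):
    # Identity: the domain classifier sees the features unchanged.
    return features

def grl_backward(grad_from_domain_classifier):
    # Reversal: the feature extractor receives the negated, scaled gradient.
    return [-LAMBDA * g for g in grad_from_domain_classifier]

print(grl_backward([0.1, 0.2, -0.3]))  # [-0.1, -0.2, 0.3]
```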
Figure 19.
The increasing rate of the IoU for each disaster.
Figure 20.
Subfigures (a–c) show the ground truth, the result before domain adversarial training, and the result after domain adversarial training, respectively.
Table 1.
Details of the disasters included in this research.
Disaster Type | Disaster Location | Region | Event Dates |
---|---|---|---
Tsunami (TM) | Palu | Asia | 18 September 2018 |
Earthquake (EQ) | Mexico | America | 19 September 2017 |
Flood (FD) | Nepal | Asia | July–September 2017 |
| Midwest of USA | America | 3 January–31 May 2019 |
Wildfire (WF) | Portugal | Europe | 7–24 June 2017 |
| Socal | America | 23 July–30 August 2018 |
| Santarosa | | 8–31 October 2017 |
| Woolsey | | 9–28 November 2018 |
Hurricane (HC) | Harvey | America | 17 August–2 September 2017 |
| Florence | | 10–19 September 2018 |
| Michael | | 7–16 October 2018 |
Tornado (TD) | Joplin | | 22 May 2011 |
| Tuscaloosa | | 27 April 2011 |
| Moore | | 20 May 2013 |
Table 2.
Joint damage scale descriptions on a four-level granularity scheme.
Disaster Level | Structure Description |
---|---
0 (No Damage) | Undisturbed. No sign of water, structural or shingle damage, or burn marks. |
1 (Minor Damage) | Building partially burnt, water surrounding structure, volcanic flow nearby, roof elements missing, or visible cracks. |
2 (Major Damage) | Partial wall or roof collapse, encroaching volcanic flow, or surrounded by water/mud. |
3 (Destroyed) | Scorched, completely collapsed, partially/completely covered with water/mud, or otherwise no longer present. |
Table 3.
Accuracy evaluation of building identification results.
Disaster Name | Recall | Precision | IoU | Kappa | Missed Detection Rate | False Detection Rate |
---|---|---|---|---|---|---
Florence-HC | 0.210 | 0.689 | 0.189 | 0.283 | 70.54% | 17.89% |
Harvey-HC | 0.121 | 0.587 | 0.107 | 0.149 | 80.49% | 25.33% |
Michael-HC | 0.247 | 0.697 | 0.218 | 0.317 | 62.30% | 14.81% |
Mexico-EQ | 0.160 | 0.729 | 0.148 | 0.176 | 66.52% | 10.37% |
Midwest-FD | 0.307 | 0.726 | 0.284 | 0.393 | 60.87% | 5.25% |
Palu-TM | 0.197 | 0.593 | 0.171 | 0.224 | 51.42% | 11.23% |
Santarosa-WF | 0.259 | 0.522 | 0.216 | 0.286 | 37.04% | 9.50% |
Socal-WF | 0.171 | 0.593 | 0.159 | 0.239 | 59.07% | 6.97% |
Joplin-TD | 0.350 | 0.736 | 0.310 | 0.416 | 46.77% | 10.97% |
Moore-TD | 0.625 | 0.852 | 0.556 | 0.666 | 28.15% | 4.48% |
Nepal-FD | 0.125 | 0.646 | 0.114 | 0.179 | 78.27% | 12.29% |
Portugal-WF | 0.197 | 0.904 | 0.193 | 0.292 | 72.46% | 7.65% |
Tuscaloosa-TD | 0.499 | 0.779 | 0.434 | 0.568 | 39.10% | 12.17% |
Woolsey-WF | 0.470 | 0.831 | 0.425 | 0.567 | 36.36% | 9.10% |
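For reference, the first four score columns of Table 3 can be reproduced from pixel-level confusion counts. The sketch below assumes the standard definitions of recall, precision, IoU and Cohen's kappa; the counts are illustrative, and the missed/false detection rates (likely object-level quantities) are deliberately omitted because their exact definitions are not restated here.

```python
# Hedged sketch: pixel-level metrics of the kind reported in Table 3,
# computed from true/false positive and negative pixel counts.

def pixel_metrics(tp, fp, fn, tn):
    """Return (recall, precision, IoU, Cohen's kappa) from pixel counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    iou = tp / (tp + fp + fn)          # intersection over union
    total = tp + fp + fn + tn
    po = (tp + tn) / total             # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (po - pe) / (1 - pe)       # agreement corrected for chance
    return recall, precision, iou, kappa

r, p, i, k = pixel_metrics(tp=60, fp=10, fn=30, tn=900)
print(round(i, 3))  # IoU = 60 / (60 + 10 + 30) = 0.6
```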
Table 4.
Accuracy evaluation of building identification results before and after fine-tuning.
Disaster Name | Stage | Recall | Precision | IoU | Kappa | Missed Detection Rate | False Detection Rate
---|---|---|---|---|---|---|---
Florence-HC | before | 0.211 | 0.712 | 0.189 | 0.284 | 70.22% | 11.93%
 | after | 0.256 | 0.683 | 0.223 | 0.330 | 61.34% | 16.17%
Harvey-HC | before | 0.132 | 0.662 | 0.119 | 0.166 | 82.10% | 15.60%
 | after | 0.270 | 0.671 | 0.229 | 0.312 | 48.29% | 21.73%
Michael-HC | before | 0.250 | 0.684 | 0.220 | 0.320 | 62.33% | 14.91%
 | after | 0.391 | 0.648 | 0.321 | 0.446 | 41.18% | 22.21%
Mexico-EQ | before | 0.116 | 0.636 | 0.108 | 0.128 | 73.90% | 10.20%
 | after | 0.225 | 0.552 | 0.183 | 0.198 | 52.84% | 35.77%
Midwest-FD | before | 0.317 | 0.722 | 0.292 | 0.405 | 61.28% | 5.53%
 | after | 0.547 | 0.645 | 0.424 | 0.557 | 35.26% | 14.91%
Palu-TM | before | 0.176 | 0.575 | 0.154 | 0.203 | 51.72% | 11.31%
 | after | 0.405 | 0.535 | 0.303 | 0.391 | 27.49% | 18.64%
Santarosa-WF | before | 0.219 | 0.520 | 0.190 | 0.255 | 47.10% | 10.15%
 | after | 0.261 | 0.550 | 0.222 | 0.298 | 40.83% | 12.26%
Socal-WF | before | 0.158 | 0.580 | 0.148 | 0.225 | 67.07% | 8.66%
 | after | 0.343 | 0.530 | 0.275 | 0.381 | 50.26% | 15.76%
Joplin-TD | before | 0.328 | 0.731 | 0.295 | 0.400 | 49.63% | 11.58%
 | after | 0.345 | 0.745 | 0.321 | 0.435 | 36.35% | 15.95%
Moore-TD | before | 0.626 | 0.859 | 0.560 | 0.671 | 28.54% | 4.23%
 | after | 0.686 | 0.868 | 0.618 | 0.724 | 23.52% | 6.27%
Nepal-FD | before | 0.126 | 0.635 | 0.114 | 0.178 | 77.14% | 12.51%
 | after | 0.250 | 0.576 | 0.204 | 0.301 | 58.77% | 17.27%
Portugal-WF | before | 0.189 | 0.892 | 0.185 | 0.280 | 73.71% | 7.99%
 | after | 0.383 | 0.805 | 0.354 | 0.487 | 48.53% | 12.86%
Tuscaloosa-TD | before | 0.502 | 0.780 | 0.438 | 0.572 | 38.61% | 12.18%
 | after | 0.621 | 0.737 | 0.510 | 0.642 | 24.39% | 17.94%
Woolsey-WF | before | 0.487 | 0.823 | 0.437 | 0.579 | 34.51% | 9.53%
 | after | 0.672 | 0.705 | 0.517 | 0.650 | 19.88% | 24.68%
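Table 4 compares results before and after fine-tuning on target-disaster samples. As a minimal sketch of the general mechanism (not the paper's exact training configuration), the step below updates only layers marked trainable and leaves frozen pretrained layers untouched; the layer names, learning rate and gradients are illustrative assumptions.

```python
# Hedged sketch of a fine-tuning update: freeze the pretrained encoder,
# train only the decoder. Weights are plain floats for illustration.

def sgd_finetune(params, grads, trainable, lr=0.1):
    """One SGD step applied only to layers whose trainable flag is True."""
    return {
        name: (w - lr * grads[name]) if trainable[name] else w
        for name, w in params.items()
    }

params    = {"encoder.conv1": 1.0, "decoder.conv1": 1.0}
grads     = {"encoder.conv1": 0.5, "decoder.conv1": 0.5}
trainable = {"encoder.conv1": False, "decoder.conv1": True}

updated = sgd_finetune(params, grads, trainable)
print(updated)  # encoder weight stays at 1.0; decoder weight is updated
```

In practice the same idea is expressed by setting per-layer `requires_grad`-style flags in a deep learning framework; freezing the encoder preserves the generic features learned from the source disasters.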
Table 5.
Accuracy evaluation of the building identification results before and after CycleGAN translation.
Disaster Name | Stage | Recall | Precision | IoU | Kappa | Missed Detection Rate | False Detection Rate
---|---|---|---|---|---|---|---
Florence-HC | before | 0.210 | 0.689 | 0.189 | 0.283 | 70.54% | 17.89%
 | after | 0.303 | 0.629 | 0.257 | 0.369 | 60.10% | 24.09%
Harvey-HC | before | 0.121 | 0.587 | 0.107 | 0.149 | 80.49% | 25.33%
 | after | 0.309 | 0.591 | 0.254 | 0.322 | 42.45% | 31.87%
Michael-HC | before | 0.247 | 0.697 | 0.218 | 0.317 | 62.30% | 14.81%
 | after | 0.435 | 0.656 | 0.350 | 0.475 | 38.58% | 19.43%
Mexico-EQ | before | 0.160 | 0.729 | 0.148 | 0.176 | 66.52% | 10.37%
 | after | 0.282 | 0.706 | 0.246 | 0.284 | 49.59% | 11.62%
Midwest-FD | before | 0.307 | 0.726 | 0.284 | 0.393 | 60.87% | 5.25%
 | after | 0.401 | 0.680 | 0.351 | 0.474 | 46.86% | 8.08%
Palu-TM | before | 0.197 | 0.593 | 0.171 | 0.224 | 51.42% | 11.23%
 | after | 0.218 | 0.551 | 0.185 | 0.241 | 46.89% | 10.73%
Santarosa-WF | before | 0.259 | 0.522 | 0.216 | 0.286 | 37.04% | 9.50%
 | after | 0.421 | 0.550 | 0.329 | 0.428 | 24.96% | 13.63%
Socal-WF | before | 0.171 | 0.593 | 0.159 | 0.239 | 59.07% | 6.97%
 | after | 0.172 | 0.557 | 0.161 | 0.234 | 58.67% | 7.02%
Joplin-TD | before | 0.350 | 0.736 | 0.310 | 0.416 | 46.77% | 10.97%
 | after | 0.361 | 0.745 | 0.328 | 0.441 | 52.77% | 8.73%
Moore-TD | before | 0.625 | 0.852 | 0.556 | 0.666 | 28.15% | 4.48%
 | after | 0.705 | 0.798 | 0.593 | 0.701 | 23.99% | 5.87%
Nepal-FD | before | 0.125 | 0.646 | 0.114 | 0.179 | 78.27% | 12.29%
 | after | 0.059 | 0.559 | 0.054 | 0.085 | 91.00% | 12.88%
Portugal-WF | before | 0.197 | 0.904 | 0.193 | 0.292 | 72.46% | 7.65%
 | after | 0.270 | 0.780 | 0.250 | 0.353 | 67.84% | 8.06%
Tuscaloosa-TD | before | 0.499 | 0.779 | 0.434 | 0.568 | 39.10% | 12.17%
 | after | 0.550 | 0.762 | 0.467 | 0.597 | 40.11% | 11.92%
Woolsey-WF | before | 0.470 | 0.831 | 0.425 | 0.567 | 36.36% | 9.10%
 | after | 0.585 | 0.761 | 0.494 | 0.635 | 29.21% | 12.81%
Table 6.
Accuracy evaluation of building identification results before and after domain adversarial training.
Disaster Name | Stage | Recall | Precision | IoU | Kappa | Missed Detection Rate | False Detection Rate
---|---|---|---|---|---|---|---
Florence-HC | before | 0.217 | 0.721 | 0.195 | 0.292 | 70.06% | 19.19%
 | after | 0.606 | 0.469 | 0.338 | 0.452 | 36.68% | 71.89%
Harvey-HC | before | 0.131 | 0.589 | 0.114 | 0.157 | 78.00% | 25.16%
 | after | 0.452 | 0.546 | 0.331 | 0.413 | 32.66% | 42.70%
Michael-HC | before | 0.265 | 0.680 | 0.232 | 0.334 | 60.15% | 15.76%
 | after | 0.853 | 0.324 | 0.304 | 0.400 | 7.12% | 62.35%
Mexico-EQ | before | 0.157 | 0.724 | 0.145 | 0.173 | 67.12% | 11.57%
 | after | 0.494 | 0.690 | 0.401 | 0.450 | 32.84% | 11.29%
Midwest-FD | before | 0.313 | 0.729 | 0.289 | 0.401 | 59.89% | 6.15%
 | after | 0.656 | 0.391 | 0.317 | 0.420 | 18.51% | 55.96%
Palu-TM | before | 0.214 | 0.613 | 0.186 | 0.246 | 49.66% | 10.47%
 | after | 0.305 | 0.617 | 0.252 | 0.334 | 41.48% | 13.18%
Santarosa-WF | before | 0.261 | 0.518 | 0.218 | 0.287 | 36.61% | 11.00%
 | after | 0.707 | 0.237 | 0.206 | 0.257 | 8.10% | 70.59%
Socal-WF | before | 0.172 | 0.577 | 0.159 | 0.237 | 58.38% | 18.14%
 | after | 0.398 | 0.511 | 0.288 | 0.397 | 38.85% | 48.64%
Joplin-TD | before | 0.388 | 0.715 | 0.334 | 0.447 | 44.79% | 12.86%
 | after | 0.828 | 0.509 | 0.469 | 0.567 | 13.14% | 41.95%
Moore-TD | before | 0.635 | 0.841 | 0.561 | 0.673 | 27.70% | 5.10%
 | after | 0.863 | 0.789 | 0.702 | 0.793 | 16.97% | 9.45%
Nepal-FD | before | 0.115 | 0.623 | 0.106 | 0.167 | 79.36% | 13.13%
 | after | 0.753 | 0.424 | 0.368 | 0.489 | 14.68% | 40.98%
Portugal-WF | before | 0.149 | 0.624 | 0.145 | 0.216 | 71.98% | 40.97%
 | after | 0.436 | 0.554 | 0.366 | 0.460 | 37.72% | 45.63%
Tuscaloosa-TD | before | 0.511 | 0.770 | 0.443 | 0.577 | 37.04% | 13.24%
 | after | 0.826 | 0.614 | 0.547 | 0.670 | 10.11% | 37.72%
Woolsey-WF | before | 0.496 | 0.830 | 0.446 | 0.591 | 34.43% | 9.99%
 | after | 0.676 | 0.723 | 0.531 | 0.671 | 24.45% | 20.97%