Remote Sensing
  • Article
  • Open Access

27 May 2022

Research on Lightweight Disaster Classification Based on High-Resolution Remote Sensing Images

1. School of Electronic Information, Wuhan University, Wuhan 430072, China
2. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
3. School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
4. Hubei Luojia Laboratory, Wuhan 430079, China
This article belongs to the Special Issue Remote Sensing in Structural Health Monitoring

Abstract

With natural disasters occurring ever more frequently, classifying and identifying them has become very important. We propose a lightweight disaster classification model that achieves higher accuracy with fewer computations and parameters than other classification models. To this end, this paper proposes the SDS-Network algorithm, built on ResNet, to handle the above problems for remote sensing images. First, a spatial attention mechanism is introduced to improve the accuracy of the algorithm; then, depthwise separable convolution is applied to reduce the model's computations and parameters while preserving accuracy; finally, the model is further improved by tuning several hyperparameters. The experimental results show that, compared with the classic AlexNet, ResNet18, VGG16, VGG19, and DenseNet121 classification models, the SDS-Network achieves higher accuracy, and compared with the lightweight MobileNet, ShuffleNet, SqueezeNet, and MnasNet series, it has lower model complexity and a higher accuracy rate. A comprehensive performance comparison across the charts in this article further shows that the SDS-Network also outperforms the RegNet series. Finally, verification on public data sets shows that the proposed SDS-Network has good generalization ability and portability, making it well suited to disaster classification tasks.

1. Introduction

In recent years, due to the continuous increase in global greenhouse gas emissions, problems such as glacier melting and climate warming have multiplied, and natural disasters have likewise continued to increase. Natural disasters are commonly divided into four categories: meteorological, geological, biological, and astronomical disasters [1]. This article classifies disasters into car accidents, floods, fires, hurricanes, and earthquakes. Traditionally, these disasters cannot be classified or identified automatically; they can only be prevented and handled within specific disaster areas [2,3].
With the continuous development of remote sensing imagery, deep learning methods have become increasingly important in disaster classification. For example, Ahmouda et al. [4] mapped short- and long-term changes in behavior and tweeting activity in areas affected by natural disasters by analyzing earthquakes in Nepal and central Italy. Scientists have used deep learning techniques to study information data, land cover, floods, and more in disaster areas, finding that disasters remain of high research value [5,6,7,8].
Many researchers have proposed their own disaster classification methods. For example, Cheng Ximeng of the China University of Geosciences (Beijing) [9] automatically classified disasters from high-resolution remote sensing images of earthquake damage and proposed a rapid earthquake disaster assessment model. Liu Hongyan et al. [10] divided sudden geological disasters into four categories and proposed an improved method for monitoring, early warning, and predicting the movement distance after slope instability for emergency prevention. Xu Anxin of Shandong University [11] used an SVM to propose a power grid meteorological disaster early warning method based on scene classification and recognition; this method better extracts the meteorological disaster category and identifies and predicts power grid faults more accurately, laying a foundation for improving early warning of power grid meteorological disasters. The above methods each target a specific disaster and place high requirements on specific scenarios. In view of this, this paper proposes a lightweight disaster classification model (Spatial Depthwise Separable Convolution, SDS-Network) for high-resolution remote sensing images, which may further improve the accuracy of disaster classification while reducing the algorithm's computations and parameters.

1.1. Remote Sensing Images

Deep learning technology has developed rapidly in recent years, and it is increasingly being combined with remote sensing images. Below is an introduction to remote sensing images and deep-learning-related knowledge.
Remote sensing images are generally top-down images captured by airborne or spaceborne platforms such as satellites. Traditional remote sensing image classification commonly relies on the minimum distance method [12], the parallelepiped method [13], the maximum likelihood method [14], and similar techniques; because different ground objects can share the same spectral signature (and the same object can exhibit different spectra), the accuracy of such classification needs further improvement. With the development of remote sensing technology, Liu Jiajia et al. [15] elaborated on the classification of urban buildings based on remote sensing images; Li Anqi et al. [16] designed a typical crop classification method based on the U-Net algorithm; and Wang Ziqi et al. [17] adopted a knowledge graph to supplement remote sensing retrieval and positioning, halving the image retrieval time. The above methods put forward better ideas for the classification and positioning of remote sensing images, but each is tied to a specific application scenario, which limits their use for classifying high-resolution remote sensing images in the disaster scenario considered in this paper. In view of this, this paper proposes the SDS-Network, which can be used in disaster classification tasks.
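As a concrete illustration of the first of these traditional techniques, the minimum distance method [12] represents each class by the mean of its training pixels and assigns a new pixel to the class whose mean is nearest in spectral space. The sketch below is illustrative only; the band values and class names are invented, not taken from the paper.

```python
# Minimal minimum-distance classifier sketch (illustrative data, not from the paper).
import math

def class_means(samples):
    """samples: {class_name: [feature_vector, ...]} -> {class_name: mean_vector}"""
    means = {}
    for name, vecs in samples.items():
        n = len(vecs)
        means[name] = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
    return means

def classify(pixel, means):
    """Assign the pixel to the class with the nearest mean (Euclidean distance)."""
    return min(means, key=lambda name: math.dist(pixel, means[name]))

training = {
    "water":      [[0.10, 0.20, 0.05], [0.12, 0.18, 0.07]],
    "vegetation": [[0.30, 0.60, 0.40], [0.28, 0.55, 0.45]],
}
means = class_means(training)
print(classify([0.11, 0.19, 0.06], means))  # prints: water
```

The "same spectrum, different objects" problem mentioned above shows up directly here: two different land-cover classes with overlapping spectral means cannot be separated by this rule, which is one reason learned features are preferred.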

1.2. Deep Learning

Deep learning methods took shape around 2000; they can better extract useful information from raw images and model correlations. To date, they have been used effectively in target detection [18], natural language processing [19], speech processing [20], and semantic analysis [21], greatly promoting the development of artificial intelligence. Representative model families include the multi-layer perceptron [22], the deep neural network [23], and the recurrent neural network [24], with well-known instances such as the deep belief network (DBN), the convolutional neural network (CNN), and the recurrent neural network (RNN). CNN models, including LeNet5 [25], AlexNet [26], VGG [27], GoogLeNet [28], ResNet [29], Wide ResNet [30], Xception [31], DenseNet [32], SENet [33], SqueezeNet [34], MobileNet [35], and ShuffleNet [36], are mainly used in image classification. Among them, ResNet's skip connections make a major contribution, solving the gradient vanishing and explosion that come with deepening models [29]; SqueezeNet, ShuffleNet, and MobileNet are lightweight models that make it practical to run networks on embedded devices such as mobile phones. With the continuous development of CNNs, more and more classification models are being proposed to further meet the needs of daily life and industrial production. The algorithm proposed in this article makes its own contribution to this line of work.
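The parameter savings behind lightweight models such as Xception [31] and MobileNet [35], and behind the depthwise separable convolution that the SDS-Network adopts, come from factorizing a standard convolution into a depthwise step plus a 1x1 pointwise step. The back-of-the-envelope sketch below uses illustrative layer sizes (not figures from the paper) to compare the weight counts:

```python
# A standard k x k convolution with C_in input and C_out output channels
# has k*k*C_in*C_out weights; the separable version factorizes it into a
# k x k depthwise convolution (k*k*C_in weights) followed by a 1x1
# pointwise convolution (C_in*C_out weights). Layer sizes here are
# illustrative assumptions, not taken from the SDS-Network itself.

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)   # 73,728 weights
sep = separable_conv_params(k, c_in, c_out)  # 8,768 weights
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a 3x3 layer the factorization shrinks the weight count by roughly a factor of eight, which is why the paper can report lower model complexity while keeping accuracy: the pointwise convolution still mixes channels, and only the cheap depthwise step operates spatially.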

4. Conclusions

In recent years, natural disasters have occurred more frequently. In view of this, a disaster classification model was proposed in this paper to address the low accuracy of current classification models. This article first applied a spatial attention mechanism to ResNet to improve the accuracy of the algorithm, then used depthwise separable convolution to reduce the algorithm's computations and parameters, and finally fine-tuned the hyperparameters to adjust the model. The proposed model achieved good results when compared both with classic classification models and with lightweight and other newer algorithms. The Grad-CAM visualization method was then used to verify that the model's recognition is correct, after which the data were made public. Experiments on CIFAR-100 and Caltech further showed that the algorithm still performs well, verifying its generalization ability and portability.

Author Contributions

J.Y.: experiments and methodology; X.M.: funding and methodology; G.H.: methodology; S.L.: methodology; W.G.: methodology. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 42171464, 41971283, 41801261, 41827801, 41801282), the Open Research Fund of National Earth Observation Data Center (grant number NODAOP2021005), and LIESMARS Special Research Funding.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, H. Research on Data Dependence of Natural Disaster Events. Ph.D. Thesis, University of Chinese Academy of Sciences (Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences), Beijing, China, 2018. [Google Scholar]
  2. Ma, X.; Zhang, H.W.; Han, G.; Mao, F.Y.; Xu, H.; Shi, T.; Hu, H.; Sun, H.; Gong, W. A Regional Spatiotemporal Downscaling Method for CO₂ Columns. IEEE Trans. Geosci. Remote. Sens. 2021, 59, 10. [Google Scholar] [CrossRef]
  3. Xu, W.; Wang, W.; Wang, N.; Chen, B. A New Algorithm for Himawari-8 Aerosol Optical Depth Retrieval by Integrating Regional PM2.5 Concentrations. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4106711. [Google Scholar] [CrossRef]
  4. Ahmouda, A.; Hochmair, H.H.; Cvetojevic, S. Analyzing the effect of earthquakes on OpenStreetMap contribution patterns and tweeting activities. Geo-Spat. Inf. Sci. 2018, 21, 195–212. [Google Scholar] [CrossRef] [Green Version]
  5. Li, G.; Zhao, J.; Murray, V.; Song, C.; Zhang, L. Gap analysis on open data interconnectivity for disaster risk research. Geo-Spat. Inf. Sci. 2019, 22, 45–58. [Google Scholar] [CrossRef] [Green Version]
  6. Zahra, K.; Ostermann, F.O.; Purves, R.S. Geographic variability of Twitter usage characteristics during disaster events. Geo-Spat. Inf. Sci. 2017, 20, 231–240. [Google Scholar] [CrossRef] [Green Version]
  7. Ding, Y.; Zhu, Q.; Lin, H. An integrated virtual geographic environmental simulation framework: A case study of flood disaster simulation. Geo-Spat. Inf. Sci. 2014, 17, 190–200. [Google Scholar] [CrossRef] [Green Version]
  8. Liu, B.; Ma, X.; Ma, Y.; Li, H.; Jin, S.; Fan, R.; Gong, W. The relationship between atmospheric boundary layer and temperature inversion layer and their aerosol capture capabilities. Atmos. Res. 2022, 271, 106121. [Google Scholar] [CrossRef]
  9. Cheng, X. Research on Automatic Classification Technology of Disaster Targets Based on High-Resolution Remote Sensing Images. Ph.D. Thesis, China University of Geosciences (Beijing), Beijing, China, 2016. [Google Scholar]
  10. Liu, H.; Zhang, X.; Li, J.; Qi, X.; Chen, H. Classification method of slope (landslide) based on emergency prevention and control of sudden geological disasters. J. Disaster Prev. Mitig. Eng. 2021, 41, 193–202, 237. [Google Scholar] [CrossRef]
  11. Xu, A. Power Grid Weather Disaster Warning Method Based on Scene Classification and Recognition. Ph.D. Thesis, Shandong University, Shandong, China, 2021. [Google Scholar] [CrossRef]
  12. Dong, Y.; Liu, Q.; Du, B.; Zhang, L. Weighted Feature Fusion of Convolutional Neural Network and Graph Attention Network for Hyperspectral Image Classification. IEEE Trans. Image Processing 2022, 31, 1559–1572. [Google Scholar] [CrossRef]
  13. Elfarra, F.G.; Calin, M.A.; Parasca, S.V. Computer-aided detection of bone metastasis in bone scintigraphy images using parallelepiped classification method. Ann. Nucl. Med. 2019, 33, 866–874. [Google Scholar] [CrossRef]
  14. Angkunsit, A.; Suntornchost, J. Adjusted maximum likelihood method for multivariate Fay–Herriot model. J. Stat. Plan. Inference 2022, 219, 231–249. [Google Scholar] [CrossRef]
  15. Jiajia, L.; Zhihui, L.; Feng, L. Classification of urban buildings based on remote sensing images. J. Nat. Disasters 2021, 30, 61–66. [Google Scholar] [CrossRef]
  16. Li, A.; Li, M.; Yu, H.; Zhang, H. Improved U-Net Algorithm in the Classification of Typical Crops in Remote Sensing Images. Infrared Laser Eng. 2022, 1–9. Available online: http://kns.cnki.net/kcms/detail/12.1261.TN.20211228.1518.016.html (accessed on 12 April 2022).
  17. Wang, Z.; Li, H. Remote sensing image retrieval and positioning method based on knowledge graph. Shanghai Aerosp. 2021, 38, 93–99. [Google Scholar] [CrossRef]
  18. Wilpshaar, M.; de Bruin, G.; Versteijlen, N.; van der Valk, K.; Griffioen, J. Comment on “Greenhouse gas emissions from marine decommissioned hydrocarbon wells: Leakage detection, monitoring and mitigation strategies” by Christoph Böttner, Matthias Haeckel, Mark Schmidt, Christian Berndt, Lisa Vielstädte, Jakob A. Kutsch, Jens Karstens & Tim Weiß. Int. J. Greenh. Gas Control. 2021, 110, 2021–2025. [Google Scholar]
  19. Bragg, J.; Cohan, A.; Lo, K.; Beltagy, I. Flex: Unifying evaluation for few-shot nlp. arXiv 2021, arXiv:2107.07170v2. [Google Scholar]
  20. Sullivan, A.E.; Crosse, M.J.; Di Liberto, G.M.; de Cheveigné, A.; Lalor, E.C. Neurophysiological indices of audiovisual speech processing reveal a hierarchy of multisensory integration effects. J. Neurosci. 2021, 41, 4991–5003. [Google Scholar] [CrossRef]
  21. Maulud, D.; Zeebaree, S.; Jacksi, K.; Sadeeq, M.; Sharif, K. State of art for semantic analysis of natural language processing. Qubahan Acad. J. 2021, 1, 21–28. [Google Scholar] [CrossRef]
  22. Ahmadlou, M.; Al-Fugara, A.; Al-Shabeeb, A.R.; Arora, A.; Al-Adamat, R.; Pham, Q.B.; Al-Ansari, N.; Linh, N.T.T.; Sajedi, H. Flood susceptibility mapping and assessment using a novel deep learning model combining multilayer perceptron and autoencoder neural networks. J. Flood Risk Manag. 2021, 14, e12683. [Google Scholar] [CrossRef]
  23. Srinidhi, C.L.; Ciga, O.; Martel, A.L. Deep neural network models for computational histopathology: A survey. Med. Image Anal. 2021, 67, 101813. [Google Scholar] [CrossRef]
  24. Lin, J.C.W.; Shao, Y.; Djenouri, Y.; Yun, U. ASRNN: A recurrent neural network with an attention model for sequence labeling. Knowl. Based Syst. 2021, 212, 106548. [Google Scholar] [CrossRef]
  25. El-Sawy, A.; Hazem, E.L.B.; Loey, M. CNN for handwritten arabic digits recognition based on LeNet-5. In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics; Springer: Cham, Switzerland, 2016; pp. 566–575. [Google Scholar]
  26. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Van Esesn, B.C.; Awwal, A.A.S.; Asari, V.K. The history began from alexnet: A comprehensive survey on deep learning approaches. arXiv 2018, arXiv:1803.01164. [Google Scholar]
  27. Sengupta, A.; Ye, Y.; Wang, R.; Liu, C.; Roy, K. Going deeper in spiking neural networks: VGG and residual architectures. Front. Neurosci. 2019, 13, 95. [Google Scholar] [CrossRef] [PubMed]
  28. Ballester, P.; Araujo, R.M. On the performance of GoogLeNet and AlexNet applied to sketches. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence at the Phoenix Convention Center, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
  29. Targ, S.; Almeida, D.; Lyman, K. Resnet in resnet: Generalizing residual architectures. arXiv 2016, arXiv:1603.08029. [Google Scholar]
  30. Zagoruyko, S.; Komodakis, N. Wide residual networks. arXiv 2016, arXiv:1605.07146. [Google Scholar]
  31. Chollet, F. Xception: Deep learning with depthwise separable convolutions. arXiv 2017, arXiv:1610.02357. [Google Scholar]
  32. Gao, X.; Chen, T.; Niu, R.; Plaza, A. Recognition and mapping of landslide using a fully convolutional DenseNet and influencing factors. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2021, 14, 7881–7894. [Google Scholar] [CrossRef]
  33. Hermine, O.; Mariette, X.; Tharaux, P.L.; Resche-Rigon, M.; Porcher, R.; Ravaud, P. Effect of tocilizumab vs usual care in adults hospitalized with COVID-19 and moderate or severe pneumonia: A randomized clinical trial. JAMA Intern. Med. 2021, 181, 32–40. [Google Scholar] [CrossRef]
  34. Zhang, H.; Ma, J. SDNet: A versatile squeeze-and-decomposition network for real-time image fusion. Int. J. Comput. Vis. 2021, 129, 2761–2785. [Google Scholar] [CrossRef]
  35. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level Accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  36. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
  37. Zhong, Y.; Deng, W.; Hu, J.; Zhao, D.; Li, X.; Wen, D. SFace: Sigmoid-constrained hypersphere loss for robust face recognition. IEEE Trans. Image Process. 2021, 30, 2587–2598. [Google Scholar] [CrossRef]
  38. Tan, M.; Le, Q.V. Mixconv: Mixed depthwise convolutional kernels. arXiv 2019, arXiv:1907.09595. [Google Scholar]
  39. Hua, B.S.; Tran, M.K.; Yeung, S.K. Pointwise convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 984–993. [Google Scholar]
  40. Garcia-Orellana, J.; Rodellas, V.; Tamborski, J.; Diego-Feliu, M.; van Beek, P.; Weinstein, Y.; Charette, M.; Alorda-Kleinglass, A.; Michael, M.H.; Stieglitz, T. Radium isotopes as submarine groundwater discharge (SGD) tracers: Review and recommendations. Earth-Sci. Rev. 2021, 220, 103681. [Google Scholar] [CrossRef]
  41. Zhang, G.; Kenta, N.; Kleijn, W.B. Extending AdamW by Leveraging Its Second Moment and Magnitude. arXiv 2021, arXiv:2112.06125. [Google Scholar]
  42. Hanin, B. Universal function approximation by deep neural nets with bounded width and relu activations. Mathematics 2019, 7, 992. [Google Scholar] [CrossRef] [Green Version]
  43. Carrat, F.; Fontaine, H.; Dorival, C.; Simony, M.; Diallo, A.; Hezode, C.; De Ledinghen, V.; Larrey, D.; Haour, G.; Bronowicki, J.P.; et al. Clinical outcomes in patients with chronic hepatitis C after direct-acting antiviral treatment: A prospective cohort study. Lancet 2019, 393, 1453–1464. [Google Scholar] [CrossRef]
  44. Tan, M.; Chen, B.; Pang, R.; Vasudevan, V.; Sandler, M.; Howard, A.; Le, Q.V. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2820–2828. [Google Scholar]
  45. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  46. Radosavovic, I.; Kosaraju, R.P.; Girshick, R.; He, K.; Dollár, P. Designing network design spaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 10428–10436. [Google Scholar]
  47. Zhou, B.; Khosla, A.; Lapedriza, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
  48. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Why did you say that? arXiv 2016, arXiv:1611.07450. [Google Scholar]
  49. Smith, R.J.; Amaral, R.; Heywood, M.I. Evolving simple solutions to the CIFAR-10 benchmark using tangled program graphs. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; pp. 2061–2068. [Google Scholar]
  50. Bansal, M.; Kumar, M.; Sachdeva, M.; Mittal, A. Transfer learning for image classification using VGG19: Caltech-101 image data set. J. Ambient. Intell. Humaniz. Comput. 2021, 1–12. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
