HRRNet: Hierarchical Refinement Residual Network for Semantic Segmentation of Remote Sensing Images
Abstract
1. Introduction
- (1) The proposed CAM and PRAM sub-modules of HRRNet fully exploit the positional information of feature maps and the inter-channel dependence of contextual information, strengthening the expressive power of deep features.
- (2) Using ResNet50 as the feature extractor, features from different stages and different scales are fused hierarchically to refine the feature maps; this multi-scale fusion also improves the model's ability to recognize diverse classes of ground objects and promotes its generalization.
- (3) Different residual structures improve the correlation between the gradient and the loss during training, which enhances the learning ability of the network and alleviates the vanishing-gradient problem.
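This outline does not give the internal equations of CAM or PRAM, but the channel-gating idea of contribution (1) combined with the residual connection of contribution (3) can be sketched with a generic SE-style attention block. The pooling choice, reduction ratio, and random weights below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def channel_attention(x, reduction=4):
    """SE-style channel attention sketch (hypothetical stand-in for the
    paper's CAM). x: feature map of shape (C, H, W)."""
    rng = np.random.default_rng(0)
    c = x.shape[0]
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: two FC layers with a bottleneck, then sigmoid gating
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = w2 @ np.maximum(w1 @ z, 0.0)      # ReLU in the bottleneck
    gate = 1.0 / (1.0 + np.exp(-s))       # per-channel weights in (0, 1)
    # Re-weight the channels; a residual add keeps the identity path,
    # echoing contribution (3)
    return x + gate[:, None, None] * x

x = np.ones((8, 4, 4))
y = channel_attention(x)
print(y.shape)  # (8, 4, 4)
```

Because the gate is strictly between 0 and 1, the residual output stays between the identity path and twice it, so the block can only modulate (never erase) the incoming features.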
2. Related Work
2.1. Semantic Segmentation Model with Attention Mechanism
2.2. Semantic Segmentation Model Based on Multi-Branch Feature Fusion
2.3. Semantic Segmentation Model Based on Transformer
3. Proposed Network Model
3.1. Illustration of the Proposed HRRNet
Algorithm 1: Train an optimal HRRNet model.
Input: a set of images and their labels.
Output: the segmentation results of the test set.
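The body of Algorithm 1 is elided in this excerpt, so the following is only a minimal stand-in that shows its input/output contract (images and pixel labels in, predictions on held-out data out). The per-pixel logistic-regression "model", loss, and hyperparameters are placeholders, not HRRNet:

```python
import numpy as np

def train(images, labels, epochs=200, lr=0.5):
    """Toy supervised loop: images (N, H, W), labels (N, H, W) binary masks.
    Full-batch gradient descent on per-pixel logistic loss."""
    w, b = 0.0, 0.0
    x, y = images.ravel(), labels.ravel()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # per-pixel probability
        w -= lr * np.mean((p - y) * x)           # cross-entropy gradient, w
        b -= lr * np.mean(p - y)                 # cross-entropy gradient, b
    return w, b

def predict(images, w, b):
    """Threshold the per-pixel probability to get a segmentation mask."""
    return (1.0 / (1.0 + np.exp(-(w * images + b))) > 0.5).astype(int)

# Synthetic data: pixels brighter than 0.5 belong to class 1
rng = np.random.default_rng(0)
imgs = rng.random((4, 8, 8))
masks = (imgs > 0.5).astype(int)
w, b = train(imgs, masks)
acc = (predict(imgs, w, b) == masks).mean()
```

The real algorithm would replace the scalar weights with HRRNet's parameters and the threshold with an argmax over class logits, but the train/predict split mirrors the stated input and output.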
3.2. Channel Attention Module
3.3. Pooling Residual Attention Module
4. Experiments and Results
4.1. Dataset
4.2. Dataset Preprocessing and Evaluation Metrics
4.3. Training Details
4.4. Ablation Study
4.5. Quantitative Comparison of Different Models
5. Discussion
5.1. Qualitative Analysis of the Segmentation Results from Ablation Experiments
5.2. Qualitative Analysis of the Segmentation Results from Various Models
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Shi, H.; Chen, L.; Bi, F.K.; Chen, H.; Yu, Y. Accurate Urban Area Detection in Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1948–1952. [Google Scholar] [CrossRef]
- Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86. [Google Scholar] [CrossRef]
- Ardila, J.P.; Tolpekin, V.A.; Bijker, W.; Stein, A. Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images. ISPRS J. Photogramm. Remote Sens. 2011, 66, 762–775. [Google Scholar] [CrossRef]
- Anand, T.; Sinha, S.; Mandal, M.; Chamola, V.; Yu, F.R. AgriSegNet: Deep aerial semantic segmentation framework for IoT-assisted precision agriculture. IEEE Sens. J. 2021, 21, 17581–17590. [Google Scholar] [CrossRef]
- Chowdhury, T.; Rahnemoonfar, M. Attention based semantic segmentation on UAV dataset for natural disaster damage assessment. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021; pp. 2325–2328. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Voltersen, M.; Berger, C.; Hese, S.; Schmullius, C. Object-based land cover mapping and comprehensive feature calculation for an automated derivation of urban structure types at block level. Remote Sens. Environ. 2014, 154, 192–201. [Google Scholar] [CrossRef]
- Wurm, M.; Taubenböck, H.; Weigand, M.; Schmitt, A. Slum mapping in polarimetric SAR data using spatial features. Remote Sens. Environ. 2017, 194, 190–204. [Google Scholar] [CrossRef]
- Pan, W.; Zhao, Z.; Huang, W.; Zhang, Z.; Fu, L.; Pan, Z.; Yu, J.; Wu, F. Video Moment Retrieval With Noisy Labels. IEEE Trans. Neural Netw. Learn. Syst. 2022; in press. [Google Scholar] [CrossRef]
- Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–Spatial Feature Tokenization Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
- Ma, L.; Zheng, Y.; Zhang, Z.; Yao, Y.; Fan, X.; Ye, Q. Motion Stimulation for Compositional Action Recognition. IEEE Trans. Circuits Syst. Video Technol. 2022; in press. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Noh, H.; Hong, S.; Han, B. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1520–1528. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Chaurasia, A.; Culurciello, E. Linknet: Exploiting encoder representations for efficient semantic segmentation. In Proceedings of the IEEE Visual Communications and Image Processing, St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239. [Google Scholar]
- Peng, C.; Li, Y.; Jiao, L.; Chen, Y.; Shang, R. Densely based multi-scale and multi-modal fully convolutional networks for high-resolution remote-sensing image semantic segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2612–2626. [Google Scholar] [CrossRef]
- Jung, H.; Choi, H.S.; Kang, M. Boundary enhancement semantic segmentation for building extraction from remote sensed image. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–12. [Google Scholar] [CrossRef]
- Aryal, J.; Neupane, B. Multi-Scale Feature Map Aggregation and Supervised Domain Adaptation of Fully Convolutional Networks for Urban Building Footprint Extraction. Remote Sens. 2023, 15, 488. [Google Scholar] [CrossRef]
- Li, Y.; Cheng, Z.; Wang, C.; Zhao, J.; Huang, L. RCCT-ASPPNet: Dual-Encoder Remote Image Segmentation Based on Transformer and ASPP. Remote Sens. 2023, 15, 379. [Google Scholar] [CrossRef]
- Fu, L.; Zhang, D.; Ye, Q. Recurrent Thrifty Attention Network for Remote Sensing Scene Recognition. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8257–8268. [Google Scholar] [CrossRef]
- Yin, P.; Zhang, D.; Han, W.; Li, J.; Cheng, J. High-Resolution Remote Sensing Image Semantic Segmentation via Multiscale Context and Linear Self-Attention. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 9174–9185. [Google Scholar] [CrossRef]
- He, X.; Zhou, Y.; Zhao, J.; Zhang, M.; Yao, R.; Liu, B.; Li, H. Semantic segmentation of remote-sensing images based on multiscale feature fusion and attention refinement. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
- Niu, R.; Sun, X.; Tian, Y.; Diao, W.; Chen, K.; Fu, K. Hybrid multiple attention network for semantic segmentation in aerial images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–18. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef] [Green Version]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Yuan, M.; Ren, D.; Feng, Q.; Wang, Z.; Dong, Y.; Lu, F.; Wu, X. MCAFNet: A Multiscale Channel Attention Fusion Network for Semantic Segmentation of Remote Sensing Images. Remote Sens. 2023, 15, 361. [Google Scholar] [CrossRef]
- Zhang, T.; Zhang, X.; Zhu, P.; Tang, X.; Li, C.; Jiao, L.; Zhou, H. Semantic attention and scale complementary network for instance segmentation in remote sensing images. IEEE Trans. Cybern. 2022, 52, 10999–11013. [Google Scholar] [CrossRef]
- Bai, L.; Lin, X.; Ye, Z.; Xue, D.; Yao, C.; Hui, M. MsanlfNet: Semantic segmentation network with multiscale attention and nonlocal filters for high-resolution remote sensing images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
- Wang, Z.; Du, L.; Zhang, P.; Li, L.; Wang, F.; Xu, S.; Su, H. Visual attention-based target detection and discrimination for high-resolution SAR images in complex scenes. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1855–1872. [Google Scholar] [CrossRef]
- Wang, Z.; Xin, Z.; Liao, G.; Huang, P.; Xuan, J.; Sun, Y.; Tai, Y. Land-Sea Target Detection and Recognition in SAR Image Based on Non-Local Channel Attention Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
- Wang, K.; Du, S.; Liu, C.; Cao, Z. Interior Attention-Aware Network for Infrared Small Target Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
- Sun, L.; He, C.; Zheng, Y.; Wu, Z.; Jeon, B. Tensor Cascaded-Rank Minimization in Subspace: A Unified Regime for Hyperspectral Image Low-Level Vision. IEEE Trans. Image Process. 2022, 32, 100–115. [Google Scholar] [CrossRef]
- Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 510–519. [Google Scholar]
- Zhang, X.; Li, L.; Di, D.; Wang, J.; Chen, G.; Jing, W.; Emam, M. SERNet: Squeeze and Excitation Residual Network for Semantic Segmentation of High-Resolution Remote Sensing Images. Remote Sens. 2022, 14, 4770. [Google Scholar] [CrossRef]
- Zhao, D.; Wang, C.; Gao, Y.; Shi, Z.; Xie, F. Semantic segmentation of remote sensing image based on regional self-attention mechanism. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–21 June 2019; pp. 3146–3154. [Google Scholar]
- Li, Y.; Yao, T.; Pan, Y.; Mei, T. Contextual Transformer Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1489–1500. [Google Scholar] [CrossRef] [PubMed]
- Ding, L.; Tang, H.; Bruzzone, L. LANet: Local attention embedding to improve the semantic segmentation of remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 426–435. [Google Scholar] [CrossRef]
- Sun, L.; Cheng, S.; Zheng, Y.; Wu, Z.; Zhang, J. SPANet: Successive Pooling Attention Network for Semantic Segmentation of Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4045–4057. [Google Scholar] [CrossRef]
- Wang, D.; Zhang, D.; Yang, G.; Xu, B.; Luo, Y.; Yang, X. SSRNet: In-field counting wheat ears using multi-stage convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–11. [Google Scholar] [CrossRef]
- Chen, J.; Zhu, J.; Guo, Y.; Sun, G.; Zhang, Y.; Deng, M. Unsupervised Domain Adaptation for Semantic Segmentation of High-Resolution Remote Sensing Imagery Driven by Category-Certainty Attention. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
- Zhang, G.; Xue, J.H.; Xie, P.; Yang, S.; Wang, G. Non-local aggregation for RGB-D semantic segmentation. IEEE Signal Process. Lett. 2021, 28, 658–662. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3349–3364. [Google Scholar] [CrossRef] [Green Version]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284. [Google Scholar]
- Li, R.; Zheng, S.; Zhang, C.; Duan, C.; Su, J.; Wang, L.; Atkinson, P.M. Multiattention network for semantic segmentation of fine-resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13. [Google Scholar] [CrossRef]
- Zuo, R.; Zhang, G.; Zhang, R.; Jia, X. A Deformable Attention Network for High-Resolution Remote Sensing Images Semantic Segmentation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14. [Google Scholar] [CrossRef]
- Liu, R.; Mi, L.; Chen, Z. AFNet: Adaptive fusion network for remote sensing image semantic segmentation. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7871–7886. [Google Scholar] [CrossRef]
- Peng, C.; Zhang, K.; Ma, Y.; Ma, J. Cross fusion net: A fast semantic segmentation network for small-scale semantic information capturing in aerial scenes. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13. [Google Scholar] [CrossRef]
- Zhao, Q.; Liu, J.; Li, Y.; Zhang, H. Semantic segmentation with attention mechanism for remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13. [Google Scholar] [CrossRef]
- Ding, L.; Lin, D.; Lin, S.; Zhang, J.; Cui, X.; Wang, Y.; Tang, H.; Bruzzone, L. Looking Outside the Window: Wide-Context Transformer for the Semantic Segmentation of High-Resolution Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4410313. [Google Scholar] [CrossRef]
- Song, P.; Li, J.; An, Z.; Fan, H.; Fan, L. CTMFNet: CNN and Transformer Multiscale Fusion Network of Remote Sensing Urban Scene Imagery. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
- Zhang, C.; Jiang, W.; Zhang, Y.; Wang, W.; Zhao, Q.; Wang, C. Transformer and CNN Hybrid Deep Neural Network for Semantic Segmentation of Very-High-Resolution Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–20. [Google Scholar] [CrossRef]
- He, X.; Zhou, Y.; Zhao, J.; Zhang, D.; Yao, R.; Xue, Y. Swin Transformer Embedding UNet for Remote Sensing Image Semantic Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
Model | Building F1 (%) | Low-Veg F1 (%) | Surface F1 (%) | Tree F1 (%) | Car F1 (%) | OA (%) | mF1 (%) | mIoU (%)
---|---|---|---|---|---|---|---|---
Backbone | 91.93 | 79.39 | 88.97 | 87.92 | 73.90 | 87.23 | 84.42 | 73.61 |
Attention Block1 | 94.61 | 82.98 | 92.34 | 89.39 | 88.95 | 90.07 | 89.65 | 81.48 |
Attention Block2 | 94.30 | 83.10 | 92.78 | 89.58 | 88.63 | 90.17 | 89.68 | 81.51 |
Attention Block3 | 94.92 | 82.90 | 92.32 | 89.47 | 89.70 | 90.21 | 89.86 | 81.82 |
HRRNet | 94.92 | 84.41 | 92.52 | 89.89 | 89.97 | 90.59 | 90.34 | 82.57 |
Model | Building F1 (%) | Low-Veg F1 (%) | Surface F1 (%) | Tree F1 (%) | Car F1 (%) | OA (%) | mF1 (%) | mIoU (%)
---|---|---|---|---|---|---|---|---
Backbone | 91.93 | 79.39 | 88.97 | 87.92 | 73.90 | 87.23 | 84.42 | 73.61 |
CAM | 94.51 | 83.19 | 92.53 | 89.58 | 88.26 | 90.18 | 89.61 | 81.41 |
CAM + PRAM | 94.71 | 84.06 | 92.33 | 89.71 | 89.28 | 90.39 | 90.02 | 82.03 |
HRRNet | 94.92 | 84.41 | 92.52 | 89.89 | 89.97 | 90.59 | 90.34 | 82.57 |
Model | Building F1 (%) | Low-Veg F1 (%) | Surface F1 (%) | Tree F1 (%) | Car F1 (%) | OA (%) | mF1 (%) | mIoU (%)
---|---|---|---|---|---|---|---|---
Backbone | 90.07 | 82.05 | 86.28 | 86.13 | 76.18 | 84.61 | 84.14 | 72.91 |
Attention Block1 | 92.32 | 86.05 | 90.01 | 87.47 | 92.50 | 88.25 | 89.67 | 81.37 |
Attention Block2 | 92.11 | 84.93 | 90.99 | 88.66 | 93.96 | 88.57 | 90.13 | 82.18 |
Attention Block3 | 93.90 | 87.85 | 91.21 | 88.89 | 94.10 | 89.72 | 91.19 | 83.91 |
HRRNet | 95.88 | 87.95 | 92.75 | 88.93 | 93.68 | 90.70 | 91.84 | 85.05 |
Model | Building F1 (%) | Low-Veg F1 (%) | Surface F1 (%) | Tree F1 (%) | Car F1 (%) | OA (%) | mF1 (%) | mIoU (%)
---|---|---|---|---|---|---|---|---
Backbone | 90.07 | 82.05 | 86.28 | 86.13 | 76.18 | 84.61 | 84.14 | 72.91 |
CAM | 92.70 | 86.28 | 90.79 | 88.07 | 92.82 | 88.64 | 90.13 | 82.14 |
CAM + PRAM | 95.92 | 87.52 | 92.67 | 88.89 | 93.38 | 90.47 | 91.68 | 84.78 |
HRRNet | 95.88 | 87.95 | 92.75 | 88.93 | 93.68 | 90.70 | 91.84 | 85.05 |
Model | Building F1 (%) | Low-Veg F1 (%) | Surface F1 (%) | Tree F1 (%) | Car F1 (%) | OA (%) | mF1 (%) | mIoU (%)
---|---|---|---|---|---|---|---|---
DeepLabV3+ [16] | 92.51 | 79.18 | 90.62 | 87.26 | 79.25 | 87.71 | 85.77 | 75.50 |
CBAMNet [27] | 91.62 | 80.51 | 89.82 | 87.87 | 74.05 | 87.47 | 84.77 | 74.12 |
SENet [26] | 92.55 | 81.53 | 90.65 | 88.60 | 79.70 | 88.46 | 86.61 | 76.73 |
PSPNet [17] | 92.15 | 81.34 | 90.30 | 88.42 | 78.11 | 88.21 | 86.06 | 75.93 |
SKNet [35] | 94.56 | 83.29 | 92.31 | 89.38 | 86.85 | 90.08 | 89.28 | 80.86 |
DANet [38] | 95.00 | 82.48 | 92.13 | 88.94 | 85.96 | 89.91 | 88.90 | 80.30 |
CoTNet [39] | 94.61 | 82.40 | 92.04 | 89.09 | 86.04 | 89.73 | 88.84 | 80.18 |
LANet [40] | 94.60 | 81.83 | 92.13 | 88.64 | 85.96 | 89.61 | 88.63 | 79.88 |
SPANet [41] | 94.87 | 82.79 | 92.16 | 89.15 | 88.11 | 90.01 | 89.41 | 81.11 |
HRRNet | 94.92 | 84.41 | 92.52 | 89.89 | 89.97 | 90.59 | 90.34 | 82.57 |
Model | Building F1 (%) | Low-Veg F1 (%) | Surface F1 (%) | Tree F1 (%) | Car F1 (%) | OA (%) | mF1 (%) | mIoU (%)
---|---|---|---|---|---|---|---|---
DeepLabV3+ [16] | 92.28 | 83.51 | 89.88 | 85.17 | 89.19 | 87.06 | 88.01 | 78.73 |
CBAMNet [27] | 90.11 | 81.47 | 88.60 | 82.97 | 87.04 | 85.14 | 86.04 | 75.64 |
SENet [26] | 92.59 | 84.44 | 90.53 | 85.24 | 87.06 | 87.63 | 87.97 | 78.67 |
PSPNet [17] | 91.88 | 83.31 | 90.55 | 85.75 | 89.31 | 87.24 | 88.16 | 78.97 |
SKNet [35] | 92.72 | 85.02 | 90.85 | 88.10 | 93.12 | 88.46 | 89.96 | 81.89 |
DANet [38] | 93.76 | 87.59 | 90.69 | 88.96 | 93.09 | 89.47 | 90.82 | 83.27 |
CoTNet [39] | 90.48 | 85.56 | 88.00 | 88.40 | 93.20 | 86.96 | 89.13 | 80.49 |
LANet [40] | 93.04 | 86.48 | 90.58 | 88.73 | 93.10 | 88.87 | 90.39 | 82.56 |
SPANet [41] | 94.90 | 87.09 | 91.61 | 88.85 | 94.08 | 89.73 | 91.31 | 84.15 |
HRRNet | 95.88 | 87.95 | 92.75 | 88.93 | 93.68 | 90.70 | 91.84 | 85.05 |
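The indicators reported in the tables above (per-class F1, OA, mF1, mIoU) follow the usual segmentation definitions; assuming the paper computes them in the standard way, they can all be derived from a single class-confusion matrix:

```python
import numpy as np

def seg_metrics(cm):
    """Standard segmentation indicators from a confusion matrix `cm`
    (rows = ground truth, columns = prediction). Returns overall
    accuracy, mean F1, and mean IoU."""
    cm = cm.astype(float)
    tp = np.diag(cm)               # true positives per class
    fp = cm.sum(axis=0) - tp       # false positives per class
    fn = cm.sum(axis=1) - tp       # false negatives per class
    oa = tp.sum() / cm.sum()             # overall accuracy
    f1 = 2 * tp / (2 * tp + fp + fn)     # per-class F1
    iou = tp / (tp + fp + fn)            # per-class IoU
    return oa, f1.mean(), iou.mean()

# Two-class toy example
cm = np.array([[8, 2],
               [1, 9]])
oa, mf1, miou = seg_metrics(cm)
```

Note that mF1 averages per-class F1 scores, so it weights rare classes (e.g. Car) as heavily as common ones, which is why attention to small objects moves mF1 and mIoU more than OA in the tables.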
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Cheng, S.; Li, B.; Sun, L.; Chen, Y. HRRNet: Hierarchical Refinement Residual Network for Semantic Segmentation of Remote Sensing Images. Remote Sens. 2023, 15, 1244. https://doi.org/10.3390/rs15051244