Boosting Urban Openspace Mapping with the Enhancement Feature Fusion of Object Geometry Prior Information from Vision Foundation Model
Abstract
1. Introduction
- High inter-class similarity. Different Urban Objects (UOs) often exhibit similar visual characteristics, such as color, spectral properties, geometry, and texture, making accurate identification challenging. For instance, outdoor parking lots and roads share highly similar visual appearances: both are typically built from the same material, often cement, resulting in similar color and spectral information. Additionally, parking lots and roads often share a similar object composition and contextual elements, such as vehicles and white markings. These similarities make distinguishing between the two categories particularly difficult.
- Complex surrounding environments. Viewed at high resolution, urban areas are fragmented and heterogeneous, creating intricate spatial relationships among urban objects. This complexity leads to challenging conditions, including shadows and mutual occlusion, which further complicate the accurate identification of pixels associated with UOs. For example, parking lots are often located near tall buildings or street trees; because satellite imagery is captured from a bird's-eye perspective, these areas are prone to occlusion and shadowing, further hindering object detection.
- Scale variations. Scale variation in urban object identification arises from two primary factors. The first is objects of the same physical size appearing at different scales within an image because of perspective effects and the sensor's distance from the scene. The second is actual differences in physical size between objects. For UOs, the latter is more significant: an outdoor gymnasium typically occupies a much larger area than a parking lot, and large parking lots are in turn significantly larger than small roadside parking areas.
- (1) To address the issues of high inter-class similarity, complex surrounding environments, and scale variations, the UOSAM model is proposed, which integrates multi-scale semantic information and ubiquitous objects to enhance the performance of UO mapping.
- (2) A pyramid Transformer encoder is utilized to extract feature pyramids at different scales, capturing multi-scale semantic context and compensating for scale variations in UO mapping.
- (3) The Segment Anything Model (SAM) leverages geometry prior information about ubiquitous objects to capture the geometric details of UOs, resulting in more structured and accurate UO mapping.
2. Materials
2.1. Study Area
2.2. Data Sources
2.3. Dataset Generation and Visualization
3. Methods
- The SPFM. The SPFM uses a hierarchical structure in which each layer gradually reduces the spatial resolution through downsampling while increasing the number of channels. This allows the model to capture image information at different scales and to preserve important global and local features. The SPFM uses overlapping patch embedding layers and four Transformer blocks to hierarchically extract spatial features of different scales from the input remote sensing image. The features at each scale are projected to a common embedding dimension through an MLP, then upsampled to a shared resolution and fused. The fused features are passed through a convolutional layer to obtain the semantic features (see the first sketch after this list).
- The GFM. The GFM builds on the Segment Anything Model (SAM), a foundation model for image segmentation that can effectively capture highly complex geometric features. The GFM first uses a Vision Transformer for image encoding, which handles high-resolution inputs while remaining scalable. SAM's Prompt Encoder accepts two types of prompts, sparse (points and boxes) and dense (masks), and converts them into a unified feature representation; the GFM, however, supplies no prompts. The GFM's Mask Decoder uses a modified Transformer decoder that updates the embeddings through self-attention and cross-attention between the prompt and image embeddings. The final mask is then generated through upsampling and a dynamic linear classifier (see the second sketch after this list).
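To make the SPFM data flow concrete, below is a minimal PyTorch sketch of the project-upsample-fuse path described above. It is an illustration, not the authors' implementation: the four-stage Transformer encoder is stood in for by strided convolutions, and the stage widths, embedding dimension, and five-class output head are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SPFMSketch(nn.Module):
    """Sketch of the SPFM decode path: project, upsample, concatenate, fuse."""

    def __init__(self, in_ch=3, dims=(64, 128, 320, 512), embed_dim=256, num_classes=5):
        super().__init__()
        # Stand-in for the four hierarchical Transformer stages with
        # overlapping patch embedding: each stage lowers the resolution and
        # widens the channels (strides 4, 2, 2, 2 -> 1/4, 1/8, 1/16, 1/32).
        self.stages = nn.ModuleList()
        prev = in_ch
        for i, d in enumerate(dims):
            stride = 4 if i == 0 else 2
            self.stages.append(nn.Conv2d(prev, d, kernel_size=3, stride=stride, padding=1))
            prev = d
        # Per-scale MLP (1x1 conv) projecting each stage to the embedding dim.
        self.proj = nn.ModuleList(nn.Conv2d(d, embed_dim, 1) for d in dims)
        # Fusion convolution applied after upsampling and concatenation.
        self.fuse = nn.Sequential(
            nn.Conv2d(embed_dim * len(dims), embed_dim, 1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(embed_dim, num_classes, 1)

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # Project every scale and upsample it to the 1/4-resolution grid.
        target = feats[0].shape[-2:]
        ups = [F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
               for p, f in zip(self.proj, feats)]
        fused = self.fuse(torch.cat(ups, dim=1))  # multi-scale semantic features
        return self.head(fused)                   # per-pixel class logits


logits = SPFMSketch()(torch.randn(1, 3, 512, 512))
print(logits.shape)  # torch.Size([1, 5, 128, 128]), i.e., 1/4 of the input size
```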
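Likewise, a minimal sketch of the GFM's prompt-free use of SAM, assuming Meta's segment-anything package and a locally downloaded ViT-B checkpoint; how UOSAM then fuses these geometric features with the SPFM's semantic features is not reproduced here.

```python
import torch
from segment_anything import sam_model_registry  # pip install segment-anything

# Local SAM ViT-B weights are assumed to have been downloaded beforehand.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
sam.eval()

with torch.no_grad():
    # Random tensor used only to demonstrate shapes; real inputs must be
    # resized and normalized to SAM's expected 1024x1024 format.
    image = torch.randn(1, 3, 1024, 1024)
    image_embeddings = sam.image_encoder(image)      # (1, 256, 64, 64)

    # No prompts: the prompt encoder still returns the empty sparse embedding
    # and the learned "no-mask" dense embedding that the decoder expects.
    sparse, dense = sam.prompt_encoder(points=None, boxes=None, masks=None)

    low_res_masks, iou_pred = sam.mask_decoder(
        image_embeddings=image_embeddings,
        image_pe=sam.prompt_encoder.get_dense_pe(),  # positional-encoding grid
        sparse_prompt_embeddings=sparse,
        dense_prompt_embeddings=dense,
        multimask_output=False,
    )
    print(low_res_masks.shape)  # (1, 1, 256, 256) low-resolution mask logits
```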
3.1. Semantic Pyramid Feature Module
3.2. Geometric Feature Module
3.3. Loss Function
4. Results
4.1. Experimental Setup
4.2. Ablation Study
4.3. The Visualization and Analysis of Class-Specific Attention Maps
4.4. Semantic Segmentation Results of Ten Major Cities
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
IoU (%)

| Model | GS | OSF | TH | WB | NOS | mIoU |
| --- | --- | --- | --- | --- | --- | --- |
| OBRF | 50.33 | 6.15 | 10.67 | 56.57 | 57.58 | 36.26 |
| OBRF-HL | 48.26 | 12.75 | 18.66 | 73.89 | 54.37 | 41.59 |
| UOSAM | 72.56 | 61.01 | 51.65 | 85.35 | 75.70 | 69.25 |
Acc (%)

| Model | GS | OSF | TH | WB | NOS | OA |
| --- | --- | --- | --- | --- | --- | --- |
| OBRF | 66.95 | 6.55 | 13.01 | 62.95 | 83.07 | 66.74 |
| OBRF-HL | 57.87 | 26.55 | 42.15 | 80.33 | 72.04 | 64.47 |
| UOSAM | 83.03 | 69.68 | 65.78 | 92.10 | 88.34 | 84.15 |
IoU (%)

| Model | GS | OSF | TH | WB | NOS | mIoU |
| --- | --- | --- | --- | --- | --- | --- |
| FCN | 71.72 | 57.98 | 51.08 | 84.76 | 75.08 | 68.12 |
| FPN | 66.87 | 39.65 | 41.10 | 78.99 | 69.36 | 59.19 |
| DeepLabV3 | 72.19 | 60.86 | 51.49 | 84.43 | 75.51 | 68.89 |
| FaPN | 67.82 | 53.02 | 44.09 | 80.18 | 72.47 | 63.52 |
| SFNet | 66.87 | 47.68 | 42.85 | 78.17 | 72.04 | 61.52 |
| UperNet | 66.98 | 46.51 | 43.30 | 79.88 | 71.10 | 61.55 |
| SegFormer | 72.42 | 60.40 | 50.64 | 85.51 | 74.98 | 68.79 |
| UOSAM | 72.56 | 61.01 | 51.65 | 85.35 | 75.70 | 69.25 |
Acc (%)

| Model | GS | OSF | TH | WB | NOS | OA |
| --- | --- | --- | --- | --- | --- | --- |
| FCN | 82.30 | 66.85 | 64.31 | 92.26 | 88.31 | 83.68 |
| FPN | 79.11 | 45.86 | 52.35 | 86.17 | 86.61 | 79.36 |
| DeepLabV3 | 82.89 | 69.48 | 64.06 | 91.82 | 88.57 | 83.98 |
| FaPN | 79.47 | 63.90 | 54.92 | 90.13 | 87.58 | 81.10 |
| SFNet | 79.17 | 55.00 | 53.56 | 87.38 | 87.87 | 80.40 |
| UperNet | 79.60 | 53.03 | 54.88 | 87.98 | 86.79 | 80.27 |
| SegFormer | 84.29 | 70.39 | 64.65 | 92.41 | 86.61 | 83.84 |
| UOSAM | 83.03 | 69.68 | 65.78 | 92.10 | 88.34 | 84.15 |
| Method | SPFM | GFM | OA (%) | mIoU (%) |
| --- | --- | --- | --- | --- |
| UOSAM | × | ✓ | 59.63 | 30.01 |
| UOSAM | ✓ | × | 83.84 | 68.79 |
| UOSAM | ✓ | ✓ | 84.15 | 69.25 |
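For reference, the per-class IoU/Acc, mIoU, and OA values in the tables above follow the standard confusion-matrix definitions; the sketch below (with toy numbers, not the paper's data) shows the computation.

```python
import numpy as np

def metrics(conf):
    """conf[i, j] counts pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp        # predicted as the class, but wrong
    fn = conf.sum(axis=1) - tp        # belonging to the class, but missed
    iou = tp / (tp + fp + fn)         # per-class IoU
    acc = tp / conf.sum(axis=1)       # per-class accuracy (recall)
    return iou, acc, iou.mean(), tp.sum() / conf.sum()  # ..., mIoU, OA

conf = np.array([[90, 5, 5],          # toy 3-class confusion matrix
                 [10, 80, 10],
                 [0, 10, 90]])
iou, acc, miou, oa = metrics(conf)
print(iou.round(4), miou.round(4), oa.round(4))
```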