Article

Self-Supervised Learning with Trilateral Redundancy Reduction for Urban Functional Zone Identification Using Street-View Imagery

School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(5), 1504; https://doi.org/10.3390/s25051504
Submission received: 24 January 2025 / Revised: 20 February 2025 / Accepted: 26 February 2025 / Published: 28 February 2025
(This article belongs to the Section Remote Sensors)

Abstract

In recent years, the use of street-view images for urban analysis has received much attention. Despite the abundance of raw data, existing supervised learning methods rely heavily on large-scale, high-quality labels. To address the challenge of label scarcity in urban scene classification tasks, an innovative self-supervised learning framework, Trilateral Redundancy Reduction (Tri-ReD), is proposed. Within this framework, a more restrictive loss, the "trilateral loss", is introduced. By compelling the embeddings of positive samples to be highly correlated, it guides the pre-trained model to learn more essential representations without semantic labels. Furthermore, a novel data augmentation strategy, tri-branch mutually exclusive augmentation (Tri-MExA), is proposed to reduce the uncertainties introduced by traditional random augmentation methods. As a model pre-training method, the Tri-ReD framework is architecture-agnostic, performing effectively on both CNNs and ViTs, which makes it adaptable to a wide variety of downstream tasks. In this paper, 116,491 unlabeled street-view images were used to pre-train models with Tri-ReD to obtain a general representation of urban scenes at the ground level. These pre-trained models were then fine-tuned using supervised data with semantic labels (17,600 images from BIC_GSV and 12,871 from BEAUTY) for the final classification task. Experimental results demonstrate that the proposed self-supervised pre-training method outperformed direct supervised learning approaches for urban functional zone identification by 19% on average. It also surpassed the performance of models pre-trained on ImageNet by around 11%, achieving state-of-the-art (SOTA) results in self-supervised pre-training.
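The abstract describes the trilateral loss as forcing the embeddings of three positive views to be highly correlated while reducing redundancy. The paper's exact formulation is not given here; the sketch below is a minimal, hypothetical interpretation that extends a Barlow-Twins-style redundancy-reduction objective to the three branch pairs (1,2), (1,3), (2,3). Function names (`cross_correlation`, `trilateral_loss`) and the weighting parameter `lam` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cross_correlation(z_a, z_b):
    """Cross-correlation matrix of two embedding batches (n_samples x n_dims).

    Each feature is standardized over the batch, so the diagonal of the
    result approaches 1 when the two views agree feature-by-feature.
    """
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-6)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-6)
    return (z_a.T @ z_b) / z_a.shape[0]

def trilateral_loss(z1, z2, z3, lam=5e-3):
    """Hypothetical trilateral redundancy-reduction loss over three branches.

    For every pair of branches, pull the diagonal of the cross-correlation
    matrix toward 1 (invariance) and push the off-diagonal entries toward 0
    (redundancy reduction), then sum the three pairwise terms.
    """
    total = 0.0
    for a, b in ((z1, z2), (z1, z3), (z2, z3)):
        c = cross_correlation(a, b)
        d = np.diag(c)
        on_diag = ((d - 1.0) ** 2).sum()          # invariance term
        off_diag = (c ** 2).sum() - (d ** 2).sum()  # decorrelation term
        total += on_diag + lam * off_diag
    return total
```

Under this reading, three mutually exclusive augmented views of the same image (as produced by Tri-MExA) would be encoded by a shared backbone, and the loss would be minimized when the three embeddings agree on the diagonal while individual embedding dimensions stay decorrelated.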
Keywords: street-view imagery; self-supervised learning; redundancy reduction; urban scene classification; urban functional zone identification

Share and Cite

MDPI and ACS Style

Zhao, K.; Li, J.; Xie, S.; Zhou, L.; He, W.; Chen, X. Self-Supervised Learning with Trilateral Redundancy Reduction for Urban Functional Zone Identification Using Street-View Imagery. Sensors 2025, 25, 1504. https://doi.org/10.3390/s25051504


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.