Article

SE-ResUNet Using Feature Combinations: A Deep Learning Framework for Accurate Mountainous Cropland Extraction Using Multi-Source Remote Sensing Data

by Ling Xiao 1,2, Jiasheng Wang 1,2,*, Kun Yang 1,2, Hui Zhou 1,2, Qianwen Meng 1,2, Yue He 1,2 and Siyi Shen 1,2

1 Faculty of Geography, Yunnan Normal University, Kunming 650500, China
2 The Engineering Research Center of GIS Technology in Western China of Ministry of Education of China, Yunnan Normal University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Land 2025, 14(5), 937; https://doi.org/10.3390/land14050937
Submission received: 12 March 2025 / Revised: 18 April 2025 / Accepted: 23 April 2025 / Published: 25 April 2025
(This article belongs to the Section Land Innovations – Data and Machine Learning)

Abstract

The accurate extraction of mountainous cropland from remote sensing images remains challenging due to fragmented plots, irregular shapes, and terrain-induced shadows. To address this, we propose a deep learning framework, SE-ResUNet, that integrates Squeeze-and-Excitation (SE) modules into ResUNet to enhance feature representation. Leveraging Sentinel-1/2 imagery and DEM data, we fuse vegetation indices (NDVI/EVI), terrain features (Slope/TRI), and SAR polarization characteristics into 3-channel inputs, optimizing the network’s discriminative capacity. Comparative experiments on network architectures, feature combinations, and terrain conditions demonstrated the superiority of our approach. The results showed the following: (1) feature fusion (NDVI + TerrainIndex + SAR) achieved the best performance (OA: 97.11%; F1-score: 96.41%; IoU: 93.06%), significantly reducing shadow/cloud interference. (2) SE-ResUNet outperformed ResUNet by 3.53% in OA and 8.09% in IoU, highlighting its ability to recalibrate channel-wise features and refine edge details. (3) The model exhibited robustness across diverse slopes/aspects (OA > 93.5%), mitigating terrain-induced misclassifications. This study provides a scalable solution for mountainous cropland mapping, supporting precision agriculture and sustainable land management.
Keywords: feature combination; SE module; ResUNet network; mountain cropland; remote sensing
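
The abstract is the only technical description available in this early access version. The sketch below is a minimal, assumption-based illustration of the channel-recalibration idea it describes: a Squeeze-and-Excitation module placed on the main branch of a residual unit, fed with a 3-channel stack of vegetation, terrain, and SAR features. It assumes a standard PyTorch-style SE block; the class names SEBlock and SEResidualBlock, the reduction ratio, and the input preparation are illustrative and are not taken from the authors' code.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: recalibrates channel responses via global pooling and gating."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),                      # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                           # reweight feature maps channel-wise


class SEResidualBlock(nn.Module):
    """Residual unit with an SE module on its main branch (illustrative SE-ResUNet building block)."""
    def __init__(self, in_ch, out_ch, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.se = SEBlock(out_ch, reduction)
        self.skip = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        return torch.relu(self.se(self.body(x)) + self.skip(x))


# Hypothetical 3-channel input: NDVI, a terrain index (e.g., slope or TRI), and SAR backscatter,
# each assumed to be resampled to a common grid and normalized before stacking.
x = torch.randn(1, 3, 256, 256)               # (batch, [NDVI, terrain, SAR], H, W)
out = SEResidualBlock(3, 64)(x)
print(out.shape)                               # torch.Size([1, 64, 256, 256])
```

The SE gate learns one weight per channel from globally pooled context, which is the mechanism the abstract credits with recalibrating channel-wise features and refining field-edge details.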

