Article

Vegetation Classification in a Mountain–Plain Transition Zone in the Sichuan Basin, China

1
State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu 610059, China
2
College of Geography and Planning, Chengdu University of Technology, Chengdu 610059, China
3
Department of Geography, Environment and Population, The University of Adelaide, Adelaide 5000, Australia
4
Laboratory for Interdisciplinary Spatial Analysis (LISA), Department of Land Economy, University of Cambridge, Cambridge CB3 9EP, UK
5
School of Statistics, Dongbei University of Finance and Economics, Dalian 116025, China
6
School of Land Resources and Surveying and Mapping Engineering, Shandong Agricultural and Engineering University, Jinan 250100, China
*
Authors to whom correspondence should be addressed.
Land 2025, 14(1), 184; https://doi.org/10.3390/land14010184
Submission received: 16 December 2024 / Revised: 10 January 2025 / Accepted: 14 January 2025 / Published: 17 January 2025
(This article belongs to the Special Issue Vegetation Cover Changes Monitoring Using Remote Sensing Data)

Abstract

Developing an effective vegetation classification method for mountain–plain transition zones is critical for understanding ecological patterns, evaluating ecosystem services, and guiding conservation efforts. Existing methods perform well in mountainous and plain areas but lack verification in mountain–plain transition zones. This study utilized terrain data and Sentinel-1 and Sentinel-2 imagery to extract topographic, spectral, texture, and SAR features as well as the vegetation index. By combining feature sets and applying feature elimination algorithms, the classification performance of one-dimensional convolutional neural networks (1D-CNNs), Random Forest (RF), and Multilayer Perceptron (MLP) was evaluated to determine the optimal feature combinations and methods. The results show the following: (1) multi-feature combinations, especially spectral and topographic features, significantly improved classification accuracy; (2) Recursive Feature Elimination based on Random Forest (RF-RFE) outperformed ReliefF in feature selection, identifying more representative features; (3) all three algorithms performed well, with consistent spatial results. The MLP algorithm achieved the best overall accuracy (OA: 81.65%, Kappa: 77.75%), demonstrating robustness and lower dependence on feature quantity. This study presents an efficient and robust vegetation classification workflow, verifies its applicability in mountain–plain transition zones, and provides valuable insights for small-region vegetation classification under similar topographic conditions globally.

Graphical Abstract

1. Introduction

Mountainous areas account for approximately a quarter of the Earth’s total land area [1]. A mountain–plain transition zone consists of plains (with an elevation below 200 m above sea level), hills (200–500 m), and mountains. Mountains can be further categorized into low mountains (50–500 m), medium mountains (500–2500 m, with terrain relief exceeding 100 m or a slope gradient steeper than 25°), and high mountains (>2500 m). A mountain–plain transition zone is usually characterized by diverse vegetation types due to the distinct convergence of horizontal and vertical land covers [2,3]. Cropland, grassland, shrubland, and forests play significant roles in sustainable agriculture, soil and water conservation, climate regulation, and biodiversity protection [4]. However, unique stepped geomorphic structures in mountain–plain transition zones can lead to poor surface material stability, fragile ecosystems, and frequent mountain hazards. Human activities such as deforestation and agricultural expansion further disrupt ecosystems, exacerbating the tension between economic development and ecological conservation in many transition zones [5,6]. For instance, deforestation in mountainous areas often leads to significant soil erosion, reducing soil fertility and water retention capacity, which in turn affects downstream agricultural productivity and water resources. Similarly, agricultural expansion in plains, often associated with monoculture farming, disrupts biodiversity and alters carbon cycling processes. These land-use changes significantly impact ecological indicators such as soil stability, vegetation health, and water quality. To protect and manage resources effectively in such zones, advanced high-resolution vegetation mapping methods are urgently needed to identify and monitor various vegetation types more accurately.
However, complex terrains, steep slopes, and divergent vegetation types in mountain–plain transitional zones make traditional field surveys infeasible, as field surveys require extensive human and financial resources and thus often fail to provide timely real-world data [7]. To address methodological challenges, remote sensing techniques have unique advantages in terms of multi-temporal and multispectral data acquisition, cost-effectiveness, and efficiency and thus have become an increasingly effective tool for vegetation classification [8].
Applying remote sensing techniques to vegetation identification involves extensive uses of information sourced from MODIS [9], Landsat [10], Sentinel [11], Gaofen (GF) satellite data [12], and UAV imagery [13]. MODIS, with a high temporal but low spatial resolution, is suitable for large-scale dynamic monitoring when combined with other datasets. Landsat provides a medium-scale spatial resolution and is thus well suited for vegetation classification over a large area [14,15]. Sentinel series data, with their high spatial resolution (10 m) and frequent revisit cycle (5 days), offer significant advantages in deciphering small-scale imagery, monitoring fine changes in vegetation, and addressing challenges in both mountainous and plain areas [16]. The fine spatial resolution of Sentinel-2 enables improved vegetation classification in terrain-shadowed mountainous regions, while the radar capabilities of Sentinel-1 facilitate effective monitoring under cloudy conditions [7,17]. These advantages, combined with global coverage and free accessibility, ensure the reliability and applicability of Sentinel data for vegetation classification in mountain–plain transition zones.
Remote sensing classification mainly consists of Object-Based Image Analysis (OBIA) and Pixel-Based Classification (PBC) [18,19]. Studies have shown that, compared with traditional pixel-based methods, object-based methods (which utilize features such as shape and texture) better preserve spatial information and reduce noise in classification results. The OBIA methods often produce more accurate classification and perform well in vegetation classification across diverse geographic regions, comprising mountains, plains, and watersheds [20,21].
Including too many features can reduce algorithm efficiency and dilute the importance of key features, causing decreased accuracy in vegetation recognition [22]. To address this challenge, researchers have attempted to combine different features to achieve better classification performance [23,24]. While such combinations can somewhat improve accuracy, they can induce new problems such as collinearity and data redundancy [25]. To enhance feature selection efficiency and identify key features, feature selection algorithms like Recursive Feature Elimination based on Random Forest (RF-RFE) and ReliefF have been widely applied to studies of grasslands, forests, and urban vegetation types [25,26,27,28]. However, no studies have yet verified which feature selection method performs best for vegetation type recognition in the mountain–plain transition zones.
Remote sensing-based vegetation recognition methods primarily include supervised, unsupervised, and machine learning approaches [29]. Supervised and unsupervised methods are widely used. However, achieving high-accuracy classification in the complex terrain of a mountain–plain transition zone faces challenges not only in data acquisition but also in terms of the limitations in classification algorithms, making it even harder to meet the demand for fine-grained vegetation classification [30].
Machine learning methods, known for their outstanding capabilities in nonlinear data processing and feature extraction, have been widely applied in vegetation classification [31]. In plains, methods commonly used for vegetation extraction include Random Forest (RF) [32,33], convolutional neural networks (CNNs), Support Vector Machine (SVM), and Backpropagation Neural Networks (BPNNs), which are mainly applied to grassland and crop identification [34,35,36,37]. In mountainous regions, methods of RF, one-dimensional convolutional neural networks (1D-CNNs) [38,39], hierarchical classifiers, and Gradient Tree Boosting (GTB) are employed to classify various forest types [7,40,41]. However, the performance of these methods in the complex terrains of mountain–plain transition zones remains unclear, highlighting the need to explore the potential applications of machine learning methods in such transitional landscapes.
To address this, we selected a 1D-CNN [38,39], RF [32,33], and Multilayer Perceptron (MLP) [42,43] for testing. These algorithms demonstrate distinct strengths under complex terrain conditions and excellent performance in mountain vegetation classification. For example, MLP excels at capturing global features in continuous vegetation transitions; RF is highly effective in terms of noise resistance and efficient feature selection, making it suitable for distinguishing vegetation types with subtle spectral differences; and a 1D-CNN, with its ability to extract local spatial patterns, is particularly well suited to areas with complex terrains and high vegetation heterogeneity. These characteristics make the selected algorithms more capable of meeting the demands for efficient vegetation classification in mountain–plain transition zones.
Mountain–plain transition zones pose several challenges for remote sensing-based vegetation mapping. First, the availability of data sources is limited due to frequent cloud and fog coverage, and thus data are susceptible to terrain shadows. For instance, in the mountainous regions of Mianzhu City, persistent cloud cover during the rainy season leads to incomplete satellite imagery, necessitating additional cloud removal and terrain correction, which dramatically increases the complexity of data preprocessing [44,45]. Second, vegetation types in mountainous regions are primarily influenced by elevation and climate, with forests being a good example [46,47]. In contrast, vegetation in plains (predominantly croplands and grasslands) is mainly influenced by human settlements and policy factors [48]. For example, forest patches in the mountains of Mianzhu often transition to grasslands or croplands in the adjacent plains due to land-use policies, resulting in highly heterogeneous vegetation patterns. Therefore, selecting features that account for both mountain and plain vegetation characteristics remains a significant challenge. Moreover, the applicability of classification algorithms is limited. While these methods have achieved sound results in standalone mountain or plain areas, their effectiveness in a more complicated terrain like a mountain–plain transition zone remains to be verified.
To address these challenges, this study selected Mianzhu City of Sichuan province in China as the case study area. Mianzhu is geographically characterized by both mountainous and plain terrains. Using an object-based classification approach and Sentinel-1, Sentinel-2, and DEM data, we extracted spectral, texture, topographic, and SAR features as well as the vegetation index. By integrating three machine learning algorithms (1D-CNN, MLP, and RF), we systematically evaluated the performance of different feature combinations and feature selection methods (RF-RFE and ReliefF) for vegetation classification in this mountain–plain transition zone. This study contributes to the literature in three aspects. First, it analyzes the effects of different feature combinations on vegetation classification in a mountain–plain transition zone. Second, it compares the performance of RF-RFE and ReliefF feature selection algorithms to determine the optimal feature combination for vegetation classification in the study area. Third, it evaluates the performance of 1D-CNN, MLP, and RF in vegetation classification within the mountain–plain transition zone, providing both theoretical and practical support for high-accuracy vegetation classification in complex terrains. Additionally, it addresses the Special Issue’s question “How can machine learning and artificial intelligence improve the analysis and interpretation of remote sensing data for vegetation monitoring?”

2. Study Area and Data

2.1. Study Area

To ensure the applicability of this research, the administrative boundary was used as the geographical unit for the study. The study area, Mianzhu City, is located at the junction of the Sichuan Basin and the Daba Mountains (103°54′–104°20′ E, 31°09′–31°42′ N) (Figure 1). It represents a typical transition zone between mountainous regions and plains. The terrain slopes from northwest to southeast, encompassing an area of 1246.20 km2. The plains are primarily located in the southeast, accounting for 49.2% of the total area of Mianzhu City, while the rest of the area (50.8%) consists of mountainous regions in the northwest. The elevation ranges from 504 to 4406 m (with a relative height difference of 3902 m). The study area belongs to the subtropical monsoon climate zone, with an average annual temperature of 16 °C. The mean air temperature is 5.4 °C in the coldest month (January) and 25.3 °C in the hottest month (July). The average annual precipitation is 968.4 mm. The soil types in Mianzhu City are influenced by topography and climate, forming distinct vertical zonation during their development. From high to low altitudes, the region includes mountain dark brown soil, mountain brown soil, mountain podzolic soil, mountain yellow soil, mountain yellow-brown soil, and paddy soil. The city’s complex geographic features, warm and rainy climate, and fertile soil contribute to its rich flora [49].
The mountainous areas in the northwest are mainly covered by evergreen broadleaf and coniferous forests, and shrublands, while the plains in the southeast are mainly cultivated with grain crops. Some nationally protected areas such as the Giant Panda National Park are distributed in the study area, and thus the vegetation resources are well preserved. Mianzhu City has a forest coverage rate of 51%; its predominant land-use categories include forest (617.03 km2), agricultural land (301.87 km2), construction land (178.17 km2), and aquatic bodies (38.53 km2). The development intensity of territorial space is 14.3% [50]. This unique geographic setting provides an ideal basis for investigating ecological and vegetation classification in transition zones.

2.2. Data Source and Data Processing

The sample data primarily originate from the Forest Resources Type II survey data [51], the Third National Land Survey [52], and the vegetation map of China (1:1,000,000) [53,54], which were completed in 2018–2020. Because the data come from various sources, this study intersected the multiple datasets, defining areas classified as the same vegetation type across all sources as sample plots. After excluding bare land, the study focused on six vegetation types: shrubland, evergreen broadleaf forest, deciduous broadleaf forest, coniferous forest, grassland, and cultivated plants. The data used in this study were derived from coarse-scale mapping, which introduced a certain degree of misclassification. To enhance the accuracy of the samples, we conducted further filtering and correction based on high-resolution Google Earth imagery from the winter and summer of 2019 in Mianzhu City. Specifically, we removed sample points inconsistent with the vegetation types in the actual imagery, including non-vegetation points identified in the imagery; we differentiated between deciduous broadleaf forest and evergreen broadleaf forest based on winter imagery; and we removed cropland points misclassified as grassland. Although existing mapping efforts are available for the region, their low spatial resolution limits their applicability to fine-scale vegetation studies. This study aims to generate a higher-resolution vegetation type distribution map to better support regional environmental management.
Sample points were uniformly selected at 250 m intervals within the plots, resulting in a total of 28,533 points, including 5751 for shrubland, 4339 for evergreen broadleaf forest (EBF), 4898 for deciduous broadleaf forest (DBF), 5134 for coniferous forest (CF), 2647 for grassland, and 5764 for cropland. The sample counts for each vegetation type were proportional to their area. Of these, 60% (17,119) were used as the training set, 20% (5707) as the validation set, and 20% (5707) as the test set. The reader can refer to Table 1 for the specific sample counts of each vegetation type in the training, validation, and test sets, and see Figure 1c for the distribution.
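The 60/20/20 partition described above can be sketched with two successive stratified splits in scikit-learn (the library this study used for data splitting). The data here are random stand-ins, not the actual sample points:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the sample points: 18 features, 6 vegetation classes.
rng = np.random.default_rng(0)
X = rng.random((1000, 18))
y = rng.integers(0, 6, size=1000)

# First split off 60% for training, then halve the remainder into
# validation and test sets (20% each), with a fixed seed for reproducibility.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=42, stratify=y_rest)
```

Stratification keeps each vegetation class represented in the training, validation, and test sets roughly in proportion to its share of the samples.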
Sentinel-2 data, provided by the European Space Agency (ESA), includes the Sentinel-2A satellite launched in 2015 and the Sentinel-2B satellite launched in 2017. They cover 13 multispectral bands with spatial resolutions of 10 m, 20 m, and 60 m, spanning the spectrum from visible to shortwave infrared [55]. This study used atmospherically corrected Sentinel-2 L2A data. To enhance the accuracy of vegetation classification, built on existing expert knowledge [7], we deployed multi-temporal imagery (summer and winter) for data analysis. Using Google Earth Engine (GEE, https://earthengine.google.com/, accessed on 15 April 2024) and Sentinel Application Platform (SNAP, version 7.0), the red-edge vegetation and shortwave infrared bands were resampled to a 10 m spatial resolution using the nearest-neighbor method. Spectral reflectance, vegetation indices, and texture features were extracted, normalized, and recategorized into spectral features, vegetation indices, and texture features. To ensure temporal consistency with the ground reference datasets and minimize temporal discrepancies between satellite observations and ground information, we selected Sentinel-2 imagery acquired on 16 December 2018 (winter) and 1 July 2019 (summer). The study area was mosaicked and clipped using four Sentinel-2 images. After careful consideration of cloud cover, the average cloud coverage for winter imagery was 2.91%, and for summer imagery it was 2.61%. These low cloud coverage rates ensured high-quality data for subsequent processing and analysis.
Sentinel-1 data, provided by the ESA, include C-band Synthetic Aperture Radar (SAR) imagery from Sentinel-1A (launched in 2014) and Sentinel-1B (launched in 2016), featuring VV and VH polarization modes. The key advantage of Sentinel-1 lies in its ability to provide continuous imagery regardless of daytime, nighttime, or weather conditions [56]. The Level-1 Ground Range Detected (GRD) data used in this study were obtained through GEE (https://earthengine.google.com/, accessed on 15 April 2024). These data underwent the following preprocessing steps: (1) the application of satellite orbit data to remove systematic errors; (2) GRD border noise removal and invalid data clearance; (3) thermal noise removal; (4) the conversion of digital pixel values to radiometrically calibrated SAR backscatter; (5) terrain correction using a DEM; and (6) decibel conversion of backscatter coefficients [57]. After preprocessing, the VV and VH backscatter coefficients were used as input data for the classification algorithms. To ensure temporal consistency with Sentinel-2 imagery acquisition, Sentinel-1 winter imagery was acquired on 26 December 2018 and summer imagery on 1 July 2019.
The digital elevation model (DEM) data are derived from the ASTER GDEM dataset, with a spatial resolution of 30 m. Slope and aspect were extracted from the DEM data, normalized, and used as topographic features. Additionally, the DEM raster was converted to a point grid with 30 m spacing to cover the study area, facilitating subsequent vegetation predictions across the entire region.
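Deriving slope and aspect from a DEM grid amounts to computing local elevation gradients. The sketch below is a simplified numpy stand-in for standard GIS terrain tools (the aspect convention shown is one common choice, not necessarily the one used in this study):

```python
import numpy as np

def slope_aspect(dem, cell_size=30.0):
    """Approximate slope (degrees) and aspect (degrees clockwise from north)
    from a DEM grid; a simplified stand-in for GIS terrain tools."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # elevation change per metre
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# A flat DEM should yield zero slope everywhere.
flat = np.full((5, 5), 1000.0)
slope_flat, _ = slope_aspect(flat)
```

A plane rising 30 m per 30 m cell would, under this approximation, yield a uniform 45° slope.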
The six datasets mentioned above cover multidimensional information of the study area, such as terrain features, vegetation types, and image characteristics. These datasets exhibit high consistency in terms of temporal resolution, spatial resolution, and classification accuracy, meeting the requirements of this study. Detailed accuracy and consistency analyses are provided in Appendix A.
To support data processing, model implementation, and evaluation, we used several Python libraries in Visual Studio Code (version 1.95.0). Scikit-learn was employed for implementing RF and MLP algorithms, as well as for data splitting (train_test_split), feature scaling (MinMaxScaler), and calculating evaluation metrics such as accuracy_score, cohen_kappa_score, and confusion_matrix. PyTorch was applied to construct and train the 1D-CNN algorithm, including modules for model definition, data loaders (torch.utils.data), and optimization (torch.optim.Adam). Additionally, pandas and numpy were used for data manipulation and numerical operations, while matplotlib and seaborn were utilized for visualizing data distributions, feature importance, and confusion matrices. These libraries were selected for their reliability, efficiency, and ability to streamline machine learning and deep learning workflows.

2.3. Feature Engineering

Constructing robust feature engineering is a critical step for extracting high-accuracy information from remote sensing imagery. It enables the extraction of relevant and differentiated information from heterogeneous data sources and encapsulates the distinct characteristics exhibited by divergent vegetation types. In mountainous regions, vegetation distribution is primarily influenced by elevation and climate, with vegetation types transitioning from broadleaf forests to coniferous forests and shrublands as elevation rises [46]. In contrast, the distribution of vegetation across the plains is constrained by land-use management practices and water resource availability; the dominant vegetation consists of cultivated plants and grasslands. Spectral features are crucial for distinguishing these two vegetation types [25]. To improve the accuracy of vegetation classification in the study area, using machine learning algorithms, we constructed features along three dimensions: terrain, texture, and spectral features (Table 2). Terrain features were derived from DEM data, with slope and aspect calculated from the DEM. Texture features, which vary significantly across different vegetation types, have been proven to play a critical role in improving the accuracy of vegetation classification [28]. To reduce the dimensionality of optical bands, Principal Component Analysis (PCA) was applied to extract key information from preprocessed Sentinel-2 data. The top three principal components (PC1, PC2, and PC3) with cumulative eigenvalues exceeding 95% were selected for texture feature computation [22]. A total of 80 features were calculated (see Table 3). All selected features were normalized to prevent differences in feature magnitudes and value ranges from affecting the convergence speed and classification accuracy of the three machine learning algorithms [7].
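The PCA step above reduces the optical bands to a few components that retain most of the variance. A minimal sketch with scikit-learn, using synthetic correlated "bands" rather than the actual Sentinel-2 stack:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stack of preprocessed Sentinel-2 band values:
# rows are pixels, columns are spectral bands.
rng = np.random.default_rng(0)
bands = rng.normal(size=(500, 10))
# Make the bands correlated so a few components dominate,
# as is typical of optical imagery.
bands[:, 1:] += bands[:, [0]] * np.linspace(2.0, 0.5, 9)

pca = PCA(n_components=3)
components = pca.fit_transform(bands)            # PC1-PC3 per pixel
explained = pca.explained_variance_ratio_.sum()  # cumulative variance share
```

In the actual workflow one would check that `explained` exceeds 0.95 before using PC1–PC3 for texture computation, adding components otherwise.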
Spectral features were based on the bands of Sentinel-1 and Sentinel-2, and vegetation indices were calculated using Sentinel-2 data.
The list of features is shown in Table 3. Terrain features are critical for understanding vegetation distribution in the mountain–plain transition zone. Including the correlation of elevation, slope, and aspect with the distribution of vegetation in the input dataset helps improve the classification accuracy through machine learning models. Different vegetation types exhibit distinct spectral features and vegetation indices. When used as input features, these distinctions enable machine learning models to effectively identify vegetation differences. Moreover, seasonal variations play a significant role. For example, evergreen broadleaf forests show minimal differences between winter and summer, while deciduous broadleaf forests exhibit substantial variations. Therefore, it is essential to incorporate both winter and summer spectral features and vegetation indices into the input dataset. For vegetation types with smaller seasonal differences, such as shrublands, coniferous forests, and evergreen broadleaf forests, calculating the difference between winter and summer vegetation indices and normalizing the results can enhance their differences, further improving model classification accuracy.
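The winter–summer index differencing described above can be sketched as follows, using NDVI (computed from the Sentinel-2 NIR and red bands) as a representative index and hypothetical reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    # NDVI from Sentinel-2 B8 (NIR) and B4 (red) reflectance.
    return (nir - red) / (nir + red)

# Hypothetical reflectance for three pixels: evergreen-like, deciduous-like,
# and cropland-like, in summer and winter.
summer = ndvi(np.array([0.45, 0.50, 0.40]), np.array([0.05, 0.06, 0.08]))
winter = ndvi(np.array([0.40, 0.20, 0.10]), np.array([0.06, 0.10, 0.09]))

# Summer-winter difference, min-max rescaled to [0, 1] so magnitude
# differences do not dominate other features during training.
diff = summer - winter
norm = (diff - diff.min()) / (diff.max() - diff.min())
```

Pixels with little seasonal change (e.g., evergreen forest) end up near 0 on the normalized scale, while strongly seasonal pixels (e.g., deciduous forest or cropland) end up near 1, amplifying exactly the contrast the paragraph describes.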
The introduction of texture features from both summer and winter (e.g., contrast, entropy, homogeneity, mean, and correlation) captures spatial heterogeneity and structural complexity in vegetation distribution. For example, shrublands typically exhibit high heterogeneity, while coniferous forests or grasslands may show greater homogeneity. Deciduous broadleaf forests demonstrate higher contrast and lower mean texture values in winter due to leaf fall, whereas evergreen broadleaf forests exhibit minimal seasonal changes in texture features. These texture features enhance the ability of machine learning models to differentiate vegetation types.
Additionally, Sentinel-1 SAR data complement Sentinel-2A data by providing supplementary information on structure, moisture, and seasonal variations. Integrating Sentinel-1 data into the machine learning model’s input further improves the classification accuracy of vegetation types in the mountain–plain transition zone.

3. Methods

Figure 2 presents the methodological framework for classifying vegetation types in mountain–plain transition zones using machine learning with remote sensing data. This framework integrates two feature selection algorithms (ReliefF and RF-RFE), three machine learning models (1D-CNN, MLP, and RF), and accuracy assessment through confusion matrices. Feature selection, vegetation type classification, and accuracy assessment were implemented in the Python programming language. This enables cost-effective vegetation classification and can be applied to any mountain–plain transition zone, providing valuable support for environmental management.

3.1. Vegetation Classification Model

3.1.1. 1D-CNN Algorithm

The 1D-CNN algorithm, developed by LeCun et al. [66], is a deep learning algorithm designed for one-dimensional sequential data. It is particularly suited to feature extraction from time-series and signal data. Its core functionality involves using one-dimensional convolutional kernels to extract local features: patterns and dependencies within the sequence. A 1D-CNN consists of convolutional layers, pooling layers, and fully connected layers, where the convolutional layers extract features and the pooling layers reduce dimensionality to enhance generalization. The algorithm relies on backpropagation and gradient descent to optimize the weights of the convolutional kernels.
1D-CNNs excel in sequential data analysis and are widely used in signal processing, bioinformatics, and natural language processing. Compared with traditional fully connected neural networks, 1D-CNNs are more parameter-efficient and can automatically identify local patterns. They exhibit strong noise resistance and are computationally efficient, making them well suited for high-dimensional and long-sequence data [7]. In this study, the input consisted of 18 features with a stride of 1. The network architecture included three convolutional layers (with a kernel size of 2), one fully connected layer, and a Softmax layer (Figure 3).
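The architecture described above (18 input features, three convolutional layers with kernel size 2 and stride 1, one fully connected layer, and a softmax output over six classes) can be sketched in PyTorch, the framework used in this study. The channel widths below are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class VegCNN(nn.Module):
    """Sketch of the 1D-CNN described above; channel widths are illustrative."""
    def __init__(self, n_features=18, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=2), nn.ReLU(),   # length 18 -> 17
            nn.Conv1d(16, 32, kernel_size=2), nn.ReLU(),  # length 17 -> 16
            nn.Conv1d(32, 64, kernel_size=2), nn.ReLU(),  # length 16 -> 15
        )
        # Each kernel-size-2 convolution shortens the sequence by 1.
        self.fc = nn.Linear(64 * (n_features - 3), n_classes)

    def forward(self, x):                  # x: (batch, 1, n_features)
        z = self.conv(x).flatten(1)
        return torch.softmax(self.fc(z), dim=1)

model = VegCNN()
probs = model(torch.randn(4, 1, 18))       # four sample pixels
```

Training would proceed with `torch.optim.Adam` and a cross-entropy loss, as indicated in Section 2.2 (in practice one would return raw logits and let `nn.CrossEntropyLoss` apply the softmax internally).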

3.1.2. MLP Algorithm

The MLP algorithm is a powerful feed-forward neural network designed to perform classification or regression tasks by learning the nonlinear mapping relationships in data [67]. A typical MLP consists of an input layer, hidden (intermediate) layers, and an output (last) layer (Figure 4). The input layer receives feature information from the data. The hidden layers combine and extract features, while the output layer generates the model’s predictions. In an MLP, each neuron connects to every neuron in the previous layer, calculating a weighted sum of the input signals, which then undergoes a nonlinear transformation through an activation function to produce an output value. The output travels to the next layer’s neurons, passing through the network until reaching the output layer. During training, the MLP uses backpropagation to optimize weights by minimizing the loss function, adjusting weights to enable predictions closer to actual labels.
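A minimal MLP of this form can be built with scikit-learn's `MLPClassifier`, which the study used. The data and hidden-layer sizes below are illustrative stand-ins:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: 18 input features, 6 vegetation classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 18))
y = rng.integers(0, 6, size=300)

# Two hidden layers; weights are fitted by backpropagation, minimizing
# the cross-entropy loss as described above.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200,
                    random_state=42)
mlp.fit(X, y)
proba = mlp.predict_proba(X[:5])  # per-class probabilities for 5 samples
```

Because the toy labels are random, the fitted model here is meaningless; the sketch only shows the API shape (fit on features and labels, predict class probabilities).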

3.1.3. The RF Algorithm

The Random Forest (RF) algorithm is an ensemble learning algorithm that can significantly improve classification and regression accuracy by constructing multiple decision trees [68]. The algorithm employs bagging and random feature selection strategies, where it randomly samples from the original dataset and randomly selects a subset of features at each node to generate multiple decision trees. The final prediction is obtained by voting (for classification) or averaging (for regression) the predictions from all decision trees, thereby improving the model’s robustness and generalization capability. RF performs well with high-dimensional data and can provide feature importance, making it widely applicable to vegetation classification in ecology and remote sensing fields. The RF algorithm in this study generates 400 decision trees by randomly sampling the data, and the final prediction is determined through a voting mechanism (Figure 5).
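A sketch of the RF setup with scikit-learn, using the study's 400 trees but synthetic data in place of the real feature matrix:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature matrix; only features 0 and 1 carry the signal.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 18))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy two-class target

# 400 bootstrap-sampled trees, as in this study; the predicted class
# is decided by majority vote across trees.
rf = RandomForestClassifier(n_estimators=400, random_state=42)
rf.fit(X, y)

importances = rf.feature_importances_     # per-feature importance scores
```

The `feature_importances_` attribute (mean impurity decrease per feature, normalized to sum to 1) is what RF-RFE in Section 3.2 uses to rank and eliminate features.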

3.2. Feature Removal Algorithm

The quality of feature selection significantly impacts the design and performance of classifiers. Feature selection adheres to two principles: first, ensuring that the features can easily distinguish objects; second, using as few features as possible while maintaining accuracy. This study conducts a comparative analysis of two feature selection algorithms: ReliefF [69] and RF-RFE [70]. The ReliefF algorithm is a classical multivariate filter-based feature selection method that determines feature importance by evaluating its correlation with class labels. Specifically, ReliefF randomly selects sample points in the feature space and identifies the nearest same-class (“nearest neighbor”) and different-class (“nearest enemy”) samples for each point. It then computes the feature distances between the target sample and its neighbors, updating feature weights based on the distance differences to reflect their contribution to classification [26]. ReliefF is well suited for selecting features in multi-class and noisy data scenarios and is widely used to identify features with strong classification capabilities. Recursive Feature Elimination based on Random Forest (RF-RFE) embeds the RF algorithm in a Recursive Feature Elimination loop. This approach first uses RF to assess the importance of each feature. Subsequently, the least important features are recursively removed, and the Random Forest model is retrained at each iteration. The process iterates until a predefined number of features is reached or optimal accuracy is achieved. By leveraging the robustness and nonlinear capabilities of the Random Forest algorithm, RF-RFE effectively identifies the most predictive features, making it particularly suitable for high-dimensional data scenarios [71].
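The RF-RFE loop described above is available directly in scikit-learn as `RFE` wrapped around a Random Forest estimator. A sketch on synthetic data (tree count and feature counts here are illustrative, not the study's settings):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Synthetic stand-in: 20 features, of which only 5 are informative.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=0, random_state=42)

# RF-RFE: rank features with a Random Forest, drop the weakest (step=1),
# refit, and repeat until the requested number of features remains.
selector = RFE(estimator=RandomForestClassifier(n_estimators=100,
                                                random_state=42),
               n_features_to_select=8, step=1)
selector.fit(X, y)

kept = np.flatnonzero(selector.support_)  # indices of retained features
```

In practice the stopping point (`n_features_to_select`) is chosen by comparing validation accuracy across candidate feature counts rather than fixed in advance.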

3.3. Accuracy Assessment

The accuracy of prediction results determines the reliability of classification. This study uses a confusion matrix, from which several metrics are derived to evaluate vegetation classification results: Producer’s Accuracy (PA), User’s Accuracy (UA), Overall Accuracy (OA), Average Accuracy (AA), and the Kappa coefficient. The specific definitions of each metric are provided in Townsend (1971) [72].
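
All five metrics can be computed directly from the confusion matrix; a minimal NumPy sketch follows (the 3-class toy matrix is invented for illustration and assumes rows are reference classes and columns are predicted classes):

```python
import numpy as np

def accuracy_metrics(cm):
    """PA, UA, OA, AA, and Kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    pa = diag / cm.sum(axis=1)   # Producer's Accuracy per reference class
    ua = diag / cm.sum(axis=0)   # User's Accuracy per predicted class
    oa = diag.sum() / total      # Overall Accuracy
    aa = pa.mean()               # Average Accuracy
    # Expected chance agreement from the row/column marginals
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
    kappa = (oa - pe) / (1 - pe)
    return pa, ua, oa, aa, kappa

# Toy 3-class confusion matrix
cm = np.array([[50, 5, 0],
               [4, 40, 6],
               [1, 3, 45]])
pa, ua, oa, aa, kappa = accuracy_metrics(cm)
```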

4. Results

4.1. Classification Accuracy of Different Feature Combinations

In this study, spectral features, texture features, terrain features, vegetation indices, and SAR features were sequentially added to the three machine learning algorithms to investigate the impact of different feature combinations on classification accuracy and execution time (Table 4); execution time is reported in seconds (s). To minimize errors caused by sample selection variability, all three algorithms used the same training and validation samples, with the random seed fixed at 42.
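
This evaluation protocol can be sketched as follows; the feature groups, column indices, and combination labels are synthetic stand-ins rather than the study’s actual 80-feature stack:

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical feature groups: column indices into a stacked feature array
rng = np.random.default_rng(42)
X = rng.normal(size=(800, 12))
y = (X[:, 0] - X[:, 4] > 0).astype(int)
groups = {"spectral": [0, 1, 2], "terrain": [3, 4], "texture": [5, 6, 7]}

# Illustrative combination labels in the spirit of Table 4
combos = {"F1": ["spectral"],
          "F8": ["spectral", "terrain"],
          "F12": ["spectral", "texture", "terrain"]}

# One shared split (seed 42) so accuracies are comparable across combinations
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

results = {}
for name, parts in combos.items():
    cols = [c for g in parts for c in groups[g]]
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    t0 = time.perf_counter()
    clf.fit(X_tr[:, cols], y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te[:, cols]))
    results[name] = (acc, time.perf_counter() - t0)  # (accuracy, seconds)
```

The same loop generalizes to the other two classifiers by swapping the estimator.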
Based on Table 4 and Figure 6, among single features (F1–F5), spectral features (F1) achieved the highest classification accuracy in the 1D-CNN, MLP, and RF algorithms, reaching 75.29%, 79.74%, and 75.95%, respectively. Therefore, for efficiency, subsequent feature combinations primarily focused on spectral features to evaluate their accuracy in the three machine learning algorithms. Sentinel-1 features (F5) showed the lowest accuracy and were excluded from subsequent three-feature and four-feature combinations.
For two-feature combinations (F6–F9), the combination of spectral and terrain features (F8) achieved the highest classification accuracy in 1D-CNN, MLP, and RF, with accuracies of 76.96%, 82.13%, and 80.23%, respectively. Among three-feature combinations (F10–F12), the MLP algorithm performed best with the combination of spectral features, vegetation indices, and terrain features (F11), achieving an accuracy of 82.29%. Meanwhile, 1D-CNN and RF performed best with the combination of spectral, texture, and terrain features (F12), achieving accuracies of 77.75% and 79.95%, respectively.
For the four-feature (F13) and five-feature (F14) combinations, the MLP and RF algorithms performed best with the combination of spectral, terrain, and texture features and vegetation indices (F13), achieving accuracies of 80.29% and 79.13%, respectively. The 1D-CNN algorithm performed best with the five-feature combination (F14: spectral, terrain, texture, vegetation indices, and Sentinel-1 SAR), achieving an accuracy of 79.38%.
Overall, spectral features (F1) demonstrated significant importance across all algorithms, while the combination of terrain and spectral features (e.g., F8 and F11) further improved classification performance, highlighting the complementary role of the features. This result indicates that multi-feature combinations are beneficial for enhancing the accuracy of vegetation classification. The RF algorithm was highly sensitive to terrain and spectral features (F8), achieving its highest accuracy when these two features were combined. Building on this, adding vegetation indices (F11) enabled the MLP algorithm to achieve the highest accuracy. As the number of input features increased, the classification accuracy of the 1D-CNN algorithm improved consistently, reaching its best performance when all features (F14) were included. In terms of execution time (Table 4), the RF algorithm was the most efficient, while the 1D-CNN algorithm was the least efficient.

4.2. Comparison Between RF-RFE and ReliefF Feature Optimization Algorithms

This study used a total of 80 features. To avoid multicollinearity among features, those with correlations above 0.96 were removed, leaving 72 features. To further improve classification accuracy and computational efficiency, and to reduce redundancy within the same category, these 72 features were filtered using two feature selection methods. The RF-RFE algorithm ultimately retained 18 features. For better comparison, the ReliefF algorithm also retained the top 18 features based on weight rankings. The feature weights and retention results are reported in Table 5. RF-RFE retained four types of features, with the highest contributions from topographic and spectral features. ReliefF retained five types of features, with topographic features and vegetation indices showing the highest contributions.
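
A minimal sketch of such a correlation filter; the greedy keep-first rule and the toy features are illustrative assumptions, not the study’s exact procedure:

```python
import numpy as np

def drop_correlated(X, names, threshold=0.96):
    """Greedily keep features in order, dropping any feature whose absolute
    Pearson correlation with an already-kept feature exceeds the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

# Toy example: feature "b" is a near-copy of "a" and should be removed
rng = np.random.default_rng(1)
a = rng.normal(size=500)
X = np.column_stack([a,
                     a + 1e-3 * rng.normal(size=500),  # "b": near-duplicate
                     rng.normal(size=500)])             # "c": independent
X_f, kept = drop_correlated(X, ["a", "b", "c"], threshold=0.96)
```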
The difference in the retained features reflects the distinct selection mechanisms of the two algorithms. RF-RFE selects features by recursively eliminating the least important ones based on the importance scores calculated by the Random Forest model, prioritizing features that minimize redundancy and maximize model performance. In contrast, ReliefF ranks features by evaluating their ability to distinguish between neighboring samples of different classes, focusing on feature relevance rather than redundancy. These differing criteria explain the variation in the final selected features.
When the selected features were input into the three classification models to compare classification accuracy and runtime (Table 6), RF-RFE outperformed ReliefF. Specifically, the features selected by RF-RFE achieved higher classification accuracy across all classification methods, with particularly outstanding performance with the MLP algorithm. This indicates that RF-RFE better addresses the specific classification challenges in the mountain–plain transition zone of the study area. Therefore, in subsequent vegetation classification predictions for the entire study area, the 18 features retained by RF-RFE were used as the final feature set.

4.3. Comparison of Mountain Vegetation Mapping Based on Different Classifiers

Based on the 18 optimized features selected by the RF-RFE algorithm, final classification results were obtained for the training, testing, and prediction sets across the three classifiers (Figure 7). Cultivated vegetation is primarily distributed in the southern plains surrounding residential areas. Grasslands are mainly distributed along rivers and roads, with some areas near cultivated plants. Forests are mainly located in the northern mountainous regions, where, with increasing elevation, the vegetation transitions through evergreen broadleaved forest (EBF), deciduous broadleaved forest (DBF), shrubland, and coniferous forest (CF).
According to Figure 8, the most widely distributed vegetation type in Mianzhu City is cropland, followed by shrubland, CF, DBF, and EBF, with grassland covering the smallest area.

4.4. Accuracy Assessment of Classification Results

The overall classification accuracy of the three models and their classification performance for different vegetation types are detailed in Table 7. For shrubland, PA ranges from 82.95% to 90.18% and UA from 76.70% to 77.59% across the three models; because UA falls below PA, shrubland suffers more from commission (other types misclassified as shrubland) than from omission. RF is the model most prone to both types of error for shrubland. For EBF, PA ranges from 51.89% to 61.56% and UA from 69.05% to 71.73%, indicating that EBF is more likely to be omitted than committed; MLP and 1D-CNN are the models most prone to EBF omission and commission, respectively, while 1D-CNN and MLP show the fewest cases of each. For DBF, PA ranges from 75.47% to 80.25% and UA from 70.39% to 73.22%, indicating that commission is more likely than omission; RF and MLP are the models most prone to omission and commission, respectively, while 1D-CNN and MLP show the fewest cases of each. For CF, PA ranges from 78.98% to 82.64% and UA from 79.72% to 86.67%, indicating that omission is slightly more likely than commission; RF and 1D-CNN are the models most prone to omission and commission, respectively. For grassland, PA ranges from 75.84% to 88.16% and UA from 85.67% to 91.01%, showing that omission is more likely than commission; MLP and 1D-CNN are the models most prone to omission and commission, respectively, while 1D-CNN and MLP show the fewest cases of each. For cropland, PA ranges from 90.78% to 96.00% and UA from 91.81% to 96.73%; all three models classify cropland with consistently high accuracy, with omission only marginally more frequent than commission.
Overall, among the three models, MLP shows the best performance in terms of OA, the most balanced classification accuracy (AA), and the strongest model reliability (Kappa). Although each model varies in performance across vegetation types, shrubland, grassland, and cropland show relatively good classification results across all models. However, for the complex EBF category, all three models exhibit misclassifications, indicating that improving classification accuracy for this category remains a key focus for future research.
Figure 9 shows that cropland has the best classification results across all models (1D-CNN: 96.73%; MLP: 91.81%; RF: 93.59%), with the most frequent misclassification occurring as grassland. EBF has the poorest classification results across all models (1D-CNN: 51.89%; MLP: 61.56%; RF: 54.6%), with the most frequent misclassification as DBF.
Overall, 1D-CNN achieves the highest prediction accuracy for cropland but shows notable misclassifications between shrubland and EBF. MLP has better accuracy for grassland predictions but higher misclassification rates for EBF and DBF. RF exhibits greater misclassification between shrubland and EBF but has higher accuracy for DBF, grassland, and cropland.
To further compare differences among classifiers, two sub-regions within the study area were selected for localized analysis. One region is located in the mountainous northern part of the study area (Figure 10b), while the other is in the southwestern part, containing a mix of mountains and plains (Figure 10f). In the elliptical region of Figure 10b, the actual vegetation type is primarily EBF, with a smaller proportion of shrubland. However, in the results using the 1D-CNN algorithm (Figure 10c) and RF algorithm (Figure 10e), parts of EBF were misclassified as shrubland. In contrast, the MLP algorithm (Figure 10d) produced results more consistent with the actual vegetation distribution in the study area. In the elliptical region of Figure 10f, the actual vegetation type is predominantly cropland, with grassland confined to areas near rivers. However, the 1D-CNN algorithm (Figure 10g) misclassified parts of cropland as grassland. By comparison, the MLP algorithm (Figure 10h) and the RF algorithm (Figure 10i) accurately identified grassland and cropland, producing results that align better with the actual vegetation distribution in the study area.

5. Discussion

5.1. Discussion on Feature Combinations and Feature Selection Methods

This study investigated the impact of different feature combinations on vegetation classification performance. The results showed that spectral features made the highest contribution, consistent with the findings of Wang et al. [24] and Fu et al. [22]. Therefore, subsequent multi-feature combinations were based on spectral features, with other features added incrementally. The combination of spectral and terrain features, as observed by Wang et al. [24] in the Changbai Mountains and Vorovencii [40] in the Bucegi Mountains, demonstrated significant advantages due to their complementary roles. Spectral features capture the physiological and structural characteristics of vegetation, while terrain features provide the spatial context of the ecological environment. This combination compensates for the limitations of individual features, particularly in complex terrains.
However, adding texture and SAR features resulted in decreased classification accuracy, consistent with Fu et al. [22], who observed similar trends in the complex mountainous region of Jiuzhaigou. These features were found to have less importance than spectral features in the classification of vegetation types in mountain–plain transition zones, leading to data redundancy and increased noise. Additionally, vegetation indices showed minimal improvement in classification accuracy, aligning with findings from Mohammadpour et al. [73], Pouteau et al. [74], and Silveira et al. [75].
As the number of features increased, the classification accuracy of the MLP and RF algorithms generally decreased, whereas 1D-CNN exhibited the opposite trend. This finding aligns with Bai et al. [7], who demonstrated the suitability of 1D-CNN for handling high-dimensional data, with classification accuracy improving as more features were included.
This study revealed that, consistent with the findings of Fu et al. [22] and Chen et al. [76], the RF-RFE algorithm outperformed the ReliefF algorithm in feature selection for vegetation type recognition in mountain–plain transition zones [22,76]. Features such as DEM, Slope, B12_W1, ME_B3_S7, MNDVI_S1, NDBI_W1, NDBI_C1, and B6_S7 appeared in both feature selection methods, indicating that terrain features, spectral bands, vegetation indices, and texture features hold significant advantages in vegetation classification. In contrast, SAR features were retained only in the ReliefF algorithm and not in RF-RFE. This is because the ReliefF algorithm emphasizes identifying features based on their ability to distinguish classes, and SAR features provide supplemental information on structure, moisture, and seasonal changes, with strong distinguishing capabilities but a relatively low overall contribution rate (41.65%). On the other hand, RF-RFE selects features based on their weights, which led to the exclusion of SAR features [77].

5.2. Discussion on Classification Performance of Different Machine Learning Algorithms

In this study, 1D-CNN, MLP, and RF machine learning algorithms were employed to classify vegetation types in the mountain–plain transition zone of Mianzhu City. The results show that all three algorithms effectively distinguished shrubland, EBF, DBF, CF, grassland, and cropland, achieving satisfactory classification performance. However, the accuracy of EBF classification was slightly lower for all three methods, with frequent misclassification as shrubland or DBF. This issue is primarily attributed to three factors. (1) Spectral similarity: EBF and DBF exhibit similar spectral reflectance characteristics in winter due to leaf senescence or reduced photosynthesis, and in sparse vegetation areas EBF and shrubland may also display comparable reflectance properties [78]. (2) Minor differences in terrain features: shrubland and EBF share similar distribution patterns, particularly along riverbanks, making it difficult to differentiate them based solely on terrain features [79]. (3) Limitations of remote sensing resolution: the relatively coarse spatial resolution (10 m) of the imagery used makes it challenging to clearly distinguish EBF from adjacent shrubland or DBF, especially in mixed-pixel areas [80].
Comparing the classification accuracy of 1D-CNN, MLP, and RF in the mountain–plain transition zone, MLP achieved the highest OA of 81.65%, followed by 1D-CNN (OA: 80.41%), with RF’s value being slightly lower at 79.97% [39]. By enlarging two representative areas within the study region for localized analysis, it was observed that 1D-CNN and RF produced similar classification results, frequently confusing shrubland and EBF. In contrast, MLP’s results were more consistent with the actual vegetation distribution. The vegetation types in the mountain–plain transition zone exhibit substantial similarity and mixing. MLP’s strong ability to capture global features allows it to better handle this complex, continuously transitioning data [42]. In comparison, RF and 1D-CNN are more inclined to extract local features, which may limit their performance when dealing with features with strong global correlation or mixing.
Compared with other studies, these algorithms not only outperform traditional methods (such as SVM and GTB) in classification accuracy but also successfully address the complex terrain conditions and vegetation distribution patterns in mountain–plain transition zones by combining global feature extraction, local feature extraction, and noise resistance. These advantages provide valuable support for extending their application to vegetation classification in other similar regions.

5.3. Discussion on Study Limitations and Prospects

This study did not include a mixed-forest label, which introduces certain limitations in classifying mixed forests. Future research will consider adding samples for mixed forests (e.g., coniferous–broadleaf mixed forests and evergreen–deciduous broadleaf mixed forests) and redesigning the model’s architecture. Moreover, this study evaluated only three machine learning models. Future studies will consider incorporating more algorithms for evaluation and introducing attention mechanisms to improve model accuracy and reduce computational costs.
Additionally, this study utilized only four images from Sentinel-1 and Sentinel-2 (two from summer and two from winter; see Section 2.2 for details). The limited dataset may affect the generalizability of the results and restrict the ability to capture intra-seasonal variability. Future research will aim to include more temporal data to improve the robustness and accuracy of vegetation classification in mountain–plain transition zones. Mountain–plain transition zones are also highly affected by cloudy and rainy weather, making it challenging to construct a complete cloud-free image time series using Sentinel-2A or other satellite remote sensing imagery. Future research will consider adopting time-series fusion methods or integrating alternative data sources into the classification process. Furthermore, other datasets, such as tree height and phenological features, will be considered for integration to further improve classification performance.
In terms of accuracy assessment, this study employed the Kappa coefficient to characterize the classification of vegetation types, referencing its widespread use in remote sensing studies. However, research by Pontius and Millones (2011) [81] has highlighted the limitations of the Kappa coefficient in practical applications, showing that its effectiveness is moderate and its interpretation can be ambiguous. Future research will adopt simpler and more informative summary parameters, such as quantity disagreement and allocation disagreement, to summarize the confusion matrix and improve the precision of accuracy assessments.
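
Both summary parameters follow directly from the confusion matrix; a minimal sketch under the Pontius and Millones (2011) definitions, with an invented toy matrix for illustration:

```python
import numpy as np

def disagreement(cm):
    """Quantity and allocation disagreement (Pontius & Millones, 2011)
    from a confusion matrix (rows = reference, columns = predicted)."""
    p = np.asarray(cm, dtype=float)
    p = p / p.sum()  # normalize counts to proportions
    # Quantity disagreement: mismatch in class proportions between maps
    quantity = np.abs(p.sum(axis=0) - p.sum(axis=1)).sum() / 2
    # Total disagreement is everything off the diagonal
    total = 1.0 - np.trace(p)
    # Allocation disagreement: spatial misallocation given the quantities
    allocation = total - quantity
    return quantity, allocation

cm = np.array([[50, 5, 0],
               [4, 40, 6],
               [1, 3, 45]])
q, a = disagreement(cm)
```

Quantity and allocation disagreement sum to the total (off-diagonal) error, which makes them easier to interpret than Kappa.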
In terms of its specific applicable regions, this study is particularly applicable to regions characterized by stepped geomorphic structures, pronounced elevation gradients, and diverse vegetation types influenced by both natural and anthropogenic factors. These regions typically feature the coexistence of cropland, grassland, shrubland, and forests, such as EBF, DBF, and CF, distributed across various altitudinal zones. Nevertheless, we acknowledge that the methodology may require modifications for regions with markedly different characteristics. For example, areas dominated by unique vegetation types, such as tropical rainforests or arid desert vegetation, or regions with distinct climatic conditions or extreme elevation ranges, might necessitate additional feature engineering or algorithmic adjustments. Despite these potential limitations, the framework developed in this study provides a robust foundation for addressing similar challenges in regions with comparable geographical contexts worldwide.

6. Conclusions

This study screened key features with high contribution rates using different feature combinations and feature selection algorithms, and applied 1D-CNN, MLP, and RF algorithms to classify and predict vegetation types in a mountain–plain transition zone. There are three main conclusions. First, multi-feature combinations significantly improved model classification accuracy. Compared with single features, combining multiple features effectively enhanced classification accuracy. Among all algorithms, the combination of spectral and topographic features performed the best. Although the inclusion of indices, texture features, and SAR features expanded the data dimensions, the increase in data redundancy led to an overall decrease in classification accuracy. Second, RF-RFE is the optimal feature selection algorithm. In this study, the RF-RFE algorithm demonstrated excellent performance, ultimately identifying eighteen key features across four feature types. This selection strategy effectively reduced data redundancy and significantly improved the model’s classification accuracy. Third, all three algorithms performed well, but the MLP algorithm outperformed the others. In vegetation type classification, 1D-CNN, MLP, and RF algorithms all showed strong classification capabilities. Among them, the MLP algorithm achieved the highest OA of 81.65% and a Kappa coefficient of 77.75%. It exhibited outstanding performance, particularly in classifying shrubland, evergreen broadleaf forests, and grasslands. This study demonstrates that integrating remote sensing data with machine learning enables efficient and accurate vegetation classification in mountain–plain transition zones. The proposed method provides reliable technical support for ecological restoration, conservation planning, and sustainable land-use decision-making. 
By significantly reducing the cost and time of manual surveys, minimizing data redundancy, and lowering computational requirements, the method proves highly practical for regional management and highlights its potential for large-scale ecological assessments and policy development.

Author Contributions

Conceptualization, W.B. and Z.H.; methodology, W.B., Z.H. and S.W.; software, W.B., L.L. and S.W.; validation, W.B., X.W. and T.Z.; formal analysis, W.B. and T.Z.; investigation, X.W.; resources, Z.H. and L.H.; data curation, W.B. and X.W.; writing—original draft preparation, W.B. and T.Z.; writing—review and editing, Y.T., Z.H. and G.M.R.; visualization, W.B. and X.W.; supervision, Y.T., G.M.R. and Z.H.; project administration, Z.H. and L.H.; funding acquisition, Y.T. and G.M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grant No. 42301456), the Natural Science Foundation of Sichuan Province (Grant No. 2025ZNSFSC0321), the China Scholarship Council (Grant Nos. 202308510293 and 202308210362), the Independent Research Project of the State Key Laboratory of Geohazard Prevention and Geo-environment Protection (Grant No. SKLGP2022Z017), and the Australian Research Council Discovery Project (Grant No. DP230103060). Finally, we would like to thank the anonymous reviewers and the editors for their helpful comments that improved the manuscript substantially.

Data Availability Statement

The data supporting the results of this study are available upon request from the first author Wenqian Bai.

Acknowledgments

We thank the Plant Data Center of the Chinese Academy of Sciences (https://www.plantplus.cn) for providing data support. We also thank the Natural Resources and Planning Bureau of Mianzhu City and Sichuan Institute of Geological Surveying and Mapping Co., Ltd. for their data support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Six datasets were utilized in this study, covering multidimensional surface characteristics. The table below presents the source, acquisition time, spatial resolution, validation methods, and consistency analysis for each dataset.
Table A1. Accuracy and consistency analysis of datasets.

| Data Type | Data Source | Acquisition Time | Spatial Resolution | Accuracy Metrics | Validation Methods | Consistency Analysis |
|---|---|---|---|---|---|---|
| Third National Land Survey | Natural Resources and Planning Bureau of Mianzhu City | 2018–2019 | – | Classification accuracy: >90%; geometric accuracy: ±0.3 m | Internal and external checks, and hierarchical review | High accuracy and matches other datasets effectively |
| Second-Class Forest Survey | Natural Resources and Planning Bureau of Mianzhu City | 2019 | – | Classification accuracy: 85–90%; area error: <2% | Sample plot validation and remote sensing verification | Provides precise local information, consistent with vegetation map |
| The Vegetation Map of China (1:1,000,000) | CAS Vegetation Map Editorial Committee, 2021 | 2019–2020 | – | Classification accuracy: 85–90% | Field survey and remote sensing interpretation | Suitable for macro-scale analysis and consistent with other datasets |
| Sentinel-2 L2A | ESA Sentinel-2A/B Satellites | 2019 | 10 m/20 m | Geometric accuracy: ±10 m; radiometric calibration: ±5% | Ground control points (GCPs), spectral validation, and confusion matrix | High resolution and accuracy, and consistent with study needs |
| Sentinel-1 SAR | ESA Sentinel-1A/B Satellites | 2019 | 10 m | Geometric accuracy: ±10 m; classification accuracy: 70–90% | DEM correction and field radar scatter validation | Fills optical gaps and is vital for soil moisture analysis |
| ASTER GDEM | NASA and METI | 2011 | 30 m | Vertical accuracy: ±20 m (flat); ±30 m (mountainous) | Comparison with LiDAR and GPS RTK elevation data | Provides terrain information and is consistent with other datasets in terrain analysis |

References

  1. Zhou, Y.W. Spatial-Temporal Variation Of Soil Moisture And Its Influence On Erosion And Sediment Yield In A Mountainous Watershed. Ph.D. Thesis, Huazhong Agricultural University, Wuhan, China, 2023. [Google Scholar]
  2. Wang, J.Z.Y.; Zhang, J.; Peng, W.P.; Zhang, W. Soil moisture inversion in the transition zone of mountainous plain based on temperature plant-drought index method. Res. Soil Water Conserv. 2018, 25, 151–156+161. [Google Scholar]
  3. Zhang, D.H.R.; Gao, X.S. Spatial Characteristics and Potential Ecological Risk Factors of Heavy Metalsin Cultivated Land in the Transition Zone of a Mountain Plain. Environ. Sci. 2022, 43, 946–956. [Google Scholar]
  4. Ma, C.; Cui, Z.Z.; Li, T.T.; Peng, Y.Z. Spatio-temporal variation and its response to climate change of NDVI in the terrain transition zone, China. Acta Ecol. Sin. 2023, 43, 2141–2157. [Google Scholar]
  5. Liu, Y.G.; Wang, L.; Lu, Y.F.; Zou, Q.; Yang, L.; He, Y.; Gao, W.J.; Li, Q. Identification and optimization methods for delineating ecological red lines in Sichuan Province of southwest China. Ecol. Indic. 2023, 146, 109786. [Google Scholar] [CrossRef]
  6. Sun, M.Y.; Zhang, L.; Yang, R.J.; Li, X.H.; Zhang, Y.Y.; Lu, Y.R. Construction of an integrated framework for assessing ecological security and its application in Southwest China. Ecol. Indic. 2023, 148, 110074. [Google Scholar] [CrossRef]
  7. Bai, M.Y.; Peng, P.H.; Zhang, S.Q.; Wang, X.M.; Wang, X.; Wang, J.; Pellikka, P. Mountain Forest Type Classification Based on One-Dimensional Convolutional Neural Network. Forests 2023, 14, 1823. [Google Scholar] [CrossRef]
  8. Wang, B.G.; Yao, Y.H. Mountain Vegetation Classification Method Based on Multi-Channel Semantic Segmentation Model. Remote Sens. 2024, 16, 256. [Google Scholar] [CrossRef]
  9. Estel, S.; Kuemmerle, T.; Alcántara, C.; Levers, C.; Prishchepov, A.; Hostert, P. Mapping farmland abandonment and recultivation across Europe using MODIS NDVI time series. Remote Sens. Environ. 2015, 163, 312–325. [Google Scholar] [CrossRef]
  10. Wei, Y.; Wang, W.; Tang, X.; Li, H.; Hu, H.; Wang, X. Classification of Alpine Grasslands in Cold and High Altitudes Based on Multispectral Landsat-8 Images: A Case Study in Sanjiangyuan National Park, China. Remote Sens. 2022, 14, 3714. [Google Scholar] [CrossRef]
  11. Grabska, E.; Hostert, P.; Pflugmacher, D.; Ostapowicz, K. Forest Stand Species Mapping Using the Sentinel-2 Time Series. Remote Sens. 2019, 11, 1197. [Google Scholar] [CrossRef]
  12. Qi, W.J.; Yang, X.M. Mountain and Plain Vegetation Boundaries Extraction in Duchang County Province Jiangxi. J. Geo-Inf. Sci. 2017, 19, 559–569. [Google Scholar]
  13. Zangerl, U.; Haselberger, S.; Kraushaar, S. Classifying Sparse Vegetation in a Proglacial Valley Using UAV Imagery and Random Forest Algorithm. Remote Sens. 2022, 14, 4919. [Google Scholar] [CrossRef]
  14. Hu, J.M.; Shean, D. Improving Mountain Snow and Land Cover Mapping Using Very-High-Resolution (VHR) Optical Satellite Images and Random Forest Machine Learning Models. Remote Sens. 2022, 14, 4227. [Google Scholar] [CrossRef]
  15. Kou, W.L.; Liang, C.X.; Wei, L.L.; Hernandez, A.J.; Yang, X.J. Phenology-Based Method for Mapping Tropical Evergreen Forests by Integrating of MODIS and Landsat Imagery. Forests 2017, 8, 34. [Google Scholar] [CrossRef]
  16. Li, H.K.; Wang, L.J.; Xiao, S.S. Random forest classification of land use in hilly and mountaineous areas of southern China using multi-source remote sensing data. Trans. Chin. Soc. Agric. Eng. 2021, 37, 244–251. [Google Scholar]
  17. Wakulińska, M.; Marcinkowska-Ochtyra, A. Multi-Temporal Sentinel-2 Data in Classification of Mountain Vegetation. Remote Sens. 2020, 12, 2696. [Google Scholar] [CrossRef]
  18. Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
  19. Ozturk, M.Y.; Colkesen, I. A novel hybrid methodology integrating pixel- and object-based techniques for mapping land use and land cover from high-resolution satellite data. Int. J. Remote Sens. 2024, 45, 5640–5678. [Google Scholar] [CrossRef]
  20. Li, S.; Fei, X.; Chen, P.L.; Wang, Z.; Gao, Y.J.; Cheng, K.; Wang, H.L.; Zhang, Y.Z. Self-Adaptive-Filling Deep Convolutional Neural Network Classification Method for Mountain Vegetation Type Based on High Spatial Resolution Aerial Images. Remote Sens. 2024, 16, 31. [Google Scholar] [CrossRef]
  21. Rose, M.B.; Mills, M.; Franklin, J.; Larios, L. Mapping Fractional Vegetation Cover Using Unoccupied Aerial Vehicle Imagery to Guide Conservation of a Rare Riparian Shrub Ecosystem in Southern California. Remote Sens. 2023, 15, 5113. [Google Scholar] [CrossRef]
  22. Fu, X.L.; Zhou, W.Z.; Zhou, X.Y.; Li, F.; Hu, Y.C. Classifying Mountain Vegetation Types Using Object-Oriented Machine Learning Methods Based on Different Feature Combinations. Forests 2023, 14, 1624. [Google Scholar] [CrossRef]
  23. Vorovencii, I.; Dincă, L.; Crisan, V.; Postolache, R.G.; Codrean, C.L.; Catalin, C.; Gresita, C.I.; Chima, S.; Gavrilescu, I. Local-scale mapping of tree species in a lower mountain area using Sentinel-1 and-2 multitemporal images, vegetation indices, and topographic information. Front. For. Glob. Change 2023, 6, 1220253. [Google Scholar] [CrossRef]
  24. Wang, M.C.; Li, M.J.; Wang, F.Y.; Ji, X. Exploring the Optimal Feature Combination of Tree Species Classification by Fusing Multi-Feature and Multi-Temporal Sentinel-2 Data in Changbai Mountain. Forests 2022, 13, 1058. [Google Scholar] [CrossRef]
  25. Zhao, Y.F.; Zhu, W.W.; Wei, P.P.; Fang, P.; Zhang, X.W.; Yan, N.N.; Liu, W.J.; Zhao, H.; Wu, Q.R. Classification of Zambian grasslands using random forest feature importance selection during the optimal phenological period. Ecol. Indic. 2022, 135, 108529. [Google Scholar] [CrossRef]
  26. Cao, Q.Y.; Li, M.; Yang, G.B.; Tao, Q.; Luo, Y.P.; Wang, R.R.; Chen, P.F. Urban Vegetation Classification for Unmanned Aerial Vehicle Remote Sensing Combining Feature Engineering and Improved DeepLabV3+. Forests 2024, 15, 382. [Google Scholar] [CrossRef]
  27. Gao, G.L.; Du, H.Q.; Han, N.; Xu, X.J.; Sun, S.B.; Li, X.J. Mapping of Moso Bamboo Forest Using Object-Based Approach Based on the Optimal Features. Sci. Silvae Sin. 2016, 52, 77–85. [Google Scholar]
  28. Jiang, Y.F.; Qi, J.G.; Chen, B.W.; Yan, M.; Huang, L.J.; Zhang, L. Classification of Mangrove Species with UAV Hyperspectral Imagery and Machine Learning Methods. Remote Sens. Technol. Appl. 2021, 36, 1416–1424. [Google Scholar]
  29. Gumma, M.K.; Panjala, P.; Teluguntla, P. Mapping heterogeneous land use/land cover and crop types in Senegal using sentinel-2 data and machine learning algorithms. Int. J. Digit. Earth 2024, 17, 1. [Google Scholar]
  30. Du, Z.M.; Ma, W.M.; Zhou, Q.P.; Chen, H.; Deng-Zeng, Z.M.; Liu, J.Q. Research progress of vegetation recognition methods based on remote sensing technology. Ecol. Sci. 2022, 41, 222–229. [Google Scholar]
  31. Yang, C.; Wu, G.F.; Li, Q.G.; Wang, J.L.; Qu, L.Q.; Ding, K. Research Progress on Remote Sensing Classification of Vegetation. Geogr. Geo-Inf. Sci. 2018, 34, 24–32. [Google Scholar]
  32. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  33. Sheykhmousa, M.; Mahdianpari, M.; Ghanbari, H.; Mohammadimanesh, F.; Ghamisi, P.; Homayouni, S. Support Vector Machine Versus Random Forest for Remote Sensing Image Classification: A Meta-Analysis and Systematic Review. IEEE J-STARS 2020, 13, 6308–6325. [Google Scholar] [CrossRef]
  34. Wang, Y.C.; Wan, H.W.; Gao, J.X.; Hu, Z.W.; Sun, C.X.; Lu, N.; Zhang, Z.R. Identification of common native grassland plants in northern China using deep learning. Biodivers. Sci. 2024, 32, 23435. [Google Scholar] [CrossRef]
  35. Wan, Y.L.; Gong, Y.Q.; Xu, F.; Shi, W.Z.; Gao, W. A novel red-edge vegetable index for paddy rice mapping based on Sentinel-1/2 and GF-6 images. Int. J. Digit. Earth 2024, 17, 2398068. [Google Scholar] [CrossRef]
  36. She, B.; Hu, J.T.; Huang, L.S.; Zhu, M.Q.; Yin, Q.S. Mapping Soybean Planting Areas in Regions with Complex Planting Structures Using Machine Learning Models and Chinese GF-6 WFV Data. Agriculture 2024, 14, 231. [Google Scholar] [CrossRef]
  37. Li, F.; Li, B.; Yan, H.; Li, H.Q.; Lyu, P.F.; Bai, H.H. Advances and Prospects of Grassland Remote Sensing Research. Chin. J. Grassl. 2022, 44, 87–99. [Google Scholar]
  38. Hsieh, T.H.; Kiang, J.F. Comparison of CNN Algorithms on Hyperspectral Image Classification in Agricultural Lands. Sensors 2020, 20, 1734. [Google Scholar] [CrossRef] [PubMed]
  39. Zhang, H.K.; Roy, D.P.; Luo, D. Demonstration of large area land cover classification with a one-dimensional convolutional neural network applied to single pixel temporal metric percentiles. Remote Sens. Environ. 2023, 295, 113653. [Google Scholar] [CrossRef]
  40. Vorovencii, I. Assessing Various Scenarios of Multitemporal Sentinel-2 Imagery, Topographic Data, Texture Features, and Machine Learning Algorithms for Tree Species Identification. IEEE J-STARS 2024, 17, 15373–15392. [Google Scholar]
  41. Zhang, S.Q.; Peng, P.H.; Bai, M.Y.; Wang, X.; Zhang, L.F.; Hu, J.; Wang, M.L.; Wang, X.M.; Wang, J.; Zhang, D.H.; et al. Vegetation Subtype Classification of Evergreen Broad-Leaved Forests in Mountainous Areas Using a Hierarchy-Based Classifier. Remote Sens. 2023, 15, 3053. [Google Scholar] [CrossRef]
  42. He, X.; Chen, Y.S. Modifications of the Multi-Layer Perceptron for Hyperspectral Image Classification. Remote Sens. 2021, 13, 3547. [Google Scholar] [CrossRef]
  43. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  44. Lei, G.B.; Li, A.N.; Bian, J.H.; Zhang, Z.J.; Zhang, W.; Wu, B.F. An practical method for automatically identifying the evergreen and deciduous characteristic of forests at mountainous areas: A case study in Mt. Gongga Region. Acta Ecol. Sin. 2014, 34, 7210–7221. [Google Scholar]
  45. Lei, G.B.; Li, A.N.; Tan, J.B.; Zhang, Z.J.; Bian, J.H.; Jin, H.A.; Zhang, W.; Cao, X.M. Forest Types Mapping in Mountainous Area Using Multi-source and Multi-temporal Satellite Images and Decision Tree Models. Remote Sens. Technol. Appl. 2016, 31, 31–41. [Google Scholar]
  46. Liang, H.Z.; Liu, L.L.; Fu, T.G.; Gao, H.; Li, M.; Liu, J.T. Vertical distribution of vegetation in mountain regions: A review based on bibliometrics. Chin. J. Eco-Agric. 2022, 30, 1077–1090. [Google Scholar]
  47. Tian, L.; Zhou, L.; Sun, J.R.; Zong, H. Spatial distribution of tree-shrub patches and their diversity along the altitude gradient in the transition zone between the first and second steps of the northern Hengduan Mountains. Acta Ecol. Sin. 2023, 43, 10320–10333. [Google Scholar]
  48. Li, W.D.; Jiang, Y.; Li, B.; Tian, X.; Huang, Y.P.; Fu, J.X.; Li, Z. Intervention Intensity of Human Activity on Potential Natural Vegetation in the Hekouzhen-Longmen Region. Res. Soil Water Conserv. 2023, 30, 283–293. [Google Scholar]
  49. Li, X.Q.; Liang, Y.L. Land Use Response Mechanism Caused by Geological Disaster: Taking Mianzhu City As an Example. Northwest. Geol. 2022, 55, 236–248. [Google Scholar]
  50. Natural Resources and Planning Bureau of Mianzhu City. The Territorial Spatial Master Plan of Mianzhu City (2021–2035). Unpublished Report. Mianzhu, China, 2023. [Google Scholar]
  51. Natural Resources and Planning Bureau of Mianzhu City. Forest Resources Type II Survey Data. Report. Mianzhu, China, 2019. [Google Scholar]
  52. Natural Resources and Planning Bureau of Mianzhu City. Technical Regulations of the Third National Land Survey; Ministry of Natural Resources of the People’s Republic of China: Beijing, China, 2019.
  53. Zhang, X.S. Vegetation Map of the People’s Republic of China (1:1 000 000); Plant Data Center of Chinese Academy of Sciences: Beijing, China, 2021. [Google Scholar]
  54. Zhang, X.S. Vegetation Map of the People’s Republic of China (1:1 000 000); Geology Press: Beijing, China, 2007. [Google Scholar]
  55. Gasparovic, M.; Jogun, T. The effect of fusing Sentinel-2 bands on land-cover classification. Int. J. Remote Sens. 2018, 39, 822–841. [Google Scholar] [CrossRef]
  56. Islam, M.T.; Meng, Q.M. An exploratory study of Sentinel-1 SAR for rapid urban flood mapping on Google Earth Engine. Int. J. Appl. Earth Obs. Geoinf. 2022, 113, 103002. [Google Scholar]
  57. Bhogapurapu, N.; Dey, S.; Bhattacharya, A.; Mandal, D.; Lopez-Sanchez, J.M.; McNairn, H.; López-Martínez, C.; Rao, Y.S. Dual-polarimetric descriptors from Sentinel-1 GRD SAR data for crop growth assessment. ISPRS J. Photogramm. Remote Sens. 2021, 178, 20–35. [Google Scholar] [CrossRef]
  58. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  59. Du, B.; Zhang, J.; Wang, Z.; Mao, D.; Zhang, M.; Wu, B. Crop Mapping based on Sentinel-2A NDVI Time Series Using Object-Oriented Classification and Decision Tree Model. J. Geo-Inf. Sci. 2019, 21, 740–751. [Google Scholar]
  60. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring vegetation systems in the Great Plains with ERTS. Nasa Spec. Publ. 1974, 351, 309. [Google Scholar]
  61. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  62. Huete, A.; Justice, C.; Van Leeuwen, W. MODIS vegetation index (MOD13). Algorithm Theor. Basis Doc. 1999, 3, 295–309. [Google Scholar]
  63. Kauth, R.; Thomas, G.S. The Tasselled Cap—A Graphic Description of the Spectral-Temporal Development of Agricultural Crops as Seen by Landsat; LARS Symposia: Salvador, Bahia, Brazil, 1976; p. 159. [Google Scholar]
  64. Zha, Y.; Gao, J.; Ni, S.X. Use of normalized difference built-up index in automatically mapping urban areas from TM imagery. Int. J. Remote Sens. 2003, 24, 583–594. [Google Scholar] [CrossRef]
  65. Jurgens, C. The modified normalized difference vegetation index (mNDVI) a new index to determine frost damages in agriculture based on Landsat TM data. Int. J. Remote Sens. 1997, 18, 3583–3594. [Google Scholar] [CrossRef]
  66. LeCun, Y.; Boser, B.; Denker, J.; Henderson, D.; Howard, R.; Hubbard, W.; Jackel, L. Handwritten digit recognition with a back-propagation network. Adv. Neural Inf. Process. Syst. 1989, 2, 396–404. [Google Scholar]
  67. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  68. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  69. Kononenko, I. Estimating Attributes: Analysis and Extensions of RELIEF. In Machine Learning: ECML-94; Bergadano, F., De Raedt, L., Eds.; Springer: Berlin/Heidelberg, Germany, 1994; pp. 171–182. [Google Scholar]
  70. Guyon, I.; Weston, J.; Barnhill, S.; Vapnik, V. Gene selection for cancer classification using support vector machines. Mach. Learn. 2002, 46, 389–422. [Google Scholar] [CrossRef]
  71. Lin, N.; Wang, W.; Wang, B. Extraction of planting information of navel orange orchards based on random forest and LANDSAT8 OLI images. Geospat. Inf. 2021, 19, 8–9. [Google Scholar]
  72. Townsend, J.T. Theoretical analysis of an alphabetic confusion matrix. Percept. Psychophys. 1971, 9, 40–50. [Google Scholar] [CrossRef]
  73. Mohammadpour, P.; Viegas, D.X.; Viegas, C. Vegetation mapping with random forest using sentinel 2 and GLCM texture feature—A case study for Lousã region, Portugal. Remote Sens. 2022, 14, 4585. [Google Scholar] [CrossRef]
  74. Pouteau, R.; Gillespie, T.W.; Birnbaum, P. Predicting tropical tree species richness from normalized difference vegetation index time series: The devil is perhaps not in the detail. Remote Sens. 2018, 10, 698. [Google Scholar] [CrossRef]
  75. Silveira, E.M.; Bueno, I.T.; Acerbi-Junior, F.W.; Mello, J.M.; Scolforo, J.R.S.; Wulder, M.A. Using spatial features to reduce the impact of seasonality for detecting tropical forest changes from Landsat time series. Remote Sens. 2018, 10, 808. [Google Scholar] [CrossRef]
  76. Chen, J.; Li, H.; Liu, Y.; Chang, Z.; Han, W.; Liu, S. Crops identification based on Sentinel-2 data with multi-feature optimization. Remote Sens. Nat. Resour. 2023, 35, 292–300. [Google Scholar]
  77. Zhou, Z.; Hu, X.S.; Feng, G.J.; Liu, C.Y.; Li, X.L. Integrated Ensemble Learning, Feature Selection, and Hyperparameter Optimization for Accurate Mapping of Typical Vegetation on Landslides in the Upper Reaches of the Yellow River Using Gaofen-2 Imagery. IEEE J-STARS 2024, 17, 15702–15720. [Google Scholar] [CrossRef]
  78. Huete, A.R.; Jackson, R.D. Soil and atmosphere influences on the spectra of partial canopies. Remote Sens. Environ. 1988, 25, 89–105. [Google Scholar] [CrossRef]
  79. Sánchez-Mercado, A.Y.; Ferrer-Paris, J.R.; Franklin, J. Mapping species distributions: Spatial inference and prediction. Oryx 2010, 44, 615. [Google Scholar] [CrossRef]
  80. Song, J.; Liu, X.L. Improving the accuracy of forest identification in mountainous areas from multi-source remote sensing data-the Sunan County section of Qilian Mountains National Park as an example. Acta Ecol. Sin. 2021, 30, 1–14. [Google Scholar]
  81. Pontius, R.G., Jr.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429. [Google Scholar] [CrossRef]
Figure 1. Location of study area.
Figure 2. Methodology of study (all factors and abbreviations are defined in text).
Figure 3. Architecture of 1D-CNN algorithm. Note: EBF stands for evergreen broadleaf forest, DBF for deciduous broadleaf forest, and CF for coniferous forest.
Figure 4. Architecture of MLP algorithm.
Figure 5. Architecture of RF algorithm.
Figure 6. Comparison of accuracy of machine learning algorithms in vegetation classification under different feature combinations.
Figure 7. Vegetation mapping of Mianzhu City based on 1D-CNN, MLP, and RF.
Figure 8. Area and percentage of each vegetation type based on 1D-CNN, MLP, and RF algorithms.
Figure 9. Misclassification between different forest types using three models: (a) 1D-CNN, (b) MLP, and (c) RF algorithms. (Note: “1” stands for shrubland; “2” stands for evergreen broadleaf forest (EBF); “3” stands for deciduous broadleaf forest (DBF); “4” stands for coniferous forest (CF); “5” stands for grassland; and “6” stands for cropland.)
Figure 10. Comparison of localized vegetation classification results using 1D-CNN, MLP, and RF algorithms.
Table 1. Sample counts by vegetation type.

Types     | Total  | Training (60%) | Validation (20%) | Test (20%)
----------|--------|----------------|------------------|-----------
Shrubland | 5751   | 3450           | 1150             | 1151
EBF       | 4339   | 2603           | 868              | 868
DBF       | 4898   | 2939           | 980              | 979
CF        | 5134   | 3080           | 1027             | 1027
Grassland | 2647   | 1588           | 529              | 530
Cropland  | 5764   | 3459           | 1153             | 1152
Total     | 28,533 | 17,119         | 5707             | 5707
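The 60/20/20 stratified split in Table 1 can be reproduced, for example, with scikit-learn; the sketch below is an illustration under that assumption (the class counts come from the table, the feature matrix is a placeholder, and the split function is not necessarily the authors' own code).

```python
# Sketch of the stratified 60/20/20 split reported in Table 1, using
# scikit-learn (an assumed tool, not necessarily the authors' code).
# Class counts come from the table; features are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

counts = {"Shrubland": 5751, "EBF": 4339, "DBF": 4898,
          "CF": 5134, "Grassland": 2647, "Cropland": 5764}
y = np.concatenate([np.full(n, i) for i, n in enumerate(counts.values())])
X = np.zeros((len(y), 1))  # placeholder feature matrix

# 60% training first, then split the remaining 40% into two halves.
X_tr, X_rest, y_tr, y_rest = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

print(len(y_tr), len(y_val), len(y_te))  # 17119 5707 5707
```

Stratifying both splits keeps the per-class proportions of Table 1 (e.g., grassland remains the smallest class in every subset).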
Table 2. Selected feature sets and descriptions.

Feature Sets (Number of Features) | Descriptions

Topographic features (3)
  • Elevation was derived from the SRTM; the slope and aspect factors were then computed from it.

Texture features (30)
  • The mean, entropy, contrast, homogeneity, and correlation texture features were calculated separately for each of the top three principal components (PC1, PC2, and PC3) through the GLCM [58].
  • All texture features are for both summer and winter.

Spectral features (47)
  • VV and VH bands from Sentinel-1.
  • Three visible bands, four red-edge bands, one near-infrared band, one water vapor band, and two short-wave infrared bands from Sentinel-2.
  • Normalized difference vegetation index (NDVI) [59,60], green normalized difference vegetation index (GNDVI) [61], enhanced vegetation index (EVI) [62], ratio vegetation index (RVI), forest discrimination index (FDI) [63], normalized difference built-up index (NDBI) [64], and modified normalized difference vegetation index (MNDVI) [65].
  • All these spectral features are for both summer and winter.
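The band-ratio indices named above follow their standard published formulas. As a hedged illustration (variable names are ours, and reflectance values are toy numbers, not data from the study area), NDVI, GNDVI, NDBI, and EVI can be written out as:

```python
# Standard band-ratio indices from Table 2, written out for
# Sentinel-2 reflectance values. NDVI, GNDVI, NDBI, and EVI follow
# their published formulas; variable names are illustrative.
def ndvi(nir, red):            # Rouse et al. [60]
    return (nir - red) / (nir + red)

def gndvi(nir, green):         # Gitelson et al. [61]
    return (nir - green) / (nir + green)

def ndbi(swir, nir):           # Zha et al. [64]
    return (swir - nir) / (swir + nir)

def evi(nir, red, blue):       # Huete et al. [62]
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Toy reflectances for a healthy vegetated pixel
nir, red, green, blue, swir = 0.5, 0.1, 0.08, 0.05, 0.2
print(round(ndvi(nir, red), 3))  # 0.667
```

For vegetated pixels NDVI is high and NDBI is negative, which is what makes the pair useful for separating vegetation from built-up surfaces.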
Table 3. List of all features. (Feature 63, listed as "Water Vapor (summer)" in the original, is corrected here to "(winter)" to match the surrounding winter block.)

No. | Feature Name      | Data Source || No. | Feature Name         | Data Source
----|-------------------|-------------||-----|----------------------|------------
1   | Elevation         | SRTM        || 41  | Red-Edge 1 (summer)  | Sentinel-2
2   | Slope             | SRTM        || 42  | Red-Edge 2 (summer)  | Sentinel-2
3   | Aspect            | SRTM        || 43  | Red-Edge 3 (summer)  | Sentinel-2
4   | PC1_CO (summer)   | Sentinel-2  || 44  | NIR (summer)         | Sentinel-2
5   | PC1_EN (summer)   | Sentinel-2  || 45  | Water Vapor (summer) | Sentinel-2
6   | PC1_HO (summer)   | Sentinel-2  || 46  | Red-Edge 4 (summer)  | Sentinel-2
7   | PC1_ME (summer)   | Sentinel-2  || 47  | SWIR 1 (summer)      | Sentinel-2
8   | PC1_COR (summer)  | Sentinel-2  || 48  | SWIR 2 (summer)      | Sentinel-2
9   | PC2_CO (summer)   | Sentinel-2  || 49  | NDVI (summer)        | Sentinel-2
10  | PC2_EN (summer)   | Sentinel-2  || 50  | GNDVI (summer)       | Sentinel-2
11  | PC2_HO (summer)   | Sentinel-2  || 51  | EVI (summer)         | Sentinel-2
12  | PC2_ME (summer)   | Sentinel-2  || 52  | RVI (summer)         | Sentinel-2
13  | PC2_COR (summer)  | Sentinel-2  || 53  | FDI (summer)         | Sentinel-2
14  | PC3_CO (summer)   | Sentinel-2  || 54  | NDBI (summer)        | Sentinel-2
15  | PC3_EN (summer)   | Sentinel-2  || 55  | MNDVI (summer)       | Sentinel-2
16  | PC3_HO (summer)   | Sentinel-2  || 56  | Blue (winter)        | Sentinel-2
17  | PC3_ME (summer)   | Sentinel-2  || 57  | Green (winter)       | Sentinel-2
18  | PC3_COR (summer)  | Sentinel-2  || 58  | Red (winter)         | Sentinel-2
19  | PC1_CO (winter)   | Sentinel-2  || 59  | Red-Edge 1 (winter)  | Sentinel-2
20  | PC1_EN (winter)   | Sentinel-2  || 60  | Red-Edge 2 (winter)  | Sentinel-2
21  | PC1_HO (winter)   | Sentinel-2  || 61  | Red-Edge 3 (winter)  | Sentinel-2
22  | PC1_ME (winter)   | Sentinel-2  || 62  | NIR (winter)         | Sentinel-2
23  | PC1_COR (winter)  | Sentinel-2  || 63  | Water Vapor (winter) | Sentinel-2
24  | PC2_CO (winter)   | Sentinel-2  || 64  | Red-Edge 4 (winter)  | Sentinel-2
25  | PC2_EN (winter)   | Sentinel-2  || 65  | SWIR 1 (winter)      | Sentinel-2
26  | PC2_HO (winter)   | Sentinel-2  || 66  | SWIR 2 (winter)      | Sentinel-2
27  | PC2_ME (winter)   | Sentinel-2  || 67  | NDVI (winter)        | Sentinel-2
28  | PC2_COR (winter)  | Sentinel-2  || 68  | GNDVI (winter)       | Sentinel-2
29  | PC3_CO (winter)   | Sentinel-2  || 69  | EVI (winter)         | Sentinel-2
30  | PC3_EN (winter)   | Sentinel-2  || 70  | RVI (winter)         | Sentinel-2
31  | PC3_HO (winter)   | Sentinel-2  || 71  | FDI (winter)         | Sentinel-2
32  | PC3_ME (winter)   | Sentinel-2  || 72  | NDBI (winter)        | Sentinel-2
33  | PC3_COR (winter)  | Sentinel-2  || 73  | MNDVI (winter)       | Sentinel-2
34  | VV (summer)       | Sentinel-1  || 74  | NDVI difference      | Sentinel-2
35  | VH (summer)       | Sentinel-1  || 75  | GNDVI difference     | Sentinel-2
36  | VV (winter)       | Sentinel-1  || 76  | EVI difference       | Sentinel-2
37  | VH (winter)       | Sentinel-1  || 77  | RVI difference       | Sentinel-2
38  | Blue (summer)     | Sentinel-2  || 78  | FDI difference       | Sentinel-2
39  | Green (summer)    | Sentinel-2  || 79  | NDBI difference      | Sentinel-2
40  | Red (summer)      | Sentinel-2  || 80  | MNDVI difference     | Sentinel-2
Table 4. Number of different feature combinations and model runtime.

Code | Feature Combination                                                | Number of Features | 1D-CNN Time (s) | MLP Time (s) | RF Time (s)
-----|--------------------------------------------------------------------|--------------------|-----------------|--------------|------------
F1   | Spectral                                                           | 22                 | 207.62          | 270.62       | 58.88
F2   | Vegetation Indices                                                 | 21                 | 116.06          | 43.35        | 54.16
F3   | Texture                                                            | 30                 | 1212.61         | 183.51       | 17.29
F4   | Terrain                                                            | 3                  | 177.92          | 15.44        | 12.07
F5   | Sentinel-1 SAR                                                     | 4                  | 215.63          | 17.09        | 22.25
F6   | Spectral + Vegetation Indices                                      | 43                 | 779.21          | 211.82       | 57.41
F7   | Spectral + Texture                                                 | 52                 | 423.85          | 167.68       | 34.73
F8   | Spectral + Terrain                                                 | 25                 | 198.73          | 228.78       | 53.11
F9   | Spectral + Sentinel-1 SAR                                          | 26                 | 218.78          | 365.97       | 56.91
F10  | Spectral + Vegetation Indices + Texture                            | 73                 | 591.90          | 229.25       | 40.79
F11  | Spectral + Vegetation Indices + Terrain                            | 46                 | 313.49          | 203.34       | 53.45
F12  | Spectral + Texture + Terrain                                       | 55                 | 499.83          | 501.30       | 34.17
F13  | Spectral + Terrain + Texture + Vegetation Indices                  | 76                 | 593.08          | 296.08       | 40.44
F14  | Spectral + Terrain + Texture + Vegetation Indices + Sentinel-1 SAR | 80                 | 532.58          | 384.36       | 41.90
Table 5. Features retained by RF-RFE and ReliefF algorithms and their weights. Note: RF-RFE stands for Recursive Feature Elimination based on Random Forest.

Order | RF-RFE Feature | Weight (%) | ReliefF Feature | Weight (%)
------|----------------|------------|-----------------|-----------
1     | dem            | 26.05      | Dem             | 28.07
2     | slope          | 14.79      | Slope           | 15.99
3     | b11_s7         | 10.30      | mndvi_s1        | 10.04
4     | b12_w1         | 6.34       | me_b3_s7        | 6.95
5     | b5_s7          | 6.19       | b12_w1          | 6.45
6     | b10_w1         | 5.59       | ndvi_w1         | 4.97
7     | ndbi_s1        | 3.62       | b11_w1          | 4.45
8     | me_b3_s7       | 3.47       | ndbi_w1         | 3.68
9     | mndvi_s1       | 3.10       | b6_s7           | 3.60
10    | me_b2_s7       | 2.80       | Aspect          | 3.14
11    | b12_s7         | 2.79       | b8_w1           | 2.37
12    | ndbi_w1        | 2.51       | ndvi_s1         | 2.07
13    | b5_w1          | 2.34       | rvi_s1          | 2.03
14    | b3_w1          | 2.33       | ndbi_c1         | 1.97
15    | b2_s7          | 2.29      | W1VH            | 1.11
16    | ndbi_c1        | 2.02       | S8VV            | 1.08
17    | b7_w1          | 1.84       | W1VV            | 1.08
18    | b6_s7          | 1.60       | S8VH            | 0.96
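The RF-RFE selection behind Table 5 can be sketched with scikit-learn's `RFE` wrapped around a random forest; the synthetic data, estimator sizes, and elimination step below are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of RF-RFE as used for Table 5: recursive feature
# elimination ranked by random-forest importance, retaining 18 of 80
# features. Synthetic data stands in for the real feature stack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=300, n_features=80,
                           n_informative=20, random_state=0)

rfe = RFE(
    estimator=RandomForestClassifier(n_estimators=50, random_state=0),
    n_features_to_select=18,  # the subset size retained in Table 5
    step=5,                   # features dropped per iteration
)
rfe.fit(X, y)
selected = np.where(rfe.support_)[0]
print(selected.size)  # 18
```

Each iteration refits the forest and discards the lowest-importance features, so the surviving 18 are ranked jointly rather than by a single importance pass, which is what distinguishes RF-RFE from one-shot filters such as ReliefF.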
Table 6. Comparison of classifier accuracy between RF-RFE and ReliefF feature selection algorithms.

Algorithm    | 1D-CNN Accuracy (%) | 1D-CNN Time (s) | MLP Accuracy (%) | MLP Time (s) | RF Accuracy (%) | RF Time (s)
-------------|---------------------|-----------------|------------------|--------------|-----------------|------------
RF-RFE (18)  | 80.41               | 703.86          | 81.65            | 42.13        | 79.97           | 49.20
ReliefF (18) | 76.68               | 395.71          | 79.31            | 90.12        | 78.03           | 51.35
Table 7. Comparison of vegetation classification accuracy for different classifiers using optimal feature combination.

Types     | 1D-CNN PA (%) | 1D-CNN UA (%) | MLP PA (%) | MLP UA (%) | RF PA (%) | RF UA (%)
----------|---------------|---------------|------------|------------|-----------|----------
Shrubland | 88.21         | 76.70         | 90.18      | 77.99      | 82.95     | 77.59
EBF       | 51.89         | 71.73         | 61.56      | 69.05      | 54.60     | 70.36
DBF       | 80.25         | 71.23         | 75.47      | 73.22      | 77.33     | 70.39
CF        | 78.97         | 83.08         | 78.98      | 86.67      | 82.64     | 79.72
Grassland | 75.84         | 91.01         | 88.16      | 85.67      | 83.88     | 87.01
Cropland  | 96.73         | 90.78         | 91.81      | 96.00      | 93.59     | 93.59

OA (%):    1D-CNN 80.41, MLP 81.65, RF 79.97
AA (%):    1D-CNN 78.65, MLP 81.03, RF 79.16
Kappa (%): 1D-CNN 76.13, MLP 77.75, RF 75.70
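All of Table 7's metrics follow from a class confusion matrix [72]: producer's accuracy (PA) is per-class recall along the reference rows, user's accuracy (UA) is per-class precision along the predicted columns, OA is the diagonal fraction, and kappa corrects OA for chance agreement. A small sketch with a toy 3×3 matrix (not the paper's results):

```python
# How Table 7's producer's accuracy (PA), user's accuracy (UA),
# overall accuracy (OA), and kappa follow from a confusion matrix.
# The 3x3 matrix is toy data, not the paper's results.
import numpy as np

cm = np.array([[50,  5,  5],
               [10, 40, 10],
               [ 0,  5, 55]], dtype=float)  # rows: reference, cols: predicted

pa = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy (recall)
ua = np.diag(cm) / cm.sum(axis=0)   # user's accuracy (precision)
oa = np.diag(cm).sum() / cm.sum()   # overall accuracy

# Cohen's kappa: agreement corrected for chance
pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / cm.sum() ** 2
kappa = (oa - pe) / (1 - pe)
print(round(oa, 3), round(kappa, 3))  # 0.806 0.708
```

Kappa is always below OA whenever chance agreement `pe` is positive, which is why the paper's kappa values (e.g., 77.75% for MLP) sit a few points under the corresponding OA.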

Bai, W.; He, Z.; Tan, Y.; Robinson, G.M.; Zhang, T.; Wang, X.; He, L.; Li, L.; Wu, S. Vegetation Classification in a Mountain–Plain Transition Zone in the Sichuan Basin, China. Land 2025, 14, 184. https://doi.org/10.3390/land14010184
