Communication

Combination of Sentinel-1 and Sentinel-2 Data for Tree Species Classification in a Central European Biosphere Reserve

1 Institute of Geomatics, University of Natural Resources and Life Sciences, Vienna (BOKU), Peter-Jordan-Straße 82, 1190 Vienna, Austria
2 Department of Geodesy and Geoinformation, Vienna University of Technology, 1040 Vienna, Austria
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(11), 2687; https://doi.org/10.3390/rs14112687
Submission received: 9 May 2022 / Revised: 25 May 2022 / Accepted: 1 June 2022 / Published: 3 June 2022
(This article belongs to the Special Issue Mapping Tree Species Diversity)

Abstract
Microwave and optical imaging methods react differently to different land surface parameters and, thus, provide highly complementary information. However, the contribution of individual features from these two domains of the electromagnetic spectrum to tree species classification is still unclear. For large-scale forest assessments, it is moreover important to better understand the domain-specific limitations of the two sensor families, such as the impact of cloudiness and low signal-to-noise ratio, respectively. In this study, seven deciduous and five coniferous tree species of the Austrian Biosphere Reserve Wienerwald (105,000 ha) were classified using Breiman’s random forest classifier, labeled with the help of forest enterprise data. In nine test cases, variations of Sentinel-1 and Sentinel-2 imagery were passed to the classifier to evaluate their respective contributions. By solely using a high number of Sentinel-2 scenes well spread over the growing season, an overall accuracy of 83.2% was achieved. With ample Sentinel-2 scenes available, the additional use of Sentinel-1 data improved the results by only 0.5 percentage points. This changed when only a single Sentinel-2 scene was assumed to be available. In that case, the full set of Sentinel-1-derived features increased the overall accuracy on average by 4.7 percentage points. The same level of accuracy could be obtained using three Sentinel-2 scenes spread over the vegetation period. On the other hand, the sole use of Sentinel-1, including phenological indicators and additional features derived from the time series, did not yield satisfactory overall classification accuracies (55.7%), as only coniferous species were well separated.

Graphical Abstract

1. Introduction

The ongoing species loss and the continued degradation of many terrestrial ecosystems make it increasingly important to monitor changes on the Earth’s surface on a large scale, with high accuracy and low latency [1]. The multispectral image data generated by the Sentinel-2 (S2) twin satellites are provided free of charge through the European Copernicus program. These data provide a great opportunity to monitor the entire Earth’s surface with high spatial and spectral as well as temporal resolution [2,3]. Several studies have already shown that the use of multispectral imagery generates highly informative data for land cover and tree species classification [4,5]. Further improvements in classification accuracy can be achieved by using multispectral time series [6,7,8].
While the high 5-day temporal resolution of the S2 satellites leads to dense time series, it is not guaranteed that areas larger than 10³–10⁴ km² are fully covered by the very same cloud-free acquisitions. The selection of suitable image material is, therefore, often straightforward for smaller areas, but for large areas, additional pre-processing steps are needed to ensure a homogeneous and gap-free set of features. Suitable techniques are, for example, compositing techniques [9], gap-filling procedures [10,11], or the use of descriptive temporal metrics [12].
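One such technique, descriptive temporal metrics, reduces an irregular, partly cloud-covered time series to a fixed, gap-free set of per-pixel features. A minimal NumPy sketch of the idea (an illustration, not the pre-processing chain used in any of the cited studies) could look as follows:

```python
import numpy as np

def temporal_metrics(stack):
    """Per-pixel descriptive temporal metrics from a cloud-masked
    time series of shape (time, rows, cols); cloudy observations are NaN.
    Returns a gap-free feature stack of shape (metrics, rows, cols)."""
    return np.stack([
        np.nanmedian(stack, axis=0),          # typical reflectance
        np.nanpercentile(stack, 10, axis=0),  # lower envelope
        np.nanpercentile(stack, 90, axis=0),  # upper envelope
        np.nanstd(stack, axis=0),             # temporal variability
    ])

# toy example: 5 dates, 2 x 2 pixels, one simulated cloudy observation
ts = np.full((5, 2, 2), np.nan)
for t in range(5):
    ts[t] = 0.2 + 0.1 * t
ts[3, 0, 0] = np.nan  # simulated cloud
features = temporal_metrics(ts)
print(features.shape)  # (4, 2, 2)
```

Because the metrics are computed per pixel over whatever valid observations exist, the resulting features are spatially complete even when no single acquisition covers the whole area cloud-free.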
Radar sensors, on the other hand, provide a more continuous data stream, albeit with a lower signal-to-noise ratio (SNR) and terrain- and observation-geometry-related artifacts [13,14]. The two Sentinel-1 (S1) satellites, which possess high spatial resolution and high revisit frequency, are also provided free of charge by the Copernicus program and generate microwave images under almost all weather conditions. A number of studies have demonstrated a good potential for the differentiation of deciduous trees and conifers [14,15]. Rüetschi et al. [16] obtained an overall accuracy of 72% for a test site in Switzerland. Udali et al. [17] presented a forest type and tree species classification in a test area in southern Sweden using multitemporal S1 data with overall accuracies of 94% and 66%, respectively.
The combination of S1 and S2 data was used in several studies [7,15] for forest type and tree species classifications. Bjerreskov et al. [7] used a combination of multitemporal S1 and S2 data to classify nemoral forests in Denmark into broadleaf and coniferous forest types as well as into predefined tree species groups with overall accuracies of 95% and 63%, respectively. A combination of S1 and optical Landsat imagery was used for the classification of dominant tree species in broadleaf deciduous forests in Vietnam with an overall accuracy of 79% [18]. Systematic ablation studies investigating the potential of the two electromagnetic domains for the classification of a higher number (≥10) of tree species in highly diverse forests are lacking.
The objective of this paper is to study the benefits of combining S1 and S2 data for tree species classification in the mid-altitude forests of Austria. Various S1 and S2 data combinations are used to classify 12 different tree species with the help of Breiman’s random forest classifier [19] to evaluate the discriminative power of the two data streams:
(1)
a set of S1-derived parameters and 14 cloud-free S2 scenes were classified individually and in combination;
(2)
the monotemporal S2 scenes were classified separately as well as paired with S1 data;
(3)
the accuracies obtained from the monotemporal S2 scenes were used to determine the most- and least-accurate S2 scenes from spring, summer, and autumn seasons. Combinations of the least- and most-accurate seasonal S2 scenes were classified with and without S1 data.

2. Materials and Methods

2.1. Study Site, Reference Data

The Biosphere Reserve Wienerwald (BPWW) is located southwest of Vienna (Austria) and covers an area of approximately 105,000 hectares, with a geographical extent of ca. 42 km × 47 km and an elevation between 162 and 893 m above sea level. The broadleaf-dominated forest is characterized by more than 20 forest communities [20]. Due to its diversity of tree species and its location within the overlapping area of two S2 orbits, it is particularly well suited for investigating remote sensing methods.
The study site and the reference data are shown in Figure 1. An existing dataset, created by using information from several forest enterprises such as forest inventories and stand-wise descriptions of the forest management plans, was used as reference data [6]. Additional samples were added to better balance the different classes. A maximum of one pixel per forest stand was selected, although this was not always possible for sparsely represented tree species. The final dataset consisted of 1283 individual pixels, representing a total of 12 tree species—seven deciduous and five coniferous. Although not all tree species occurring in the BPWW were represented in the samples, they nevertheless provided a good overview of the main tree species prevailing in the park.

2.2. Sentinel-2 Data

All 14 completely cloud-free S2 scenes from the 2018 growing period (April to October) were selected (Table 1). Since Nkosi et al. [21] found band 9 to have a high capability in discriminating tree species, band 9 was retained in the dataset, while bands 1 and 10 were excluded. The remaining eleven S2 spectral bands were resampled to a uniform 10 m spatial resolution and corrected using the Sen2Cor atmospheric correction [22]. The resulting dataset was expanded by two biophysical vegetation variables calculated from visible and NIR spectral channels: FAPAR and LAI [23,24]. In addition, 30 vegetation indices were calculated and added to the dataset (Table A1).
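Such index computations are simple per-pixel band arithmetic. The sketch below shows three of the indices listed in Table A1 (NDVI, NDWI1, and MSI); the small epsilon guarding against division by zero is an implementation detail assumed here, not taken from the study:

```python
import numpy as np

EPS = 1e-10  # guard against division by zero (assumed, not from the study)

def ndvi(nir, red):
    """Normalized-Difference Vegetation Index."""
    return (nir - red) / (nir + red + EPS)

def ndwi1(nir, swir1):
    """Normalized-Difference Water Index 1 (NIR vs. SWIR1)."""
    return (nir - swir1) / (nir + swir1 + EPS)

def msi(swir1, nir):
    """Moisture Stress Index: a simple band ratio."""
    return swir1 / (nir + EPS)

# toy reflectances for a single vegetated pixel
nir, red, swir1 = np.array([0.4]), np.array([0.1]), np.array([0.2])
print(float(ndvi(nir, red)[0]))  # ≈ 0.6
```

Applied to full band arrays, each function yields one additional feature layer per scene.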

2.3. Sentinel-1 Data

In this study, 250 S1 ground-range-detected (GRD) interferometric wide (IW) swath mode acquisitions from the year 2018 were used. The pre-processed data were available via the Austrian Data Cube [25]. The pre-processing steps included precise orbit correction, border noise removal, radiometric correction to β0 values, radiometric terrain flattening, range-Doppler terrain correction, and conversion to the decibel scale. A terrain model based on airborne laser scanning resampled onto a 10 m grid was used for the radiometric terrain flattening and the range-Doppler terrain correction steps. From the multitemporal S1 data, several parameters were computed. These included temporally averaged backscatter values for given time periods, phenological parameters, and harmonic regression model parameters.

2.3.1. Backscatter Averages and Ratios

For each repeat cycle of the S1 satellites (12 days), the temporal average of the S1 backscatter was computed (Table 2). Two values per polarization (VV and VH) were selected: one representing snow-free, leaf-off conditions (14 to 26 March 2018) and one representing leaf-on conditions (18 to 30 June 2018). Furthermore, for the leaf-on period, the cross ratio (CPR) was computed as the backscatter ratio between VH and VV polarization [26,27], and for VH polarization, the backscatter ratio between the leaf-on (18 to 30 June 2018) and leaf-off (14 to 26 March 2018) conditions was included.
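On the decibel scale, a backscatter ratio becomes a simple difference, so the CPR and the leaf-on/leaf-off ratio reduce to subtractions of the temporally averaged values. The sketch below assumes that the temporal mean is taken in linear power units before converting back to decibels; the study does not state in which domain the averaging was performed:

```python
import numpy as np

def db_to_linear(db):
    return 10.0 ** (np.asarray(db) / 10.0)

def linear_to_db(lin):
    return 10.0 * np.log10(lin)

def cycle_average_db(acquisitions_db):
    """Temporal mean over one 12-day repeat cycle, computed in linear
    power units and converted back to decibels (an assumption of this
    sketch, not stated in the study)."""
    return linear_to_db(db_to_linear(acquisitions_db).mean(axis=0))

# toy: two acquisitions per polarization within one repeat cycle
vv_leaf_on = cycle_average_db(np.array([-8.0, -8.0]))
vh_leaf_on = cycle_average_db(np.array([-14.0, -14.0]))
vh_leaf_off = cycle_average_db(np.array([-12.5, -12.5]))

cpr = vh_leaf_on - vv_leaf_on               # VH/VV cross ratio on the dB scale
vh_season_ratio = vh_leaf_on - vh_leaf_off  # leaf-on/leaf-off ratio on the dB scale
print(round(cpr, 6), round(vh_season_ratio, 6))  # -6.0 -1.5
```

The negative seasonal value reflects the summer backscatter drop of deciduous stands described in Section 2.3.2.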

2.3.2. Phenological Parameters

Deciduous forest classes show a distinct backscatter behavior, where the VH backscatter drops by 1–2 dB during the summer period compared to the leaf-off period [16,28]. Several studies assumed that the drop in backscatter is connected to leaf emergence, while the backscatter increase is caused by leaf fall [16,28,29,30]. We applied the breakpoints algorithm described in Zeileis et al. [31], which was successfully tested on annual time series of the 12-day VH backscatter averages [16]. The computation was limited to pixels where the average backscatter in the leaf-on period was lower than that in the leaf-off period (hence, the ratio between the temporally averaged backscatter for leaf-on and leaf-off conditions was below one). The first breakpoint in the time series was assumed to represent the start of the season, while the second breakpoint represented the end of the season. The length of the season was computed as the difference between the two values. Start of season, end of season, and length of season were used in this study (Table 3).
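The breakpoints algorithm of Zeileis et al. [31] is considerably more general, but the underlying idea can be illustrated with a brute-force search for the two breakpoints of a piecewise-constant model that minimize the residual sum of squares (a didactic sketch, not the implementation used in the study):

```python
import numpy as np

def two_breakpoints(y, min_seg=3):
    """Brute-force two-breakpoint detection on a 1-D backscatter series:
    fits a piecewise-constant model with three segments and returns the
    pair of breakpoint indices minimizing the residual sum of squares."""
    n = len(y)
    best, best_sse = None, np.inf
    for b1 in range(min_seg, n - 2 * min_seg + 1):
        for b2 in range(b1 + min_seg, n - min_seg + 1):
            sse = sum(np.sum((seg - seg.mean()) ** 2)
                      for seg in (y[:b1], y[b1:b2], y[b2:]))
            if sse < best_sse:
                best, best_sse = (b1, b2), sse
    return best

# synthetic annual series: leaf-off level, a 1.5 dB summer drop, leaf-off again
y = np.r_[np.full(10, -13.0), np.full(12, -14.5), np.full(8, -13.0)]
print(two_breakpoints(y))  # (10, 22)
```

The first index marks the start of the season (leaf emergence) and the second the end of the season (leaf fall), in units of 12-day cycles.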

2.3.3. Harmonic Parameters

Changes in the vegetation structure and environmental conditions cause temporal changes in radar backscatter. Especially in the case of vegetation, these changes typically have a strong seasonal character. Harmonic models (Equation (1)) can be used to describe this seasonal backscatter variation [32].
$$\hat{\gamma}^{0}(t_{day}) = \overline{\gamma^{0}} + \sum_{i=1}^{k}\left[ C_{i}\cos\!\left(\frac{2\pi i\, t_{day}}{n}\right) + S_{i}\sin\!\left(\frac{2\pi i\, t_{day}}{n}\right)\right] \qquad (1)$$
The model estimates the most probable radar backscatter, $\hat{\gamma}^{0}(t_{day})$, for a given day of the year, $t_{day}$, from the average backscatter, $\overline{\gamma^{0}}$, for the given time period (the year 2018 in the case of this study) and the harmonic coefficients of the cosine and sine components, $C_{i}$ and $S_{i}$. Harmonic coefficients and average backscatter are referred to as harmonic parameters (HPAR). Following [32], k is set to 3, so that the shortest resolved seasonal component has a period of four months.
The HPARs (Table 4) are derived from a least-squares estimation based on the backscatter values and corresponding observation times of the input S1 time series. As opposed to Schlaffer et al. [32], we used the backscatter observations directly instead of 10-day composites. Due to the strong dependency of the backscatter on the acquisition geometry, the parameter estimation was performed separately for each unique acquisition geometry (i.e., relative S1 orbit). As an additional parameter, the standard deviation of the residual error of the harmonic model was calculated as the square root of the sum of squared errors (SSE) divided by the number of data points ($N_{points}$) adjusted for the degrees of freedom of the model (Equation (2)). The SSE was derived from the pixel’s backscatter time series $\gamma^{0}_{t,r}$ and its harmonic model $\hat{\gamma}^{0}_{t,r}$.
For this study, we computed HPARs for both VV and VH polarization and for one ascending (relative orbit number 73) and one descending (relative orbit number 22) orbit.
$$s = \sqrt{\frac{\mathrm{SSE}\left(\gamma^{0}_{t,r},\, \hat{\gamma}^{0}_{t,r}\right)}{N_{points} - 2}} \qquad (2)$$
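Putting Equations (1) and (2) together, the HPARs can be estimated with an ordinary least-squares fit over a design matrix of cosine and sine terms. The sketch below is a generic implementation, not the Austrian Data Cube processing chain; note that the divisor $N_{points} - 2$ follows Equation (2) as printed:

```python
import numpy as np

def fit_harmonics(t_day, gamma_db, k=3, n=365.0):
    """Least-squares fit of the harmonic model of Equation (1) to a
    backscatter series observed at days-of-year t_day.
    Returns (mean, C[1..k], S[1..k], residual std per Equation (2))."""
    t = np.asarray(t_day, dtype=float)
    cols = [np.ones_like(t)]
    for i in range(1, k + 1):
        cols += [np.cos(2 * np.pi * i * t / n), np.sin(2 * np.pi * i * t / n)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, gamma_db, rcond=None)
    sse = np.sum((gamma_db - A @ coef) ** 2)
    s = np.sqrt(sse / (len(t) - 2))  # dof handling as printed in Eq. (2)
    mean, C, S = coef[0], coef[1::2], coef[2::2]
    return mean, C, S, s

# synthetic series: an annual cosine around -13 dB, sampled every 12 days
t = np.arange(0, 365, 12)
y = -13.0 + 1.0 * np.cos(2 * np.pi * t / 365.0)
mean, C, S, s = fit_harmonics(t, y)
print(round(mean, 2), round(C[0], 2))  # -13.0 1.0
```

For a purely annual signal, only the first cosine coefficient is non-zero and the residual standard deviation is close to zero; real backscatter series distribute power over the higher harmonics as well.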

2.4. Classification Approach

Different test cases were defined (Table 5), which included various band and parameter combinations of the two satellite systems.
Test case 1 to Test case 3 represent various combinations of the full datasets of the two satellite systems. Test case 4 and Test case 5 were defined to evaluate the influence of additional S1 data on a monotemporal S2 scene; they were also necessary to identify the S2 scenes to be used in Test case 6 to Test case 9. Test cases 6–9 used a selection of the three scenes from the spring, summer, and autumn seasons with the highest/lowest overall accuracy. From these, classification models termed the “most accurate scene of season” (MAS) and the “least accurate scene of season” (LAS) were created, each with and without the S1 features.
These nine test cases (Table 5) were passed to Breiman’s [19] random forest (RF) algorithm, a widely used ensemble learning approach. To avoid overfitting, a recursive “mean decrease in accuracy” (MDA) feature selection was performed, similarly to other studies [6,33,34,35]. The number of trees in the random forests was set to ntree = 1000, and for mtry (the number of predictors randomly sampled at each node), the default value was used, i.e., the square root of the number of available input variables. The accuracy assessment was based on the out-of-bag (OOB) results of the random forest models, calculating common metrics from confusion matrices.
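As an illustration of this setup, the following sketch uses scikit-learn (the study does not name its software environment, so the library choice is an assumption); ntree and mtry are set as described above, and the OOB score serves as the overall accuracy estimate:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# toy feature matrix: two informative "bands" and one pure-noise band,
# three classes with 60 samples each
rng = np.random.default_rng(42)
y = np.repeat([0, 1, 2], 60)
X = np.column_stack([
    y + rng.normal(0, 0.3, y.size),      # informative feature
    2 * y + rng.normal(0, 0.5, y.size),  # informative feature
    rng.normal(0, 1.0, y.size),          # noise feature
])

rf = RandomForestClassifier(
    n_estimators=1000,    # ntree = 1000 as in the study
    max_features="sqrt",  # mtry = square root of the number of features
    oob_score=True,       # accuracy assessed on out-of-bag samples
    random_state=0,
).fit(X, y)

# a recursive MDA-style selection would now drop the least important
# feature (here the noise band) and refit until accuracy degrades
print(round(rf.oob_score_, 2))
```

Because each tree is trained on a bootstrap sample, the held-out (out-of-bag) samples provide an internal accuracy estimate without a separate validation set.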

3. Results

3.1. Full S1/S2 Dataset Comparison

In Table 6, the results of the model created exclusively with S1 data are shown (Test case 1). An overall accuracy (OA) of 55.7%, with a Cohen’s kappa of 0.469, was achieved. While good class-specific accuracies were achieved, especially for the conifers, the deciduous trees, apart from European beech (Fagus sylvatica, FS), could only be separated with significantly lower accuracy. Furthermore, not a single sample of the two tree species maple (Acer spp., AC) and alder (Alnus glutinosa, AG) could be assigned to the correct class.
If the sample classes were stratified in broadleaved (BL) and coniferous (CO) groups, the random forest model was able to separate them very well (BL = 96.5%, CO = 92.7%).
In the model built using only the S2 data (Test case 2), the overall accuracy was 83.2% with a Cohen’s kappa of 0.806. We also observed an increase in overall accuracy between the broadleaved and coniferous groups (BL = 99.4%, CO = 97.2%) compared to Test case 1, while a significant increase occurred in both the user’s and producer’s accuracies of all tree species. The two species that could not be classified using the S1 data were also relatively well separated (Table 7).
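The overall accuracies and Cohen’s kappa values reported here derive from confusion matrices such as those in Tables 6 and 7; the computation itself is straightforward (a generic sketch with an illustrative 2 × 2 matrix, not the study’s data):

```python
import numpy as np

def oa_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference, columns = predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement (OA)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# illustrative two-class confusion matrix
cm = [[45, 5], [10, 40]]
oa, kappa = oa_and_kappa(cm)
print(round(oa, 2), round(kappa, 2))  # 0.85 0.7
```

Kappa discounts the agreement expected by chance, which is why it is consistently lower than the OA for the same matrix.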
The results of Test case 3, combining all data, show a marginal 0.5-percentage-point improvement in overall accuracy, with OA = 83.7% and kappa = 0.811, compared to Test case 2 (Table 8). The within-coniferous-group accuracy improved slightly, by 1.1 percentage points, compared to the sole use of optical data (Test case 2). The results of the individual classes remained more or less constant.
To better understand the inputs, Figure 2 lists the 15 most important of the 50 variables remaining after the MDA feature selection of Test case 3. The analysis reveals that, with “20180618_20180630_VH” and “20180618_20180630_CPR” (red), only two S1 parameters were among the 50 remaining variables of the model. Interestingly, they ranked first and third in importance. Band 9 was not included in the remaining variables.

3.2. Added Value of S1 on Monotemporal S2 Datasets

To evaluate the contributions of S1 data in the (extreme) case of only one available S2 scene, the individual S2 scenes were each classified separately with (Test case 5) and without S1 data (Test case 4). Figure 3 compares the two variants for each of the 14 S2 acquisitions of Test case 4 and Test case 5, each with (blue) and without S1 data (green). Furthermore, as surrogates for compositing approaches, the three most-accurate (MAS) and the three least-accurate scenes (LAS) of each growing season are shown, which serve as S2 data material for Test case 6 to Test case 9. The last bar shows the result of using all available S2 data, with (blue) and without S1 data (green).
The monotemporal S2 results show an increase in OA from spring to summer and slightly lower values again in the fall. With S1 data added, the overall accuracy increased by around 5 percentage points but still fell far short of the accuracies achieved in Test case 2 (all S2 scenes without any S1; Figure 3 right, green). Each of the 14 monotemporal S2 scenes clearly outperformed (by 5.2 to 14.3 percentage points) the full S1 dataset (horizontal red line in Figure 3).

3.3. Added Value of S1 on Multitemporal S2 Dataset

The two variants of seasonal S2 data (LAS vs. MAS) were separated by roughly 10 percentage points (71.1% vs. 80.0%), with very minor improvements when the suite of S1 features was included (Figure 3). The LAS variant without S1 data, in which only the worst-performing S2 image of each of the three seasons was retained (Test case 8), still performed better than the single best-performing S2 scene (the July image) and largely better (by 15.4 percentage points) than the full S1 dataset. The differences between S1 and S2 data were further accentuated (by almost 9 percentage points) when the three seasonal images were composed of the best-performing individual S2 images (MAS; Test case 6). For this test case, classification results were almost on par with Test case 2 (all S2 scenes) and Test case 3 (all S1 and S2 data).
When comparing the sample-class F1-scores of Test case 5 to Test case 9, shown in Figure 4a, it becomes apparent that three S2 MAS scenes were already sufficient to eliminate the added value of S1 data for the classification result. If only three S2 LAS scenes were available, there was a marginal improvement in the F1-scores of individual sample classes, but the results of already well-classified sample classes were not further improved.
Nevertheless, it was not possible to reach the highest overall accuracy of Test case 3 using only three individual S2 scenes, with or without additional S1 data.

4. Discussion

The classification based only on S1 data (Test case 1) did not achieve the same high accuracy values as the classifications with monotemporal or combined S2 image data. Nevertheless, it was already possible to separate the five coniferous species with a moderate degree of accuracy. This underlines the potential of S1 for separating different conifers. Furthermore, the separation of the two groups, deciduous and coniferous forests, was already at a very high level and exceeded the results from previous S1-based studies [14,15]. The increased accuracy could be related to the higher number of samples and/or the significantly smaller study area compared to the aforementioned studies. The larger number of derived S1 features could also play a role here, although comparisons across datasets are generally to be taken with caution.
The fact that the species within the conifer group, but not the deciduous species, were separated with satisfactory accuracies using microwave data is possibly related to the more pronounced differences in canopy surface roughness among conifers, compared to the smoother canopy surfaces of deciduous species. These results are in line with previous studies on tree species classification from S1, which reported slightly higher overall accuracies but fewer species (Table 9).
Test case 2, which was based only on S2 data, delivered significantly better results than Test case 1. This was expected, as optical data reflect both structural and biochemical forest traits [46] and their temporal evolution, and S2 data are known to have a very high SNR and good temporal coverage [47]. The differentiation of the two strata was again very good, but more importantly, all species (deciduous and coniferous) were now separable. Separating the individual tree species yielded a satisfactory result, with an OA of 83.2% and a kappa of 0.806, an improvement of 27.5 percentage points compared to the sole use of S1 (Test case 1). The result of Test case 2 did not reach the high accuracy of 88.7% that was obtained using the original sample dataset and S2 scenes from several years [6]; however, the sample set used here was larger and better balanced. Compared to the other studies presented in Table 9, the OA of Test case 2 is within the range usually achieved with multitemporal S2 data.
The result of Test case 2 could be improved only marginally, by 0.5 percentage points, by using additional S1 data (Test case 3). Including S1 data, however, we observed a shift in contributing features. Indeed, the MDA analysis revealed that Test case 3 included two very highly ranked S1 parameters as input variables, which, therefore, had to replace at least two optical features from Test case 2. The high number of highly correlated features makes it very difficult to fully understand the MDA findings. After the MDA feature selection, band 9 never remained in the dataset. Therefore, the added value of band 9 in the discrimination of tree species reported by Nkosi et al. [21] could not be confirmed.
For the study area, Wienerwald, and the employed RF classifier, it can be stated that if a sufficient number of S2 scenes is available, no added value is generated by using additional S1 data. We did not, however, evaluate the impact of the S2 orbit overlap on the classification accuracy; outside such overlap zones, fewer scenes are available, which would certainly lower the obtained accuracies. On the other hand, temporal metrics (both parametric and harmonic) were only employed for S1 data in this study.
The best monotemporal S2 result was achieved by the scenes from May and July, often a period with only few cloud-free data in Central Europe. Whenever data from several dates are available, the classification performance can be improved. Both variants of seasonal data (i.e., combining the three best and the three worst S2 scenes per season) demonstrated the added-value of multitemporal data.
Interestingly, in (optical) data-poor regions, the low S1 performance can be boosted significantly if at least one S2 scene can be added, as this contributes valuable biochemical trait information. In this case, the OA of the classification can be increased by ca. 5.0 percentage points on average. This finding is significant in that S1 scenes, unlike S2 scenes, are always available due to the active microwave nature of their sensor, but are only marginally performant when used alone. If, however, several S2 scenes are available, the added value of S1 diminishes, decreasing to 1.7 percentage points with a composite of MAS S2 scenes and to 1.5 percentage points with a composite of LAS S2 scenes. This is in line with the findings of Mngadi et al. [40] and Waser et al. [15]. Even using seasonal S2 composites, as in other studies [48], the overall accuracies of the full multitemporal datasets of Test case 2 and Test case 3 were far from being achieved.

5. Conclusions

In this study, the performance of the Sentinel-1 and Sentinel-2 satellite pairs, both individually and in various combinations, was evaluated. Twelve tree species, seven deciduous and five coniferous, of the Austrian Biosphere Reserve Wienerwald were classified using Breiman’s random forest. While the results using only Sentinel-1 data were not satisfactory, the ability of the Sentinel-2 satellites to classify tree species was demonstrated once again. The greatest increase in accuracy can be achieved by using multitemporal Sentinel-2 data. In areas with insufficient coverage by optical satellites, Sentinel-1 can add value to the classification accuracy. Seasonal Sentinel-2 composites have advantages over monotemporal classifications, but preference should be given to a full time series whenever possible. If sufficient Sentinel-2 data are available, the added value of Sentinel-1 data is only marginal, so that the additional data acquisition effort is hardly offset by the gain in accuracy. However, for large-scale applications, the possibility of acquiring cloud-free Sentinel-2 time series is often the limiting factor. In such cases, the advantage of the Sentinel-1 time series is obvious. The next steps would be further combinations with other datasets such as LiDAR or hyperspectral data (e.g., EnMAP). For a better understanding of the results and the relationship between vegetation structure and reflectance properties, radiative transfer models (RTMs) should be consulted.

Author Contributions

Conceptualization, M.I.; methodology, M.I. and M.L.; software, M.L. and M.I.; validation, A.D. and M.H.; formal analysis, M.L. and A.D.; investigation, M.L. and M.I.; resources, M.I.; data curation, M.L. and A.D.; writing—original draft preparation, M.L.; writing—review and editing, M.I., A.D., M.H. and C.A.; visualization, M.L. and M.I.; supervision, M.I.; project administration, M.I.; funding acquisition, M.I. All authors have read and agreed to the published version of the manuscript.

Funding

The study was partly funded by the Austrian Academy of Sciences (ÖAW), under the Earth System Science—Man and the Biosphere Programme (Project name: BRmon: Land cover classification and -monitoring of the Austrian biosphere reserves based on satellite data).

Data Availability Statement

Not applicable.

Acknowledgments

We thank our project partners: Biosphärenpark Wienerwald Management, Austrian federal forests (ÖBf AG), Forestry Office and Urban Agriculture of Vienna (MA 49), the forest enterprise of Klosterneuburg Abbey, and the forest enterprise of Heiligenkreuz Abbey for providing reference information. We are grateful to Alexandra Wieshaider (ÖBf AG) and Harald Brenner (Biosphärenpark Wienerwald Management) for supporting the project.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Summary of the additional vegetation indices used for the classification, together with the corresponding formula and references (band 8 was used for the NIR = near-infrared; RE = red-edge).

Built-up Area Index (BAI) = (BLUE − NIR)/(BLUE + NIR) [49]
Chlorophyll Green Index (CGI) = NIR/(GREEN + RE1) [50]
Greenness Index (GI) = GREEN/RED [51]
Green Normalized-Difference Vegetation Index (gNDVI) = (NIR − GREEN)/(NIR + GREEN) [52]
Leaf Chlorophyll Content Index (LCCI) = RE3/RE1 [53]
Moisture Stress Index (MSI) = SWIR1/NIR [54]
Normalized-Difference Red-Edge and SWIR2 (NDRESWIR) = (RE2 − SWIR2)/(RE2 + SWIR2) [55]
Normalized-Difference Tillage Index (NDTI) = (SWIR1 − SWIR2)/(SWIR1 + SWIR2) [56]
Normalized-Difference Vegetation Index (NDVI) = (NIR − RED)/(NIR + RED) [57]
Red-Edge Normalized-Difference Vegetation Index (reNDVI) = (NIR − RE1)/(NIR + RE1) [52]
Normalized-Difference Water Index 1 (NDWI1) = (NIR − SWIR1)/(NIR + SWIR1) [58]
Normalized-Difference Water Index 2 (NDWI2) = (NIR − SWIR2)/(NIR + SWIR2) [52]
Normalized Humidity Index (NHI) = (SWIR2 − GREEN)/(SWIR2 + GREEN) [59]
Red-Edge Peak Area (REPA) = RED + RE1 + RE2 + RE3 + NIR [55,60]
Red SWIR1 Difference (DIRESWIR) = RED − SWIR1 [61]
Red-Edge Triangular Vegetation Index (RETVI) = 100(NIR − RE1) − 10(NIR − GREEN) [62]
Soil Adjusted Vegetation Index (SAVI) = 1.5 × (NIR − RED)/(NIR + RED + 0.5) [63]
Blue and RE1 Ratio (SRBRE1) = BLUE/RE1 [51]
Blue and RE2 Ratio (SRBRE2) = BLUE/RE2 [64]
Blue and RE3 Ratio (SRBRE3) = BLUE/RE3 [55]
NIR and Blue Ratio (SRNIRB) = NIR/BLUE [65]
NIR and Green Ratio (SRNIRG) = NIR/GREEN [51]
NIR and Red Ratio (SRNIRR) = NIR/RED [65]
NIR and RE1 Ratio (SRNIRRE1) = NIR/RE1 [50]
NIR and RE2 Ratio (SRNIRRE2) = NIR/RE2 [55]
NIR and RE3 Ratio (SRNIRRE3) = NIR/RE3 [55]
Soil Tillage Index (STI) = SWIR1/SWIR2 [56]
Water Body Index (WBI) = (BLUE − RED)/(BLUE + RED) [66]

References

1. Secretariat of the Convention on Biological Diversity. Global Biodiversity Outlook 5; Secretariat of the Convention on Biological Diversity: Montreal, Canada, 2020.
2. Breidenbach, J.; Waser, L.T.; Debella-Gilo, M.; Schumacher, J.; Rahlf, J.; Hauglin, M.; Puliti, S.; Astrup, R. National Mapping and Estimation of Forest Area by Dominant Tree Species Using Sentinel-2 Data. Can. J. For. Res. 2021, 51, 365–379.
3. Immitzer, M.; Vuolo, F.; Atzberger, C. First Experience with Sentinel-2 Data for Crop and Tree Species Classifications in Central Europe. Remote Sens. 2016, 8, 166.
4. Maschler, J.; Atzberger, C.; Immitzer, M. Individual Tree Crown Segmentation and Classification of 13 Tree Species Using Airborne Hyperspectral Data. Remote Sens. 2018, 10, 1218.
5. Grybas, H.; Congalton, R.G. A Comparison of Multi-Temporal RGB and Multispectral UAS Imagery for Tree Species Classification in Heterogeneous New Hampshire Forests. Remote Sens. 2021, 13, 2631.
6. Immitzer, M.; Neuwirth, M.; Böck, S.; Brenner, H.; Vuolo, F.; Atzberger, C. Optimal Input Features for Tree Species Classification in Central Europe Based on Multi-Temporal Sentinel-2 Data. Remote Sens. 2019, 11, 2599.
7. Bjerreskov, K.S.; Nord-Larsen, T.; Fensholt, R. Classification of Nemoral Forests with Fusion of Multi-Temporal Sentinel-1 and 2 Data. Remote Sens. 2021, 13, 950.
8. Hościło, A.; Lewandowska, A. Mapping Forest Type and Tree Species on a Regional Scale Using Multi-Temporal Sentinel-2 Data. Remote Sens. 2019, 11, 929.
9. Kohrs, R.A.; Lazzara, M.A.; Robaidek, J.O.; Santek, D.A.; Knuth, S.L. Global Satellite Composites—20 Years of Evolution. Atmos. Res. 2014, 135–136, 8–34.
10. Vuolo, F.; Ng, W.-T.; Atzberger, C. Smoothing and Gap-Filling of High Resolution Multi-Spectral Time Series: Example of Landsat Data. Int. J. Appl. Earth Obs. Geoinf. 2017, 57, 202–213.
11. Moreno-Martínez, Á.; Izquierdo-Verdiguier, E.; Maneta, M.P.; Camps-Valls, G.; Robinson, N.; Muñoz-Marí, J.; Sedano, F.; Clinton, N.; Running, S.W. Multispectral High Resolution Sensor Fusion for Smoothing and Gap-Filling in the Cloud. Remote Sens. Environ. 2020, 247, 111901.
12. Griffiths, P.; Nendel, C.; Hostert, P. Intra-Annual Reflectance Composites from Sentinel-2 and Landsat for National-Scale Crop and Land Cover Mapping. Remote Sens. Environ. 2019, 220, 135–151.
13. Bauer-Marschallinger, B.; Cao, S.; Navacchi, C.; Freeman, V.; Reuß, F.; Geudtner, D.; Rommen, B.; Vega, F.C.; Snoeij, P.; Attema, E.; et al. The Normalised Sentinel-1 Global Backscatter Model, Mapping Earth’s Land Surface with C-Band Microwaves. Sci. Data 2021, 8, 277.
14. Dostálová, A.; Lang, M.; Ivanovs, J.; Waser, L.T.; Wagner, W. European Wide Forest Classification Based on Sentinel-1 Data. Remote Sens. 2021, 13, 337.
15. Waser, L.T.; Rüetschi, M.; Psomas, A.; Small, D.; Rehush, N. Mapping Dominant Leaf Type Based on Combined Sentinel-1/-2 Data—Challenges for Mountainous Countries. ISPRS J. Photogramm. Remote Sens. 2021, 180, 209–226.
16. Rüetschi, M.; Schaepman, M.; Small, D. Using Multitemporal Sentinel-1 C-Band Backscatter to Monitor Phenology and Classify Deciduous and Coniferous Forests in Northern Switzerland. Remote Sens. 2017, 10, 55.
17. Udali, A.; Lingua, E.; Persson, H.J. Assessing Forest Type and Tree Species Classification Using Sentinel-1 C-Band SAR Data in Southern Sweden. Remote Sens. 2021, 13, 3237.
18. Tran, A.T.; Nguyen, K.A.; Liou, Y.A.; Le, M.H.; Vu, V.T.; Nguyen, D.D. Classification and Observed Seasonal Phenology of Broadleaf Deciduous Forests in a Tropical Region by Using Multitemporal Sentinel-1A and Landsat 8 Data. Forests 2021, 12, 235.
19. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
20. Mrkvicka, A.; Drozdowski, I.; Brenner, H. Kernzonen im Biosphärenpark Wienerwald—Urwälder von morgen. Wiss. Mitt. Aus Niederösterreichischen Landesmus. 2014, 25, 41–88.
21. Nkosi, S.E.; Adam, E.; Barrett, A.S.; Brown, L.R. Mapping the Spatial Distribution of Tree Species Selected by Elephants (Loxodonta africana) in Venetia-Limpopo Nature Reserve Using Sentinel-2 Imagery. Appl. Geomat. 2021, 13, 701–713.
22. Louis, J.; Debaecker, V.; Pflug, B.; Main-Knorn, M.; Bieniarz, J.; Mueller-Wilm, U.; Cadau, E.; Gascon, F. Sentinel-2 Sen2Cor: L2A Processor for Users. In Proceedings of the Living Planet Symposium 2016, Prague, Czech Republic, 9–13 May 2016; Spacebooks Online: 2016; pp. 1–8.
  23. Myneni, R.B.; Nemani, R.R.; Shabanov, N.V.; Knyazikhin, Y.; Morisette, J.T.; Privette, J.L.; Running, S.W. LAI and FPAR. Available online: https://cce.nasa.gov/mtg2008_ab_presentations/LAI-FPAR_Myneni_whitepaper.pdf (accessed on 4 May 2022).
  24. Watson, D.J. Comparative Physiological Studies on the Growth of Field Crops: I. Variation in Net Assimilation Rate and Leaf Area between Species and Varieties, and within and between Years. Ann. Bot. 1947, 11, 41–76. [Google Scholar] [CrossRef]
  25. EODC GmbH Austrian Data Cube. Available online: https://acube.eodc.eu/ (accessed on 1 April 2022).
  26. Le Toan, T.; Beaudoin, A.; Riom, J.; Guyon, D. Relating Forest Biomass to SAR Data. IEEE Trans. Geosci. Remote Sens. 1992, 30, 403–411. [Google Scholar] [CrossRef]
  27. Vreugdenhil, M.; Wagner, W.; Bauer-Marschallinger, B.; Pfeil, I.; Teubner, I.; Rüdiger, C.; Strauss, P. Sensitivity of Sentinel-1 Backscatter to Vegetation Dynamics: An Austrian Case Study. Remote Sens. 2018, 10, 1396. [Google Scholar] [CrossRef] [Green Version]
  28. Dostálová, A.; Wagner, W.; Milenković, M.; Hollaus, M. Annual Seasonality in Sentinel-1 Signal for Forest Mapping and Forest Type Classification. Int. J. Remote Sens. 2018, 39, 7738–7760. [Google Scholar] [CrossRef]
  29. Frison, P.-L.; Fruneau, B.; Kmiha, S.; Soudani, K.; Dufrêne, E.; Toan, T.L.; Koleck, T.; Villard, L.; Mougin, E.; Rudant, J.-P. Potential of Sentinel-1 Data for Monitoring Temperate Mixed Forest Phenology. Remote Sens. 2018, 10, 2049. [Google Scholar] [CrossRef] [Green Version]
  30. Ahern, F.J.; Leckie, D.J.; Drieman, J.A. Seasonal Changes in Relative C-Band Backscatter of Northern Forest Cover Types. IEEE Trans. Geosci. Remote Sens. 1993, 31, 668–680. [Google Scholar] [CrossRef]
  31. Zeileis, A.; Leisch, F.; Hornik, K.; Kleiber, C. Strucchange: An R Package for Testing for Structural Change in Linear Regression Models. J. Stat. Softw. 2002, 7, 1–38. [Google Scholar] [CrossRef] [Green Version]
  32. Schlaffer, S.; Matgen, P.; Hollaus, M.; Wagner, W. Flood Detection from Multi-Temporal SAR Data Using Harmonic Analysis and Change Detection. Int. J. Appl. Earth Obs. Geoinf. 2015, 38, 15–24. [Google Scholar] [CrossRef]
  33. Einzmann, K.; Immitzer, M.; Böck, S.; Bauer, O.; Schmitt, A.; Atzberger, C. Windthrow Detection in European Forests with Very High-Resolution Optical Data. Forests 2017, 8, 21. [Google Scholar] [CrossRef] [Green Version]
  34. Immitzer, M.; Böck, S.; Einzmann, K.; Vuolo, F.; Pinnel, N.; Wallner, A.; Atzberger, C. Fractional Cover Mapping of Spruce and Pine at 1 Ha Resolution Combining Very High and Medium Spatial Resolution Satellite Imagery. Remote Sens. Environ. 2018, 204, 690–703. [Google Scholar] [CrossRef] [Green Version]
  35. Guyon, I.; Weston, J.; Barnhill, S. Gene Selection for Cancer Classification Using Support Vector Machines. Mach. Learn. 2002, 46, 389–422. [Google Scholar] [CrossRef]
  36. Wessel, M.; Brandmeier, M.; Tiede, D. Evaluation of Different Machine Learning Algorithms for Scalable Classification of Tree Types and Tree Species Based on Sentinel-2 Data. Remote Sens. 2018, 10, 1419. [Google Scholar] [CrossRef] [Green Version]
  37. Ma, M.; Liu, J.; Liu, M.; Zeng, J.; Li, Y. Tree Species Classification Based on Sentinel-2 Imagery and Random Forest Classifier in the Eastern Regions of the Qilian Mountains. Forests 2021, 12, 1736. [Google Scholar] [CrossRef]
  38. Kollert, A.; Bremer, M.; Löw, M.; Rutzinger, M. Exploring the Potential of Land Surface Phenology and Seasonal Cloud Free Composites of One Year of Sentinel-2 Imagery for Tree Species Mapping in a Mountainous Region. Int. J. Appl. Earth Obs. Geoinf. 2021, 94, 102208. [Google Scholar] [CrossRef]
  39. Persson, M.; Lindberg, E.; Reese, H. Tree Species Classification with Multi-Temporal Sentinel-2 Data. Remote Sens. 2018, 10, 1794. [Google Scholar] [CrossRef] [Green Version]
  40. Mngadi, M.; Odindi, J.; Peerbhay, K.; Mutanga, O. Examining the Effectiveness of Sentinel-1 and 2 Imagery for Commercial Forest Species Mapping. Geocarto Int. 2021, 36, 1–12. [Google Scholar] [CrossRef]
  41. Grabska, E.; Hostert, P.; Pflugmacher, D.; Ostapowicz, K. Forest Stand Species Mapping Using the Sentinel-2 Time Series. Remote Sens. 2019, 11, 1197. [Google Scholar] [CrossRef] [Green Version]
  42. Grabska, E.; Frantz, D.; Ostapowicz, K. Evaluation of Machine Learning Algorithms for Forest Stand Species Mapping Using Sentinel-2 Imagery and Environmental Data in the Polish Carpathians. Remote Sens. Environ. 2020, 251, 112103. [Google Scholar] [CrossRef]
  43. Karasiak, N.; Fauvel, M.; Dejoux, J.-F.; Monteil, C.; Sheeren, D. Optimal Dates for Deciduous Tree Species Mapping Using Full Years Sentinel-2 Time Series in South West France. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, V-3-2020, 469–476. [Google Scholar] [CrossRef]
  44. Hemmerling, J.; Pflugmacher, D.; Hostert, P. Mapping Temperate Forest Tree Species Using Dense Sentinel-2 Time Series. Remote Sens. Environ. 2021, 267, 112743. [Google Scholar] [CrossRef]
  45. Xie, B.; Cao, C.; Xu, M.; Duerler, R.S.; Yang, X.; Bashir, B.; Chen, Y.; Wang, K. Analysis of Regional Distribution of Tree Species Using Multi-Seasonal Sentinel-1&2 Imagery within Google Earth Engine. Forests 2021, 12, 565. [Google Scholar] [CrossRef]
  46. Schlerf, M.; Atzberger, C. Inversion of a Forest Reflectance Model to Estimate Structural Canopy Variables from Hyperspectral Remote Sensing Data. Remote Sens. Environ. 2006, 100, 281–294. [Google Scholar] [CrossRef]
  47. ESA Sentinel-2 L1C Data Quality Report. Available online: https://sentinel.esa.int/documents/247904/685211/Sentinel-2_L1C_Data_Quality_Report (accessed on 2 May 2022).
  48. Ghassemi, B.; Dujakovic, A.; Żółtak, M.; Immitzer, M.; Atzberger, C.; Vuolo, F. Designing a European-Wide Crop Type Mapping Approach Based on Machine Learning Algorithms Using LUCAS Field Survey and Sentinel-2 Data. Remote Sens. 2022, 14, 541. [Google Scholar] [CrossRef]
  49. Shahi, K.; Shafri, H.Z.M.; Taherzadeh, E.; Mansor, S.; Muniandy, R. A Novel Spectral Index to Automatically Extract Road Networks from WorldView-2 Satellite Imagery. Egypt. J. Remote Sens. Space Sci. 2015, 18, 27–33. [Google Scholar] [CrossRef] [Green Version]
  50. Datt, B. A New Reflectance Index for Remote Sensing of Chlorophyll Content in Higher Plants: Tests Using Eucalyptus Leaves. J. Plant Physiol. 1999, 154, 30–36. [Google Scholar] [CrossRef]
  51. Le Maire, G.; François, C.; Dufrêne, E. Towards Universal Broad Leaf Chlorophyll Indices Using PROSPECT Simulated Database and Hyperspectral Reflectance Measurements. Remote Sens. Environ. 2004, 89, 1–28. [Google Scholar] [CrossRef]
  52. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a Green Channel in Remote Sensing of Global Vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  53. Wulf, H.; Stuhler, S. Sentinel-2: Land Cover, Preliminary User Feedback on Sentinel-2a Data. In Proceedings of the Sentinel-2A Expert Users Technical Meeting, Frascati, Italy, 27 February–1 March 2015; pp. 29–30. [Google Scholar]
  54. Vogelmann, J.E.; Rock, B.N. Spectral Characterization of Suspected Acid Deposition Damage in Red Spruce (Picea Rubens) Stands from Vermont. In Proceedings of the Airborne Imaging Spectrometer Data Anal. Workshop, Pasadena, CA, USA, 8–10 April 1985. [Google Scholar]
  55. Radoux, J.; Chomé, G.; Jacques, D.; Waldner, F.; Bellemans, N.; Matton, N.; Lamarche, C.; d’Andrimont, R.; Defourny, P. Sentinel-2’s Potential for Sub-Pixel Landscape Feature Detection. Remote Sens. 2016, 8, 488. [Google Scholar] [CrossRef] [Green Version]
  56. Van Deventer, A.P.; Ward, A.D.; Gowda, P.H.; Lyon, J.G. Using Thematic Mapper Data to Identify Contrasting Soil Plains and Tillage Practices. Photogramm. Eng. Remote Sens. 1997, 63, 87–93. [Google Scholar]
  57. Tucker, C.J. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef] [Green Version]
  58. Gao, B. NDWI—A Normalized Difference Water Index for Remote Sensing of Vegetation Liquid Water from Space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  59. Lacaux, J.P.; Tourre, Y.M.; Vignolles, C.; Ndione, J.A.; Lafaye, M. Classification of Ponds from High-Spatial Resolution Remote Sensing: Application to Rift Valley Fever Epidemics in Senegal. Remote Sens. Environ. 2007, 106, 66–74. [Google Scholar] [CrossRef]
  60. Filella, I.; Penuelas, J. The Red Edge Position and Shape as Indicators of Plant Chlorophyll Content, Biomass and Hydric Status. Int. J. Remote Sens. 1994, 15, 1459–1470. [Google Scholar] [CrossRef]
  61. Jacques, D.C.; Kergoat, L.; Hiernaux, P.; Mougin, E.; Defourny, P. Monitoring Dry Vegetation Masses in Semi-Arid Areas with MODIS SWIR Bands. Remote Sens. Environ. 2014, 153, 40–49. [Google Scholar] [CrossRef]
  62. Chen, P.-F.; Nicolas, T.; Wang, J.-H.; Philippe, V.; Huang, W.-J.; Li, B.-G. New Index for Crop Canopy Fresh Biomass Estimation. Spectrosc. Spectr. Anal. 2010, 30, 512–517. [Google Scholar]
  63. Huete, A.R. A Soil-Adjusted Vegetation Index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  64. Lichtenthaler, H.K.; Lang, M.; Sowinska, M.; Heisel, F.; Miehé, J.A. Detection of Vegetation Stress Via a New High Resolution Fluorescence Imaging System. J. Plant Physiol. 1996, 148, 599–612. [Google Scholar] [CrossRef]
  65. Blackburn, G.A. Quantifying Chlorophylls and Caroteniods at Leaf and Canopy Scales. Remote Sens. Environ. 1998, 66, 273–285. [Google Scholar] [CrossRef]
  66. Domenech, E.; Mallet, C. Change Detection in High Resolution Land Use/Land Cover Geodatabases (at Object Level); Official Publication No. 64; EuroSDR: Leuven, Belgium, 2014. [Google Scholar]
Figure 1. Overview of the Biosphere Reserve Wienerwald in the southwestern area of Vienna as well as the 12 classes of tree species reference samples. Background: Color Infrared composite of Sentinel-2.
Figure 2. Importance plot of the remaining variables of Test case 3. The two Sentinel-1 parameters 20180618_20180630_VH and 20180618_20180630_CPR (in red letters) remain in the classification model.
Figure 3. Overall accuracy of models using monotemporal Sentinel-2 (S2) scenes, as well as two seasonal selections using the three least-accurate (LAS) and the three most-accurate (MAS) S2 scenes, each with and without combination with Sentinel-1 (S1). Also shown are the results obtained using all available S2 scenes, with and without S1. The red horizontal line marks the result of the model based on S1 data only.
Figure 4. F1-scores of Test cases 1 to 3 (a) and of all variants of Test cases 6 to 9 (b); for the abbreviations of tree species names see Figure 1; definitions of the test cases are provided in Table 5.
Table 1. Summary of the Sentinel-2 (S2) scenes of the 2018 vegetation period used for classification.
S2 Satellite | Date | Orbit | Sun Zenith Angle | Sun Azimuth Angle
B | 8 April 2018 | 79 | 43.02 | 157.29
B | 21 April 2018 | 122 | 37.72 | 160.39
A | 6 May 2018 | 122 | 33.06 | 159.37
A | 2 July 2018 | 79 | 28.23 | 147.73
B | 9 August 2018 | 122 | 34.38 | 155.97
A | 21 August 2018 | 79 | 38.49 | 154.81
B | 29 August 2018 | 122 | 40.40 | 160.44
A | 13 September 2018 | 122 | 45.59 | 163.93
B | 18 September 2018 | 122 | 47.41 | 164.98
B | 28 September 2018 | 122 | 51.10 | 166.96
A | 30 September 2018 | 79 | 52.25 | 164.21
B | 5 October 2018 | 79 | 54.08 | 165.12
A | 10 October 2018 | 79 | 55.90 | 165.94
A | 30 October 2018 | 79 | 62.82 | 168.14
Table 2. Summary of the Sentinel-1 temporal average backscatter parameters of the 2018 vegetation period used for classification.
Temporal Average Backscatter | Leaf-Off | Leaf-On
VH | 20180314_20180326_VH | 20180618_20180630_VH
VV | 20180314_20180326_VV | 20180618_20180630_VV
VH/VV | n/a | 20180618_20180630_CPR
Table 3. Summary of the Sentinel-1 phenological parameters of the 2018 vegetation period used for classification.
Backscatter ratio (VH or VV) | Parameter
Leaf-on/Leaf-off ratio | Rat_Leaf_on_off
Phenology (VH or VV) | Parameter
Start of season (day of the year) | sos_doy
End of season (day of the year) | eos_doy
Length of season (days) | sos_doy
Winter correlation | correlation_winter
Winter slope | slope_winter
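The leaf-on/leaf-off ratio of Table 3 builds on the temporal-average backscatter parameters of Table 2. A minimal sketch of how such parameters could be derived, assuming backscatter is stored in dB (the sample values and function name are illustrative, not from the paper):

```python
import numpy as np

def temporal_average_db(sigma0_db):
    """Temporal average of backscatter given in dB.

    Averaging is done in linear power space and converted back to dB,
    the usual convention for SAR time series.
    """
    linear = 10.0 ** (np.asarray(sigma0_db, dtype=float) / 10.0)
    return 10.0 * np.log10(linear.mean(axis=0))

# Illustrative per-pixel acquisitions (dB) for the two windows of Table 2.
vh_leaf_off = [-16.1, -15.8, -16.4]   # 14-26 March 2018
vh_leaf_on = [-13.2, -13.5, -12.9]    # 18-30 June 2018
vv_leaf_on = [-7.4, -7.1, -7.6]

vh_off = temporal_average_db(vh_leaf_off)
vh_on = temporal_average_db(vh_leaf_on)
vv_on = temporal_average_db(vv_leaf_on)

cpr = vh_on - vv_on               # cross-pol ratio VH/VV, a dB difference
rat_leaf_on_off = vh_on - vh_off  # leaf-on/leaf-off ratio (Rat_Leaf_on_off), in dB
```

Since both quantities are ratios of linear backscatter, they reduce to dB differences; deciduous stands typically show a positive VH leaf-on/leaf-off difference.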
Table 4. Summary of the Sentinel-1 harmonic parameters of the 2018 vegetation period used for classification.
Parameter | Polarisation | Ascending (Orbit 73) | Descending (Orbit 22)
HPAR
Cosine 1 | VH | HPAR-C1_2018_VH_A073 | HPAR-C1_2018_VH_D022
Cosine 1 | VV | HPAR-C1_2018_VV_A073 | HPAR-C1_2018_VV_D022
Cosine 2 | VH | HPAR-C2_2018_VH_A073 | HPAR-C2_2018_VH_D022
Cosine 2 | VV | HPAR-C2_2018_VV_A073 | HPAR-C2_2018_VV_D022
Cosine 3 | VH | HPAR-C3_2018_VH_A073 | HPAR-C3_2018_VH_D022
Cosine 3 | VV | HPAR-C3_2018_VV_A073 | HPAR-C3_2018_VV_D022
Sine 1 | VH | HPAR-S1_2018_VH_A073 | HPAR-S1_2018_VH_D022
Sine 1 | VV | HPAR-S1_2018_VV_A073 | HPAR-S1_2018_VV_D022
Sine 2 | VH | HPAR-S2_2018_VH_A073 | HPAR-S2_2018_VH_D022
Sine 2 | VV | HPAR-S2_2018_VV_A073 | HPAR-S2_2018_VV_D022
Sine 3 | VH | HPAR-S3_2018_VH_A073 | HPAR-S3_2018_VH_D022
Sine 3 | VV | HPAR-S3_2018_VV_A073 | HPAR-S3_2018_VV_D022
HPAR temporal average
Temporal average | VH | HPAR-M0_2018_VH_A073 | HPAR-M0_2018_VH_D022
Temporal average | VV | HPAR-M0_2018_VV_A073 | HPAR-M0_2018_VV_D022
HPAR model error
Model error | VH | HPAR-STD_2018_VH_A073 | HPAR-STD_2018_VH_D022
Model error | VV | HPAR-STD_2018_VV_A073 | HPAR-STD_2018_VV_D022
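The harmonic parameters (HPAR) of Table 4 describe a third-order harmonic fit to one year of backscatter: a temporal mean (M0), three cosine and three sine coefficients (C1–C3, S1–S3), and the residual model error (STD). A minimal least-squares sketch of such a fit, using a synthetic series in place of real Sentinel-1 observations (the exact fitting procedure of the paper is not reproduced here):

```python
import numpy as np

def fit_harmonics(doy, sigma0_db, n_harmonics=3, period=365.0):
    """Least-squares harmonic regression of one year of backscatter.

    Returns the temporal mean (M0), cosine coefficients (C1..Cn), sine
    coefficients (S1..Sn) and the residual standard deviation (STD),
    i.e., the quantities listed per orbit and polarisation in Table 4.
    """
    doy = np.asarray(doy, dtype=float)
    y = np.asarray(sigma0_db, dtype=float)
    columns = [np.ones_like(doy)]  # intercept column gives M0
    for k in range(1, n_harmonics + 1):
        columns.append(np.cos(2.0 * np.pi * k * doy / period))
        columns.append(np.sin(2.0 * np.pi * k * doy / period))
    design = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    residuals = y - design @ coef
    # coef layout: [M0, C1, S1, C2, S2, C3, S3]
    return coef[0], coef[1::2], coef[2::2], residuals.std()

# Synthetic VH series: one annual cycle around -14 dB plus noise.
rng = np.random.default_rng(0)
doy = np.arange(5, 365, 12)  # roughly the 12-day repeat of one S1 orbit
signal = -14.0 + 1.5 * np.cos(2.0 * np.pi * (doy - 200) / 365.0)
m0, cos_coef, sin_coef, std = fit_harmonics(doy, signal + rng.normal(0.0, 0.2, doy.size))
```

The amplitude of the first harmonic, sqrt(C1² + S1²), recovers the seasonal swing of the series, which is what separates deciduous from coniferous backscatter seasonality.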
Table 5. Overview of the nine test cases evaluated using different combinations of Sentinel-1 (S1) and Sentinel-2 (S2) data.
Test Case | Acronym | Features | Comment
1 | S1 | 43 | multitemporal S1 parameters
2 | S2 (MULTI) | 574 | multitemporal S2 image data (14 scenes × 41 features)
3 | S1 + S2 (MULTI) | 617 | multitemporal S1 parameters + multitemporal S2 image data
4 | S2 (MONO) | 41 | image data of a single S2 scene
5 | S1 + S2 (MONO) | 84 | multitemporal S1 parameters + image data of a single S2 scene
6 | S2 (MAS) | 123 | image data of the three Most Accurate S2 scenes of the growing season
7 | S1 + S2 (MAS) | 166 | multitemporal S1 parameters + image data of the three Most Accurate S2 scenes
8 | S2 (LAS) | 123 | image data of the three Least Accurate S2 scenes of the growing season
9 | S1 + S2 (LAS) | 166 | multitemporal S1 parameters + image data of the three Least Accurate S2 scenes
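Each test case passes one feature stack to Breiman's random forest and reads the accuracy from the out-of-bag samples. A minimal sketch with scikit-learn (the paper's exact implementation and hyperparameters are not given here; the feature counts mirror Table 5, the data are synthetic):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for the Table 5 feature stacks: 43 S1 features and the
# 41 features of a single S2 scene; integer labels play the role of species.
rng = np.random.default_rng(42)
n_samples = 300
labels = rng.integers(0, 3, n_samples)
s1_features = rng.normal(0.0, 1.0, (n_samples, 43)) + 0.3 * labels[:, None]
s2_features = rng.normal(0.0, 1.0, (n_samples, 41)) + 0.8 * labels[:, None]

def oob_accuracy(features, targets):
    """Out-of-bag accuracy of a random forest, the score behind Tables 6-8."""
    forest = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    forest.fit(features, targets)
    return forest.oob_score_

acc_s1 = oob_accuracy(s1_features, labels)                               # cf. test case 1
acc_s2 = oob_accuracy(s2_features, labels)                               # cf. test case 4
acc_s1_s2 = oob_accuracy(np.hstack([s1_features, s2_features]), labels)  # cf. test case 5
```

Because every tree is trained on a bootstrap sample, the out-of-bag score is an internal cross-validation and no separate hold-out set is needed.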
Table 6. Confusion matrix based on the out-of-bag result of Test case 1, using only Sentinel-1 data. UA: user’s accuracy, PA: producer’s accuracy, OA: overall accuracy; for the abbreviations of tree species names see Figure 1.
[Per-cell confusion counts are not recoverable from the extracted text; the class-wise results are reproduced below.]
Class | Reference n | PA | UA | F1-Score
FS | 300 | 78.7% | 46.5% | 0.584
AG | 52 | 0.0% | NA | NA
FE | 87 | 6.9% | 31.6% | 0.113
QU | 230 | 60.9% | 50.2% | 0.550
PR | 25 | 4.0% | 50.0% | 0.074
CP | 71 | 5.6% | 50.0% | 0.101
AC | 55 | 0.0% | NA | NA
PA | 135 | 88.1% | 76.8% | 0.821
PN | 143 | 79.7% | 71.7% | 0.755
PS | 79 | 55.7% | 65.7% | 0.603
LD | 50 | 62.0% | 63.3% | 0.626
PM | 56 | 33.9% | 55.9% | 0.422
OA: 55.7%; Kappa: 0.469
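The accuracy figures reported in Tables 6–8 (UA, PA, F1, OA, Kappa) all derive from the confusion matrix. A small sketch with the matrix oriented as in the tables (classification in rows, reference in columns); the example counts are illustrative, not taken from the paper:

```python
import numpy as np

def accuracy_metrics(confusion):
    """UA, PA, F1, OA and Cohen's kappa from a confusion matrix laid out
    with classification in rows and reference in columns.

    Classes that are never predicted yield NaN user's accuracy, which
    corresponds to the NA entries of Table 6.
    """
    cm = np.asarray(confusion, dtype=float)
    diagonal = np.diag(cm)
    with np.errstate(invalid="ignore", divide="ignore"):
        ua = diagonal / cm.sum(axis=1)   # user's accuracy (per classified class)
        pa = diagonal / cm.sum(axis=0)   # producer's accuracy (per reference class)
        f1 = 2.0 * ua * pa / (ua + pa)   # harmonic mean of UA and PA
    oa = diagonal.sum() / cm.sum()
    # Chance agreement from the marginal totals, for Cohen's kappa.
    chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / cm.sum() ** 2
    kappa = (oa - chance) / (1.0 - chance)
    return ua, pa, f1, oa, kappa

# Illustrative two-class matrix (counts are not from the paper).
ua, pa, f1, oa, kappa = accuracy_metrics([[80, 10], [20, 90]])
```

For the 2 × 2 example, OA is 170/200 = 0.85 and, with a chance agreement of 0.5, kappa is 0.70.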
Table 7. Confusion matrix based on the out-of-bag result of Test case 2, using only Sentinel-2 data. UA: user’s accuracy, PA: producer’s accuracy, OA: overall accuracy; for the abbreviations of tree species names see Figure 1.
[Per-cell confusion counts are not recoverable from the extracted text; the class-wise results are reproduced below.]
Class | Reference n | PA | UA | F1-Score
FS | 300 | 90.7% | 75.1% | 0.822
AG | 52 | 80.8% | 85.7% | 0.832
FE | 87 | 73.6% | 71.1% | 0.723
QU | 230 | 84.8% | 84.8% | 0.848
PR | 25 | 48.0% | 100.0% | 0.649
CP | 71 | 56.3% | 83.3% | 0.672
AC | 55 | 54.5% | 81.1% | 0.652
PA | 135 | 94.1% | 93.4% | 0.937
PN | 143 | 93.0% | 94.3% | 0.937
PS | 79 | 91.1% | 83.7% | 0.873
LD | 50 | 72.0% | 80.0% | 0.758
PM | 56 | 80.4% | 95.7% | 0.874
OA: 83.2%; Kappa: 0.806
Table 8. Confusion matrix based on the out-of-bag result of Test case 3, using Sentinel-1 and Sentinel-2 data. UA: user’s accuracy, PA: producer’s accuracy, OA: overall accuracy; for the abbreviations of tree species names see Figure 1.
[Per-cell confusion counts are not recoverable from the extracted text; the class-wise results are reproduced below.]
Class | Reference n | PA | UA | F1-Score
FS | 300 | 90.3% | 75.9% | 0.825
AG | 52 | 82.7% | 95.6% | 0.887
FE | 87 | 73.6% | 72.7% | 0.731
QU | 230 | 83.9% | 83.2% | 0.835
PR | 25 | 40.0% | 100.0% | 0.571
CP | 71 | 62.0% | 78.6% | 0.693
AC | 55 | 52.7% | 82.9% | 0.644
PA | 135 | 95.6% | 92.1% | 0.938
PN | 143 | 94.4% | 93.1% | 0.938
PS | 79 | 88.6% | 87.5% | 0.881
LD | 50 | 80.0% | 83.3% | 0.816
PM | 56 | 82.1% | 97.9% | 0.893
OA: 83.7%; Kappa: 0.811
Table 9. Summary of previous studies using Sentinel-1 (S1), Sentinel-2 (S2), and both combined for tree species classification and their achieved overall accuracies (OAs).
Satellite | No. of Species | Species Names | OA | Reference
S1 | 3 | Quercus spp., Fagus sylvatica, Picea abies | 72% | [16]
S1 | 4 | Quercus robur, Betula spp., Picea abies, Pinus sylvestris | 66% | [17]
S1 | 12 | Fagus sylvatica, Alnus glutinosa, Fraxinus excelsior, Quercus spp., Prunus spp., Carpinus betulus, Acer spp., Picea abies, Pinus nigra, Pinus sylvestris, Larix decidua, Pseudotsuga menziesii | 58% | Table 6
S2 | 4 | Fagus sylvatica, Quercus spp., other broadleaf trees, coniferous trees | 88% | [36]
S2 | 4 | Sabina przewalskii, Picea crassifolia, Betula spp., Populus spp. | 90% | [37]
S2 | 5 | Larix spp., Pinus spp., Pinus mugo, Abies alba/Picea abies, broadleaf trees | 84% | [38]
S2 | 5 | Picea abies, Pinus sylvestris, Larix × marschlinsii, Betula sp., Quercus robur | 88% | [39]
S2 | 7 | Picea sp., Pinus sp., Larix sp., Abies sp., Fagus sp., Quercus sp., other broadleaf trees | 66% | [3]
S2 | 7 | Acacia mearnsii, Eucalyptus dunnii, Eucalyptus grandis, Eucalyptus mix, Pinus tecunumanii, Pinus elliottii, Pinus taeda | 84% | [40]
S2 | 8 | Fagus sylvatica, Quercus spp., Alnus spp., Betula pendula, Picea abies, Pinus sylvestris, Abies alba, Larix decidua | 82% | [8]
S2 | 9 | Fagus sylvatica, Betula pendula, Carpinus betulus, Abies alba, Acer pseudoplatanus, Larix decidua (European larch), Alnus incana, Pinus sylvestris, Picea abies | 92% | [41]
S2 | 11 | Alnus spp., Acer pseudoplatanus, Fagus sylvatica, Betula pendula, Carpinus betulus, Quercus spp., Picea abies, Pinus sylvestris, Larix decidua, Pseudotsuga menziesii | 87% | [42]
S2 | 12 | Fagus sylvatica, Alnus glutinosa, Fraxinus excelsior, Quercus spp., Prunus spp., Carpinus betulus, Acer spp., Picea abies, Pinus nigra, Pinus sylvestris, Larix decidua, Pseudotsuga menziesii | 83% | Table 7
S2 | 12 | Fagus sylvatica, Alnus glutinosa, Fraxinus excelsior, Quercus spp., Prunus spp., Carpinus betulus, Acer spp., Picea abies, Pinus nigra, Pinus sylvestris, Larix decidua, Pseudotsuga menziesii | 90% | [6]
S2 | 12 | Betula pendula, Quercus robur/pubescens/petraea, Quercus rubra, Populus spp., Fraxinus excelsior, Robinia pseudoacacia, Salix spp., Eucalyptus spp., Pinus nigra subsp. laricio, Pinus pinaster, Pinus nigra, Abies alba, Pseudotsuga menziesii, Cupressus spp. | 0.90 * | [43]
S2 | 17 | Fagus sylvatica, Alnus spp., Quercus petraea/robur, Quercus rubra, Betula pendula, Robinia pseudoacacia, Tilia cordata, Acer pseudoplatanus, Fraxinus excelsior, Populus spp., Carpinus betulus, Picea abies, Larix spp., Pseudotsuga menziesii, Pinus sylvestris, Pinus strobus, Pinus nigra | 96% ** | [44]
S1 + LS | 4 | Shorea siamensis, Shorea obtusa, Dipterocarpus tuberculatus, semi-evergreen/evergreen | 79% | [18]
S1 + S2 | 6 | Fagus sylvatica, Quercus spp., other broadleaves, Picea sp., Pinus sp., other conifers | 63% | [7]
S1 + S2 | 6 | Quercus mongolica, Betula spp., Populus spp., Armeniaca sibirica, Larix spp., Pinus tabulaeformis | 78% | [45]
S1 + S2 | 7 | Acacia mearnsii, Eucalyptus dunnii, Eucalyptus grandis, Eucalyptus mix, Pinus tecunumanii, Pinus elliottii, Pinus taeda | 88% | [40]
S1 + S2 | 12 | Fagus sylvatica, Alnus glutinosa, Fraxinus excelsior, Quercus spp., Prunus spp., Carpinus betulus, Acer spp., Picea abies, Pinus nigra, Pinus sylvestris, Larix decidua, Pseudotsuga menziesii | 84% | Table 8
* mean F1-score instead of OA; ** OA influenced by high dominance of one class.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Lechner, M.; Dostálová, A.; Hollaus, M.; Atzberger, C.; Immitzer, M. Combination of Sentinel-1 and Sentinel-2 Data for Tree Species Classification in a Central European Biosphere Reserve. Remote Sens. 2022, 14, 2687. https://doi.org/10.3390/rs14112687
