Article

Forest Gap Extraction Based on Convolutional Neural Networks and Sentinel-2 Images

1 Co-Innovation Center for Sustainable Forestry in Southern China, Nanjing Forestry University, Nanjing 210037, China
2 College of Forestry, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(11), 2146; https://doi.org/10.3390/f14112146
Submission received: 4 October 2023 / Revised: 20 October 2023 / Accepted: 27 October 2023 / Published: 28 October 2023
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract

As a type of small-scale disturbance, the forest gap and its accurate extraction are of great significance for monitoring long-term forest dynamics, choosing forest recovery modes and predicting forest recovery paces. Currently, airborne LiDAR and high-resolution multi-spectral data are commonly used to accurately classify forest gaps, but they are costly to acquire and have limited temporal and spatial availability. In contrast, Sentinel-2 multi-spectral data with a 10 m spatial resolution overcomes these drawbacks in forest gap extraction. In this work, an integrated framework that combines multi-source remote sensing, machine learning and deep learning to extract forest gaps over wide regions was proposed and tested in three sites. First, LiDAR, Sentinel series data and the random forest (RF) algorithm were synergized to produce a canopy height product in the model training site. On this basis, samples for the forest canopy, forest gap and non-such classes were identified from the LiDAR-derived canopy height model (CHM) and the Sentinel-based canopy height inversion (HI) data to train forest gap extraction models by applying the Deep Forest (DF) and Convolutional Neural Network (CNN) algorithms, followed by a comparison of the accuracy and transferability of the four models (DF-CHM, DF-HI, CNN-CHM and CNN-HI). The results indicated that the R² and RMSE of the Sentinel-based canopy height retrievals were 0.63 and 7.85 m, respectively, and the differences in mean height and standard deviation between the HI and CHM data were 0.03 m and 4.7 m, respectively. There was a spatial agreement of about 98.60% between the HI-identified samples and the CHM-identified samples, with an agreement of 54.89% for the forest gap class. The CNN-HI model had the highest accuracy in both transfer learning test sites, with overall accuracies (OA) of 0.85 and 0.87 and Kappa coefficients of 0.78 and 0.81, respectively, proving that it has good transferability. Conversely, the DF-based models generally gave poorer accuracy and transferability. This study demonstrates that combining Sentinel-2 multi-spectral data and the CNN algorithm is feasible and effective for forest gap extraction over wide regions.

1. Introduction

Diverse forest disturbances occurring at different spatial scales are prevalent in forests, constantly altering the species composition and stand structure of forests [1]. As a common small-scale disturbance, the forest gap shaped after a disturbance can substantially affect light intensity in the understory, which in turn changes the plant composition of forest communities, promotes forest community succession and ecosystem development, and contributes to the growth and regeneration of trees and ground cover [2,3,4,5,6]. Conceptually, a forest gap is defined as an open space of limited size in the forest canopy that occurs after the death of one or more trees due to fire, pests, disease, etc. [7,8,9,10]. It acts as one of the most important indicators of forest ecosystem structure dynamics [11,12], one of the most important stages of forest regeneration [13,14], and a core concept of forest cycle theory [15]. Therefore, forest-gap-related studies have become an indispensable part of studying long-term dynamics in forest ecosystems. Obviously, an accurate classification or extraction of forest gaps from complex forest environments is the prerequisite for such studies.
Traditional forest gap extraction is based on on-site surveys, which usually require massive manpower and material resources, while complex topographic conditions in remote and steep forested regions may also limit field access. Furthermore, this approach has additional shortcomings, such as small coverage and spatial discontinuity [7,16], and poor repeatability and timeliness over wide forested regions [17,18]. Thus, it is no longer prevalent in the current context of rapidly developing remote sensing technology [19]. At present, extracting forest gaps from airborne Light Detection And Ranging (LiDAR) point clouds with height thresholds, classifying forest gaps from high-resolution multi-spectral data, and synergizing the two types of data have already replaced the traditional field survey as the mainstream [20,21].
However, LiDAR data has the disadvantages of high acquisition cost and limited spatio-temporal coverage [22,23]. Related studies have shown that using LiDAR data for forest gap extraction leads to an underestimation of the forest gap area [24], and the chosen height threshold value also greatly affects the extraction accuracy [25,26]; therefore, airborne LiDAR is not suitable for classifying forest gaps over large areas. High-resolution multi-spectral data can support an accurate extraction of forest gaps through its spectral and textural features [27], but it also suffers from high acquisition costs. In comparison, medium-resolution multi-spectral data, e.g., Sentinel-2 MSI, offers multi-spectral and textural features of forest canopies that LiDAR data does not; in particular, its free data policy, rich data sources and high temporal resolution enable forest gap classification over wider regions and longer time frames [18]. However, not all medium-resolution data are applicable for extracting forest gaps. Since the definition of a forest gap depends strongly on its area or size, canopy openings that are either too large or too small cannot be regarded as forest gaps [28,29,30]. Hence, 30 m resolution or coarser multi-spectral data, e.g., Landsat, are not suitable for forest gap extraction compared with 10 m resolution data, because the area of a single Landsat pixel is close to the upper area threshold of a forest gap [31]. Although medium-resolution multi-spectral data cannot resolve small forest gaps, its advantages mentioned above are inspiring. Even so, only a limited number of studies have used medium-resolution multi-spectral data for forest gap classification [32,33], but they have shown the possibility of observing the effectiveness of forest management operations in a timely and low-cost manner. Although these studies have already demonstrated the ability and potential of medium-resolution multi-spectral data in extracting relatively large forest gaps, the extraction accuracy may be further improved if other proper classification features and advanced algorithms are applied.
Most forest gap extraction studies based on airborne LiDAR and high-resolution multi-spectral data have used the object-based image analysis (OBIA) approach to segment and then classify forest gaps [27,34,35]. These studies are premised on an accurate segmentation and extraction of forest gap boundaries, and therefore rely heavily on high-quality CHM data derived from high-accuracy LiDAR data; thus, their classification accuracy may be reduced in the absence of such CHM data [36]. From another perspective, the forest gap patches segmented from high-accuracy CHM data can be considered as reliable “samples” that support the subsequent forest gap classification combined with multi-spectral features. But when only multi-spectral data is used for segmentation, the reliability of these “samples” is relatively low, which would further accumulate subsequent forest gap classification errors. Therefore, we can try to use samples obtained from CHM data to build a multi-spectral image-based forest gap classification model, but we must consider the high cost and limited spatio-temporal availability of CHM data.
Fortunately, there is another way to derive canopy height data at a relatively low cost. Combining LiDAR-based CHM data or field-observed forest canopy height values, as the actual measurements for model training, with multi-spectral data and SAR data to invert forest canopy height over wide regions is one of the most widely used economical methods today [37]. Ghosh et al. [38] demonstrated the usefulness of Sentinel series data in canopy height inversion by combining Random Forest (RF) and Symbolic Regression models. Deng’s study [39] demonstrated that machine learning was more accurate than traditional canopy height inversion methods, such as the coherent magnitude method and the geopotential method, and overcame the limitation of relying on fully polarized data. Meanwhile, most studies on forest canopy height inversion using SAR data or other sources of 3D data have shown that the RF model was the most effective height inversion model among many machine learning algorithms [40,41]. It has also been observed that tree height correlates well with the backscattering coefficients and interferometric coherence features calculated from SAR data, the fractional vegetation cover (FVC) and the leaf area index (LAI); thus, these feature variables could be used to improve the accuracy of canopy height retrievals [38,42,43,44]. Additionally, the wetness component can reflect the density, developmental stage and moisture content of the vegetation canopy [45], helping to distinguish forest canopy from non-forest canopy areas.
In terms of classification algorithms, most current studies have used only machine learning algorithms, which have been demonstrated to be effective for forest gap classification based on both high- and medium-resolution multi-spectral data [18,27,32,35,36]. Additionally, several deep learning models, including VGG16, ResNet152V2, long short-term memory (LSTM) and 2D-CNN, have been widely used in multi-spectral image classification [46,47,48]. Among these, the CNN has been proven to perform well in computer vision tasks, such as image classification, object detection and segmentation, due to its ability to capture local spatial relationships through convolutional operations [49]. Further, since vegetation remote sensing has multi-temporal and multi-modal characteristics, combining data from multiple sensors or acquisition dates for vegetation analysis has often been a technical challenge. The modularity of the CNN framework facilitates the combination of multi-dimensional data, and thus offers significant advantages in vegetation-related remote sensing work, such as the detection of individual plants or the pixel-wise segmentation of vegetation classes, owing to its powerful ability to extract information from spatial data. In contrast to other machine learning algorithms, such as support vector machines and K-means, which require a feature selection process to avoid information redundancy, the CNN can filter and learn relevant features by iteratively optimizing the transformations during training [50]. For example, Boston et al. [51] used a CNN algorithm to implement land cover classification, and Li et al. [47] compared the accuracies of various CNN and LSTM models for crop classification; their results showed that CNN models consistently performed better. Additionally, combining high-resolution multi-spectral data with Mask R-CNN algorithms has great potential in extracting forest gaps [52]. Obviously, these machine-learning-based and deep-learning-based efforts have had varying degrees of success in different application fields, but they did not test the transferability of the proposed models in remote sensing image classifications, let alone forest gap classification. Based on these endeavors, we assume that medium-resolution multi-spectral data coupled with deep learning, such as 2D-CNN, also has such potential in forest gap extraction, which deserves further testing.
The major objective of this work was to propose a framework that integrates medium-resolution multi-spectral data and limited LiDAR data with deep learning algorithms to construct forest gap extraction models. Two major issues would be carefully tested in the current work: (1) whether the canopy height data retrieved from Sentinel series data can effectively provide samples for forest gap classification, and (2) whether the transferability of the proposed forest gap classification model is acceptable.

2. Materials and Methods

2.1. Study Sites

This study contains three study sites. The SC_CA site is located in the mountainous area of Santa Cruz, California, USA, with an area of 2896.10 ha; it contains The Forest of Nisene Marks State Park, with annual precipitation of about 548.64 mm, and the Soquel Demonstration State Forest, with annual precipitation of about 1207.2 mm. The SW_OR site is located in southwestern Oregon near Dunes City, with an area of 897.81 ha and annual precipitation of about 965 mm. The BC_WA site is located on Bear Creek Mountain in Washington State, with an area of 796.49 ha and annual precipitation of about 459 mm. Figure 1 shows the specific locations of the three study sites and the corresponding examples of the available active and passive remote sensing data or products.

2.2. Data Acquisition and Pre-Processing

2.2.1. LiDAR Data

The LiDAR data used in this study was downloaded from the open-source data website OpenTopography (https://portal.opentopography.org/ (accessed on 16 June 2023)). We used the High Resolution Topography near Santa Cruz, CA 2017 collection [53]; the survey was conducted on 12–14 October 2017, with a point density of 48.77 pts/m². Pre-processing of the data included de-noising and ground point classification using LiDAR360 V6.0.4.0 software.
The LiDAR data for the SC_CA site was of high quality, and its derived CHM was used as the training and validation data for building the canopy height inversion model and for assessing the effectiveness and reliability of the samples identified from the Sentinel-based height inversion.

2.2.2. Sentinel Series Data

Sentinel series images were downloaded from the ESA Copernicus Data Center (https://scihub.copernicus.eu/dhus/#/home (accessed on 16 June 2023)). Table 1 and Table 2 display the basic information of the involved Sentinel-1 and Sentinel-2 images, respectively. For model training purposes, the original Sentinel-1 and Sentinel-2 images were selected following the criterion that their acquisition dates be as close as possible to those of the LiDAR data, so as to avoid apparent forest status changes among the three temporally imperfectly consistent data types. As a result, the selected Sentinel-2 images were cloud-free, and the acquisition dates of all the Sentinel series images were within a 10-day time frame of the corresponding LiDAR data collection dates in the SC_CA site (Table 1 and Table 2). To adequately test the transferability of the established models in the other two sites, the Sentinel-2 images selected for classification there differed from the LiDAR data in acquisition month and year (Table 2).
The Sentinel-1 GRD data and a pair of SLC data (Table 1) were used to calculate the backscattering coefficients and the interferometric coherence features for the SC_CA site, respectively; therefore, the pre-processing flows of the two types of data were different (Figure 2).

2.2.3. Google Earth Maps

The temporally corresponding 1.0–1.5 m resolution Google Earth Maps were downloaded free of charge from the Google Earth Pro platform, and they were visually interpreted to create the reference data for validating the classification results of each site. All the Google Earth Maps were acquired within a 13-day time frame of the corresponding Sentinel-2 image acquisition dates (Table 3), and were geo-registered based on the Sentinel-2 images.

2.2.4. ALOS DEM Data

The ALOS DEM data (Table 4) were downloaded free of charge from the NASA Earthdata website (https://search.asf.alaska.edu/#/ (accessed on 16 June 2023)) at a resolution of 12.5 m, and were also resampled to 10 m.

2.3. Methods

2.3.1. Definition of Forest Gap

Clarifying the definition of a forest gap is the prerequisite for building a forest gap extraction model. Considering the objectives of this study and the characteristics of Sentinel series data, it was first necessary to develop a definition of the forest gap suitable for the data characteristics. There are still some controversies over the upper and lower limits of forest gap size; the currently widely accepted definition is that the maximum area of a forest gap does not exceed 1000 m² [28,31,54] and the minimum area is 4 m² [29,55], while the height of vegetation within the gap can be determined according to the height of the surrounding trees [8,56]. Combining this area range with the spatial resolution of the multi-spectral data used in this study, we finally defined a forest gap as an open space in the forest canopy with an upper area limit of 1000 m² and a lower area limit of 100 m², with the upper height limit of the vegetation under the forest gap determined by a relative height threshold according to the forest height situation in each study site.
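
At 10 m pixel size, this definition maps onto connected non-canopy regions of 1 to 10 pixels. The area screen could be sketched as follows, assuming connected-component labelling with SciPy (the paper does not state the tool actually used):

```python
import numpy as np
from scipy import ndimage

def area_screen(non_canopy_mask, min_px=1, max_px=10):
    """Split a boolean non-canopy mask into forest gap (1-10 pixels,
    i.e., 100-1000 m2 at 10 m resolution) and non-such (larger) areas."""
    labels, n = ndimage.label(non_canopy_mask)  # 4-connected regions
    sizes = ndimage.sum(non_canopy_mask, labels, index=np.arange(1, n + 1))
    gap_ids = np.flatnonzero((sizes >= min_px) & (sizes <= max_px)) + 1
    gap_mask = np.isin(labels, gap_ids)
    non_such_mask = non_canopy_mask & ~gap_mask
    return gap_mask, non_such_mask
```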

2.3.2. LiDAR-Based CHM Development

The DSM and DEM were computed from the pre-processed ALS point cloud data mentioned in Section 2.2.1, and a 1 m resolution CHM was derived by subtracting the DEM from the DSM. To match the spatial resolution of the multi-spectral data used in this work, the CHM data were accordingly resampled to 10 m resolution. We first filtered the CHM data by visual interpretation and cropped out areas with missing data to eliminate height errors in subsequent processing. Next, we averaged the cropped data over 10 × 10 m blocks to generate a 10 m resolution CHM.
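
A minimal NumPy sketch of this 10 × 10 block averaging, assuming the cropped/missing areas are marked as NaN (an assumption, not the authors' stated implementation):

```python
import numpy as np

def block_mean_chm(chm_1m: np.ndarray, factor: int = 10) -> np.ndarray:
    """Aggregate a 1 m CHM to a coarser grid by block averaging.

    chm_1m : 2D array of canopy heights at 1 m resolution; NaN marks
             cropped/missing cells so they are excluded from the mean.
    factor : fine cells per coarse cell side (10 -> 10 m output).
    """
    rows, cols = chm_1m.shape
    # Trim edges so the array divides evenly into factor x factor blocks.
    chm = chm_1m[: rows // factor * factor, : cols // factor * factor]
    blocks = chm.reshape(chm.shape[0] // factor, factor,
                         chm.shape[1] // factor, factor)
    # nanmean ignores missing 1 m cells inside each block.
    return np.nanmean(blocks, axis=(1, 3))
```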

2.3.3. Feature Extraction Based on Sentinel and DEM Data

We used Sentinel-1 GRD data to extract the backscattering coefficients, in decibels, in the VV and VH polarization directions, and a set of SLC data acquired at the same location at different times was used to calculate the interferometric coherence features.
Sentinel-2 data were used to calculate vegetation indices, tasseled cap transformation components and biophysical variables. The vegetation indices included the normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI) and the ratio vegetation index (RVI). The Sentinel-2 tasseled cap transformation components included the brightness, greenness and wetness, and their calculations followed Nedkov’s study [45]. The specific formulas for the above feature variables are as follows:
$$\mathrm{NDVI}=\frac{\rho_{NIR}-\rho_{R}}{\rho_{NIR}+\rho_{R}}$$

$$\mathrm{RVI}=\frac{\rho_{NIR}}{\rho_{R}}$$

$$\mathrm{EVI}=2.5\times\frac{\rho_{NIR}-\rho_{R}}{\rho_{NIR}+6.0\times\rho_{R}-7.5\times\rho_{B}+1}$$

$$\sigma^{0}=\frac{|DN|^{2}}{K}\sin\alpha$$

$$\sigma^{0}(\mathrm{dB})=10\log_{10}\sigma^{0}$$

$$\hat{\gamma}=\frac{\left|\langle C_{1}C_{2}^{*}\rangle\right|}{\sqrt{\langle C_{1}C_{1}^{*}\rangle\langle C_{2}C_{2}^{*}\rangle}}$$

$$\mathrm{Brightness}=0.0356B_{1}+0.0822B_{2}+0.1360B_{3}+0.2611B_{4}+0.2964B_{5}+0.3338B_{6}+0.3877B_{7}+0.3895B_{8}+0.0949B_{9}+0.3882B_{11}+0.1366B_{12}+0.4750B_{8A}$$

$$\mathrm{Greenness}=-0.0635B_{1}-0.1128B_{2}-0.1680B_{3}-0.3480B_{4}-0.3303B_{5}+0.0852B_{6}+0.3302B_{7}+0.3165B_{8}+0.0467B_{9}-0.4578B_{11}-0.4064B_{12}+0.3625B_{8A}$$

$$\mathrm{Wetness}=0.0649B_{1}+0.1363B_{2}+0.2802B_{3}+0.3072B_{4}+0.5288B_{5}+0.1379B_{6}-0.0001B_{7}-0.0807B_{8}-0.0302B_{9}-0.4064B_{11}-0.5602B_{12}-0.1389B_{8A}$$
where ρNIR, ρB and ρR are the reflectance of the near infrared, blue and red bands of Sentinel-2 data, respectively; σ⁰ is the backscattering coefficient; DN is the intensity value; K is the absolute scaling coefficient; α is the local incidence angle of the radar image; σ⁰(dB) is the backscattering coefficient in decibels; γ̂ is the interferometric coherence; |·| denotes the absolute value; * denotes the complex conjugate; ⟨·⟩ denotes the ensemble statistical average; C1 and C2 are the master (reference) and slave (repeat) SLC images, respectively; and B1 to B12 and B8A correspond to the reflectance of the coastal aerosol, blue, green, red, vegetation red edge 1, vegetation red edge 2, vegetation red edge 3, near infrared, water vapour, shortwave infrared 1, shortwave infrared 2 and narrow near infrared bands of Sentinel-2 images, respectively.
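
For illustration, the three vegetation indices above can be computed per pixel from the reflectance arrays as follows (a sketch; the epsilon guard is our addition to avoid division by zero over water or shadow pixels):

```python
import numpy as np

def vegetation_indices(nir, red, blue):
    """Compute NDVI, RVI and EVI from Sentinel-2 reflectance arrays."""
    eps = 1e-10  # guard against division by zero
    ndvi = (nir - red) / (nir + red + eps)
    rvi = nir / (red + eps)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    return ndvi, rvi, evi
```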
The biophysical variables included the FVC and LAI. The tasseled cap transform components and the biophysical variables were calculated directly using the SNAP software.
ALOS-DEM data were resampled to 10 m then used to generate slope and aspect information by using ArcGIS software.
Texture features were first derived using the grey level co-occurrence matrix (GLCM) texture extraction strategy, with a window size of 3 × 3 and a direction of 45°, based on the backscattering coefficients in the VV and VH polarization directions; the multi-spectral image texture features were then computed from the Smallest Univalue Segment Assimilating Nucleus (SUSAN) edge detection results. The SUSAN edge detector is an efficient edge and corner detection method that stably detects the corner and boundary points of targets in an image with structure-preserving noise reduction, so it can effectively extract the texture features of forest gap edges [57]. All the features derived from the Sentinel-1, Sentinel-2 and DEM data were regarded as potential variables correlated with forest canopy height retrieval and forest gap identification.
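
A sketch of the GLCM-mean texture computation with scikit-image (the grey-level quantization below is our assumption, and the input is assumed free of NaN values):

```python
import numpy as np
from skimage.feature import graycomatrix  # skimage >= 0.19; older versions: greycomatrix

def glcm_mean_texture(band, levels=32):
    """GLCM-mean texture over a 3 x 3 moving window, 45 degree direction.

    band : 2D backscattering coefficient image (e.g., VV or VH, in dB).
    """
    # Rescale to integer grey levels required by the co-occurrence matrix.
    img = np.interp(band, (band.min(), band.max()),
                    (0, levels - 1)).astype(np.uint8)
    out = np.full(img.shape, np.nan)
    for i in range(img.shape[0] - 2):
        for j in range(img.shape[1] - 2):
            win = img[i:i + 3, j:j + 3]
            glcm = graycomatrix(win, distances=[1], angles=[np.pi / 4],
                                levels=levels, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            # GLCM mean: sum over i, j of i * p(i, j)
            out[i + 1, j + 1] = np.sum(np.arange(levels)[:, None] * p)
    return out
```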

2.3.4. Canopy Height Inversion

We first assumed the LiDAR-based CHM data to be the actual tree height data, which was randomly divided at a 7:3 ratio into training and validation sets. Then, we applied the feature importance ranking function of the RF algorithm to select the most important features from the above-mentioned potential feature set as the independent variables, and implemented the RF regression algorithm to build a canopy height inversion model. When developing the RF model, we adjusted some modelling parameters, including max_features and n_estimators, to achieve the best inversion results.
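
A minimal scikit-learn sketch of this inversion step, using the best parameter values reported in Section 3.2 (X and y below are placeholders for the feature matrix and the CHM heights, not actual data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# X: pixels x features matrix of the selected Sentinel-1/2 and DEM features;
# y: LiDAR-CHM heights treated as the "actual" canopy heights (assumed inputs).
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=42)

# Best parameters reported in Section 3.2: max_features=4, n_estimators=1500.
rf = RandomForestRegressor(n_estimators=1500, max_features=4,
                           n_jobs=-1, random_state=42)
rf.fit(X_train, y_train)

# Feature importance ranking used for feature selection.
ranking = np.argsort(rf.feature_importances_)[::-1]

pred = rf.predict(X_val)
print("R2 =", r2_score(y_val, pred))
print("RMSE =", mean_squared_error(y_val, pred) ** 0.5)
```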

2.3.5. Classification Sample Production

The height threshold of the forest gap needed to be determined before creating the classification samples. By counting the number of pixels at different heights, the height corresponding to the first local minimum of the pixel-count histogram above 2 m was specified as the height threshold of the forest gap, which was used to preliminarily separate the forest canopy from non-forest canopy areas. Thus, we divided forest canopy from non-forest canopy areas by applying the relative height thresholds to the LiDAR-based CHM and the Sentinel-retrieved HI data in the SC_CA site, respectively, and then used the area threshold to divide the non-forest canopy areas into forest gap and non-such areas. As a result, a total of three classes, including forest canopy, forest gap and non-such, were separated, and the corresponding training and validation samples were created from the CHM and HI data. Based on the CHM-derived samples, the effectiveness of extracting training samples from the HI data was evaluated by calculating a spatial agreement index between the two suites of samples.
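
The histogram-based threshold search could be sketched as follows (the 1 m bin width is an assumption; the paper does not state the binning):

```python
import numpy as np

def gap_height_threshold(heights: np.ndarray, bin_width: float = 1.0) -> float:
    """First local minimum of the height histogram above 2 m (a sketch)."""
    h = heights[np.isfinite(heights)]
    counts, edges = np.histogram(h, bins=np.arange(0, h.max() + bin_width,
                                                   bin_width))
    centers = (edges[:-1] + edges[1:]) / 2
    for k in range(1, len(counts) - 1):
        # local minimum: fewer pixels than both neighbouring bins
        if centers[k] > 2.0 and counts[k] < counts[k - 1] \
                and counts[k] <= counts[k + 1]:
            return float(centers[k])
    return 2.0  # fallback if no local minimum is found
```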

2.3.6. Forest Gap Extraction Modelling

Based on the samples created in Section 2.3.5, and after performing feature selection via the importance ranking of the RF algorithm, we implemented the Deep Forest (DF) and Convolutional Neural Network (CNN) algorithms to classify the three classes identified in Section 2.3.5.
The DF model is a decision-tree-based deep model with key features of a deep learning model, such as layer-by-layer processing, in-model feature transformation and sufficient model complexity. Figure 3 illustrates the DF framework of the current analysis (adapted from Hao et al. [58]), where inset (a) shows the feature re-representation using sliding window scanning, and inset (b) shows the cascade forest structure. A sliding window of dimension m was applied to divide the twelve-dimensional raw data into (12 − m + 1) m-dimensional feature vectors. The first cascade then took the g (2 × (12 − m + 1) × m)-dimensional feature probability vectors generated by the multi-granularity scanning module as input, processed them and generated the k-class enhanced feature probability vectors using two kinds of forests (random forests and completely-random tree forests), respectively. The k-class enhanced feature probability vectors were concatenated with the g-class original vectors and fed into the next cascade, and the class vector with the maximal probability was taken as the output once the vectors gained no further performance enhancement. Thus, this model can achieve excellent performance with fewer hyperparameters than a typical deep learning algorithm [59], and can be considered as a machine learning model with a deep learning structure. For the DF model, we adjusted the n_estimators parameter for different datasets and limited the number of canopy pixels involved in training.
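
A minimal sketch using the open-source DF21 implementation of Deep Forest, assuming it behaves comparably to the model used here (the paper does not name a package); in DF21, n_estimators sets the number of forests per cascade layer:

```python
from deepforest import CascadeForestClassifier  # pip install deep-forest

# X_train, y_train: selected features and class labels
# (0 = forest canopy, 1 = forest gap, 2 = non-such) -- assumed encoding.
df_model = CascadeForestClassifier(n_estimators=4, random_state=42)
df_model.fit(X_train, y_train)
y_pred = df_model.predict(X_val)
```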
The CNN is a network structure model including an input layer, convolutional layers, pooling layers, dense layers and an output layer [60]. Many classical CNN models, such as VGG, ResNet, GoogLeNet and AlexNet [50], have been widely used in image classification research [61,62,63]. The CNN has the advantages of local connection, weight sharing, pooling operations and a multi-layer structure [64], and can achieve good learning ability through multi-layer nonlinear transformations in a deep structure [65]. CNNs can be grouped into 1D-CNN, 2D-CNN and 3D-CNN based on the dimensionality of the kernel, of which the 2D-CNN is the most used in imagery-related studies [50]. Therefore, we used a 2D-CNN model to classify the forest gap, adjusting parameters including the filter size, strides, kernel size, padding, activation and dropout rate.
Figure 4 shows the structure and parameters of our 2D-CNN model used in the analysis.
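
An illustrative Keras sketch of a 2D-CNN classifier of this kind; the patch size, filter counts and dropout rate below are placeholders rather than the exact values shown in Figure 4:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Input: small image patches around each pixel, here assumed 5 x 5 with
# the 12 selected feature bands; 3 output classes (canopy, gap, non-such).
model = models.Sequential([
    layers.Input(shape=(5, 5, 12)),
    layers.Conv2D(32, kernel_size=3, strides=1, padding="same",
                  activation="relu"),
    layers.Conv2D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),  # reduces overfitting, aiding transferability
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```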
For the training samples used by both models, the numbers of canopy and non-such class samples were limited in view of the relatively small number of forest gap samples: the samples of the forest gap, non-such and forest canopy classes were randomly selected at a ratio of 1:3:7 in an attempt to provide unbiased training. Of these, 70% of the samples were used for model training and the remaining 30% for validation. Once the classification models passed the validations conducted in the SC_CA site, they were directly applied to the SW_OR and BC_WA sites to test their transferability.

2.4. Accuracy Verification

For the Sentinel-retrieved canopy height product, we mainly assessed its reliability by the R² and RMSE statistics derived from comparing it with the corresponding LiDAR-based CHM validation dataset. We also plotted the distribution maps of the non-forest canopy areas of the CHM and HI data by extracting the pixels with height values smaller than the relative height threshold, respectively, and then compared the spatial consistency of the non-forest canopy areas extracted from the two suites of data, setting the LiDAR-derived results as the reference.
For the Sentinel-based sample production, we mainly evaluated the spatial consistency between the three classes of CHM-identified samples and the three classes of HI-identified samples, with an emphasis on the forest gap class.
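
One plausible way to compute such a spatial agreement index is sketched below; the paper does not give the exact formula, so the per-class definition here (the fraction of CHM pixels of a class that the HI data assigns to the same class) is an assumption:

```python
import numpy as np

# chm_cls, hi_cls: 2D class maps from the CHM and HI data
# (0 = forest canopy, 1 = forest gap, 2 = non-such) -- assumed encoding.
overall_agreement = np.mean(chm_cls == hi_cls)

for c, name in enumerate(["canopy", "gap", "non-such"]):
    ref = chm_cls == c  # pixels of class c in the CHM reference
    print(name, np.mean(hi_cls[ref] == c))
```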
For the forest gap classifications, we randomly selected 60, 80 and 60 sample points for the forest gap, forest canopy and non-such classes, respectively, and visually interpreted the temporally corresponding Google Earth Maps as reference data to validate the classifications, deriving statistics including the OA, UA, PA and Kappa coefficient.
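
These statistics can be derived from the confusion matrix; in this convention the UA corresponds to per-class precision and the PA to per-class recall (a sketch with assumed label arrays):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# y_ref: visually interpreted reference labels of the 200 sample points;
# y_cls: model classifications at the same points (assumed inputs).
cm = confusion_matrix(y_ref, y_cls)     # rows: reference, cols: classified
oa = np.trace(cm) / cm.sum()            # overall accuracy
ua = np.diag(cm) / cm.sum(axis=0)       # user's accuracy per class
pa = np.diag(cm) / cm.sum(axis=1)       # producer's accuracy per class
kappa = cohen_kappa_score(y_ref, y_cls)
```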
Figure 5 shows the overall workflow of this study.

3. Results

3.1. Feature Selection

The result of the feature importance ranking for building the canopy height inversion model is shown in Figure 6. We comprehensively considered the importance ranking results, the actual inversion performance of all the feature variables, and the practicality of the model. Finally, a total of 10 feature variables, including the backscattering coefficients (VV and VH polarization directions) and their GLCM-mean textures, the interferometric coherence features, FVC, LAI, wetness and the SUSAN texture features, were identified to support the development of the canopy height inversion model.
Figure 7 shows the result of the feature importance ranking for building the forest gap extraction models. Texture features were excluded first because image texture information contributes less to the pixel-based classification methods we used than to the OBIA methods for high-resolution data, as proved by Jin’s study [66]. Based on the ranking results and the practical validation results, as well as the findings of others [18,27,67,68,69,70], we finally determined that the features used for forest gap classification included four multi-spectral bands (blue, green, red and NIR), two terrain feature variables (slope and aspect), three vegetation indices (NDVI, RVI and EVI) and three tasseled cap transform components (brightness, greenness and wetness).

3.2. Canopy Height Inversion

The best inversion result was obtained when setting max_features to 4 and n_estimators to 1500; the corresponding R² and RMSE of the training set for the SC_CA site were 0.78 and 6.01 m, and the validation R² was 0.63, with an RMSE of 7.85 m. Figure 8 and Table 5 illustrate the difference between the CHM data and the HI data. It can be seen from Figure 8 that the HI data were generally consistent with the CHM in space, with a similar spatial distribution pattern of low and high values (shown as black and white), while the differences among the medium-height areas were not as obvious as in the CHM data.
The model overestimated the height of areas with no or little vegetation coverage, while canopy areas higher than 36 m were underestimated. Pixels with heights less than 2 m accounted for 1.31% of the CHM data but only 0.14% of the HI data, and pixels with heights greater than 36 m accounted for 37.18% of the CHM but 29.66% of the HI. Table 5 also confirms these differences: there was almost no difference in the minimum and mean values, but the maximum value and the standard deviation of the HI data were 5.33 m and 4.70 m less than those of the CHM data, respectively.

3.3. Forest Gap Samples

Figure 9 shows the extraction results of the non-forest canopy areas from the CHM and HI data. The non-forest canopy areas simultaneously extracted from both the CHM and HI maps totaled 4998 pixels; another 3371 non-forest canopy pixels were extracted from the CHM map only, which the HI data identified as forest canopy pixels, and an additional 686 non-forest canopy pixels were extracted from the HI map only, which the CHM identified as forest canopy pixels. The spatial agreement between the CHM-identified and HI-identified non-forest canopy pixels was 59.72%. This indicated that the HI data could effectively extract non-forest canopy pixels and produce samples even though the height inversion values were relatively inaccurate (Figure 8 and Table 5).
The CHM- and HI-identified samples for the SC_CA site were produced based on the relative height thresholds and the area threshold, respectively (Figure 10); we then compared and analyzed the spatial distribution and pixel counts of each class. The pixel counts of all three classes produced from the HI data were basically consistent with the CHM-identified samples; the pixel quantities of the different classes are shown in Table 6. The statistics showed that the overall spatial agreement between the CHM-identified samples and the HI-identified samples reached 98.60%, and the spatial agreements of the canopy, gap and non-such classes were 99.76%, 54.89% and 61.63%, respectively.
As shown in Figure 10, the HI-identified samples had fewer pixels incorrectly labelled as forest gaps, but the total number of forest gap pixels was lower than in the CHM-identified samples. Although our HI data were relatively inaccurate compared to the CHM, they could still be used to divide forest gap areas effectively and produce samples to train the forest gap extraction models. Finally, 18,249 and 11,573 pixels of the three classes were randomly selected from the CHM-identified and HI-identified samples, respectively, to train and validate the classification models.

3.4. Accuracy of Forest Gap Extraction

We ran the four forest gap extraction models and verified the accuracy of their classification results. Figure 11 illustrates the final classifications of the four models, and Table 7 displays the validation accuracy statistics of the four models in the SC_CA site.
Comparing the classification results of the four models in the SC_CA site (Table 7), the overall classification accuracies were generally good, with OA greater than 0.80, Kappa coefficients greater than 0.69 and PA greater than 0.86, except that the UA values of the CNN-CHM and DF-HI models were 0.58 and 0.55, respectively, indicating more misclassified forest gap pixels in the results of these two models.
From Figure 11e,g we can see that the coverage of some non-forest canopy areas was slightly larger than what can be observed from the Google Earth Maps, and there were more misclassified forest gaps in Figure 11g. Figure 11f also shows that some edge pixels of the non-such areas were misclassified as forest gaps, which is consistent with the UA in Table 7, as is Figure 11g. The classification accuracy of the DF-CHM model was the highest, and the forest gaps and non-such areas were effectively extracted from the canopy area (Figure 11h).
It can be concluded from the above results that the DF-CHM model performed best when classifying the SC_CA site, followed by the CNN-HI model. Within this site, the different classification algorithms and samples had little influence on the overall classification accuracies.
Figure 12 and Figure 13 show the results of forest gap classification in the SW_OR and BC_WA sites after transferring the well-established classification models of the SC_CA site to the other two sites, and Table 8 summarizes the validation accuracy statistics of the classifications in the two sites, respectively.
As shown in Figure 12 and Figure 13, a large number of forest canopy pixels were misclassified into the forest gap and non-such classes in the results of the DF-based models, while the results of the CNN-based models were basically consistent with the actual ground objects; however, the CNN-CHM model misclassified more forest gap pixels than the CNN-HI model, and most of the misclassified forest gap pixels were located at the edges of non-such areas.
Comparing the forest gap classification accuracies of the CNN-HI and CNN-CHM models (Table 8), the OA and Kappa coefficient of the CNN-HI model were above 0.85 and 0.78, respectively, higher than those of the CNN-CHM model, indicating that the CNN-HI model performed better in overall ground object classification. The UA and PA of the forest gap class were both greater than 0.82 when the CNN-HI model was used, while the maximum UA and PA were only 0.58 and 0.66, respectively, when the CNN-CHM model was used, which also indicated that the HI samples improve the model classification accuracy more than the CHM samples.
The classification results of the CNN-based models above were highly consistent with the ground objects, which proved that the CNN-based models had excellent transferability and were able to classify forest gaps effectively. However, there was only one region where the classification accuracy of the DF-HI model was close to that of the CNN models (Figure 12a,f). For the other classification results of the DF-based models, the incorrect and missed pixels generally accounted for more than half, indicating that the results of the DF-based models had moderate or even very low consistency with the actual ground objects, and that their transferability was far inferior to that of the CNN models.

4. Discussion

4.1. The Reliability of the HI Data and Its Influence on Subsequent Classification

The difference between the LiDAR-derived CHM data and the Sentinel-based HI data was an important factor affecting the classification model building and the gap extraction accuracy. Compared to the LiDAR-based CHM data, with a canopy height range from 0 to 79.64 m, the range of the Sentinel-based canopy height inversion values was slightly narrower (0.22 to 71.29 m), but the mean height values were very close to each other (Figure 8, Table 5). Liu et al. [43] also used 10 m resolution data for canopy height inversion and found that underestimation occurred when the mean canopy height exceeded 27 m. In our work, underestimation occurred when the canopy height exceeded 36 m, which is basically consistent with Liu et al.’s observation. Meanwhile, about 90% of the pixels with heights less than 2 m were overestimated, which also decreased the accuracy of the inversion results, but our height inversion data could still be used to extract these non-forest canopy areas based on the relative height threshold. Luo et al.’s study [40] showed that using a median composite of Sentinel-2 images over a period of time may improve the spatial consistency between the Sentinel-derived HI data and the LiDAR-derived CHM data. However, this method was not suitable for our work, because we needed to use Sentinel images temporally adjacent to the LiDAR observation to implement the image classification later on, which requires that as few land cover change events as possible occur during this time frame.
Even though the HI data and CHM data were broadly similar and the samples extracted from the two datasets had a spatial agreement of 98.60%, the performance of the two suites of samples still differed in the actual training and validation results. In terms of sample quality, some fore-forest gap pixels were labeled as forest gap in the CHM samples, and because canopies and fore-forest gaps have very similar spectral features, more canopy pixels were misclassified as forest gaps. In contrast, the misclassification of canopy pixels into forest gaps rarely occurred when the HI samples were used; thus, the classification accuracy, especially the UA of the forest gap class, was improved. This is because the fore-forest gap pixels, whose spectral characteristics are similar to those of forest canopy pixels, were assigned higher values in the HI data, so these pixels were not captured by the height thresholds and thus not labeled as forest gap; the HI-identified forest gap samples were therefore relatively purer, which improved the classification accuracy. Similarly, we chose the Google Earth Maps to be visually interpreted as the main reference data, in which the canopy pixels and the fore-forest gap pixels were spectrally very similar and could not be directly distinguished either. The above difference was one of the reasons why the validation results showed that the forest gap classification accuracy of the HI-based model was higher than that of the CHM-based model.

4.2. Advantages and Potential of the Forest Gap Extraction Model

Our forest gap classification accuracy was similar to, or even slightly higher than, that of other studies that also used medium-resolution multi-spectral data for forest gap classification. For example, Barton et al. [32] used Sentinel-2 data to map treefall gaps, and their area evaluation produced less than 10% overestimation. Zhu et al. [33] used KeyHole and SPOT data to extract forest gaps with an unsupervised classification method; the OA ranged from 0.50 to 0.95 and the Kappa coefficients ranged from 0.30 to 0.72, where the classification results were considered valid when the Kappa coefficient was higher than 0.60. Additionally, current studies showed that the OA of forest gap classification using LiDAR-derived CHM data alone ranged from 0.77 to 0.92, and the Kappa coefficient ranged from 0.74 to 0.80 [11,17,24,25,71]. These comparisons suggest that, without regard to resolution limitations, our classification accuracy shows the potential of combining medium-resolution multi-spectral data with deep learning algorithms in forest gap classification.
Table 8 compares the accuracy validation results between the DF models and the CNN models, confirming that the transferability of the CNN models was better than that of the DF models. We speculate that this is because, although the construction idea of the Deep Forest resembles that of a multi-level deep learning model, it is still a tree-based machine learning model in essence, so its performance in transfer learning cannot match that of a deep learning model, even though it has been proven to perform well in few-shot learning [59]; other studies have likewise only demonstrated the effectiveness of machine learning algorithms within a single region [18,27,32,35,36]. However, considering the results shown in Figure 11 and Table 7, the performance of the DF models in a single region was comparable to that of the CNN models while their transferability was very poor, so we assume that there may be overfitting in the DF models [72], undermining the accuracy of transfer learning. There was a Dropout layer in the CNN models, which reduces the risk of overfitting by letting a neuron stop working with a certain probability p during forward propagation in each training batch, making the model more generalized [73]. In contrast, the class vectors produced by each forest in the DF model were generated by k-fold cross-validation as a way to reduce the risk of overfitting [59]. The principles of the two models are different, and the classification results showed that the Dropout layer performed better in preventing overfitting. A recent study showed that a modified DF model can achieve transfer learning [74], which needs to be further tested and verified in future work.
In an attempt to improve the classification accuracy, further adjustments can be made to the model parameters and the feature variables involved in the classifications. In addition, training a CNN model requires a sufficient number of samples to ensure model accuracy, which is a general problem of deep learning algorithms. Compared with the other classes, there were fewer forest gap samples in the current work, which might lead to inadequate deep learning model training and potentially lower classification accuracy.
Some pixels located at the edges of large canopy opening areas (identified as the non-such class) were misclassified as the forest gap class (Figure 11f and Figure 12g), which further confirmed the limitations of the pixel-based classification method. This method only considers the feature values of a single pixel, without taking into account the spatial relationships among adjacent pixels. By setting restrictions such as the distance between adjacent pixels and an area threshold for same-class pixels, this method may be able to further optimize the classification results.
As shown in Figure 12 and Figure 13, the boundaries of some non-forest canopy areas were slightly displaced compared with the boundaries shown on the Google Earth Maps. This is basically due to two factors: the slight difference in ground object boundaries between the 10 m resolution Sentinel-2 images and the meter-level Google Earth Maps, which also affects geo-registration, and the difference in image acquisition times, which results in different shadow ranges of the forest canopy and makes it difficult to identify the boundaries of non-forest canopy areas during visual interpretation. We believe that if only a single data source is used in subsequent research, such geometric errors can be minimized.

4.3. Inadequacies and Prospects

Because the feature variables used for modelling were extracted from Sentinel-2 data, the currently established forest gap classification model is only applicable to Sentinel-2 data; adjustments in feature selection can be made in the future so that other similar medium-resolution data, such as SPOT data, can also be applied to the classification model.
In terms of modelling algorithms, we have only tested the CNN algorithm, and other deep learning algorithms need to be tested in the future to potentially improve the classification accuracy. For example, Jagannathan et al. [75] used a combination of multiple deep learning algorithms to predict and classify land use and land cover changes, and the final classification accuracy reached 98.5%. This effort points out a direction for improvement in forest gap classification. In addition, the insufficient number of samples for the forest gap class is still a problem that needs more attention. We hope to develop a classification model that requires relatively fewer samples, or to properly extrapolate the canopy height inversion model to obtain more available sample data [76]. Zhu et al. [77] developed a novel deep learning framework for heterogeneous remote sensing image change detection and achieved satisfactory classification results even in the case of few-shot learning, which also gives us an insightful observation to improve our forest gap classification model.

4.4. Application of Large-Sized Forest Gap Maps

The distribution map of forest gaps ranging from 100 to 1000 m² in size can be widely used in sustainable forest management and in analyses related to forest succession stage, restoration and renewal: (1) obtaining long-term forest gap dynamics information in a relatively low-cost way by applying our deep learning classification model to multi-temporal Sentinel images, providing data support for sustainable forest management [32,78]; (2) combining with terrestrial inventory information to infer the driving factors behind the formation of forest gaps, namely attribution analysis [67]; (3) evaluating the restoration status of forest structure driven by gaps and predicting the probable succession time [33]; (4) analyzing the regeneration patterns of tree species under canopy gaps and large openings, and the factors affecting regeneration [79,80].

5. Conclusions

In this study, we developed an integrated framework that combines multi-source remote sensing data and deep learning algorithms to extract forest gaps, which is of great significance for the study of forest regeneration and succession trends. First, airborne LiDAR and Sentinel series data were used to map canopy heights (HI) by random forest modelling, to support an effective and reliable identification of classification samples in wide regions where airborne LiDAR data is unavailable. Next, the CNN-HI, CNN-CHM, DF-HI and DF-CHM models were compared and evaluated in terms of their forest gap classification accuracy and transferability. The CNN-HI model proved to have the best transferability, and it was also able to classify forest gaps effectively even though there were significantly fewer forest gap samples in the HI sample set than in the CHM sample set; its classification accuracy was even higher than that of the CNN-CHM model. This study demonstrates the potential of combining medium-resolution multi-spectral data with CNN algorithms for large-scale forest gap extraction, which can provide a reference for mapping long time-series disturbances and updating large-scale forest gap maps.

Author Contributions

Conceptualization, M.L. (Mingshi Li); methodology, M.L. (Mingshi Li) and M.L. (Muxuan Li); software, M.L. (Muxuan Li); validation, M.L. (Muxuan Li); formal analysis, M.L. (Muxuan Li); investigation, M.L. (Muxuan Li); resources, M.L. (Muxuan Li); data curation, M.L. (Muxuan Li); writing—original draft preparation, M.L. (Muxuan Li); writing—review and editing, M.L. (Mingshi Li) and M.L. (Muxuan Li); visualization, M.L. (Muxuan Li); supervision, M.L. (Mingshi Li); project administration, M.L. (Mingshi Li); funding acquisition, M.L. (Mingshi Li). All authors have read and agreed to the published version of the manuscript.

Funding

This research was jointly funded by the Forestry Science and Technology Innovation and Promotion Project Sponsored by Jiangsu Province (LYKJ(2022)02), the National Natural Science Foundation of China (grant No. 31971577) and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xia, B.; Deng, F.; He, S. Research Progress in Forest Gaps. J. Plant Resour. Environ. 1997, 6, 51–58. [Google Scholar]
  2. Hui, T.; Jiaojun, Z.; Hongzhang, K.; Liyue, H. A Research Review on Forest Gap Disturbance. Chin. J. Ecol. 2006, 26, 587–594. [Google Scholar]
  3. Getzin, S.; Nuske, R.; Wiegand, K. Using Unmanned Aerial Vehicles (UAV) to Quantify Spatial Gap Patterns in Forests. Remote Sens. 2014, 6, 6988–7004. [Google Scholar] [CrossRef]
  4. Orman, O.; Dobrowolska, D. Gap dynamics in the Western Carpathian mixed beech old-growth forests affected by spruce bark beetle outbreak. Eur. J. For. Res. 2017, 136, 571–581. [Google Scholar] [CrossRef]
  5. Muscolo, A.; Bagnato, S.; Sidari, M.; Mercurio, R. A review of the roles of forest canopy gaps. J. For. Res. 2014, 25, 725–736. [Google Scholar]
  6. Liu, B.; Zhao, P.; Zhou, M.; Wang, Y.; Yang, L.; Shu, Y. Effects of Forest Gaps on the Regeneration Pattern of the Undergrowth of Secondary Popular-Birch Forests in Southern Greater Xing’an Mountains. For. Resour. Manag. 2019, 31–36. [Google Scholar] [CrossRef]
  7. Runkle, J.R. Patterns of Disturbance in Some Old-Growth Mesic Forests of Eastern North America. Ecology 1982, 5, 1533–1546. [Google Scholar] [CrossRef]
  8. Zhu, J.J.; Zhang, G.Q.; Wang, G.G.; Yan, Q.L.; Lu, D.L.; Li, X.F.; Zheng, X. On the size of forest gaps: Can their lower and upper limits be objectively defined? Agr. For. Meteorol. 2015, 213, 64–76. [Google Scholar] [CrossRef]
  9. Watt, A.S. Pattern and process in the plant community. J. Ecol. 1947, 3, 1–22. [Google Scholar] [CrossRef]
  10. Runkle, J.R. Gap regeneration in some old-growth forests of the eastern United States. Ecology 1981, 4, 1041–1051. [Google Scholar] [CrossRef]
  11. Dietmaier, A.; McDermid, G.J.; Rahman, M.M.; Linke, J.; Ludwig, R. Comparison of LiDAR and Digital Aerial Photogrammetry for Characterizing Canopy Openings in the Boreal Forest of Northern Alberta. Remote Sens. 2019, 11, 1919. [Google Scholar] [CrossRef]
  12. Seidel, D.; Ammer, C.; Puettmann, K. Describing forest canopy gaps efficiently, accurately, and objectively: New prospects through the use of terrestrial laser scanning. Agric. For. Meteorol. 2015, 213, 23–32. [Google Scholar]
  13. Hu, L.; Zhu, J. Determination of the tridimensional shape of canopy gaps using two hemispherical photographs. Agric. For. Meteorol. 2009, 149, 862–872. [Google Scholar] [CrossRef]
  14. Liang, X.; Ye, W. Advances in Forest Gap Research (Review). J. Trop. Subtrop. Bot. 2001, 9, 355–364. [Google Scholar]
  15. Li, Y.; Zhang, G. Forest Gap Definition and Forest Gap Characteristics Measurement Methods. World For. Res. 2021, 34, 58–63. [Google Scholar]
  16. White, J.C.; Tompalski, P.; Coops, N.C.; Wulder, M.A. Comparison of airborne laser scanning and digital stereo imagery for characterizing forest canopy gaps in coastal temperate rainforests. Remote Sens. Environ. 2018, 208, 1–14. [Google Scholar]
  17. Du, Z.; Zheng, G.; Shen, G.; Moskal, L.M. Characterizing spatiotemporal variations of forest canopy gaps using aerial laser scanning data. Int. J. Appl. Earth Obs. 2021, 104, 102588. [Google Scholar] [CrossRef]
  18. Garbarino, M.; Mondino, E.B.; Lingua, E.; Nagel, T.A.; Dukic, V.; Govedar, Z.; Motta, R. Gap disturbances and regeneration patterns in a Bosnian old-growth forest: A multispectral remote sensing and ground-based approach. Ann. For. Sci. 2012, 69, 617–625. [Google Scholar] [CrossRef]
Figure 1. Location of the study area and the corresponding examples of the active and passive remote sensing datasets involved in the analysis. (Insets (a–c) show the locations of the BC_WA, SW_OR and SC_CA sites, each labeled with a red pentagram; insets (d–f) are the corresponding true-color composites of the Sentinel-2 multi-spectral data for the BC_WA, SW_OR and SC_CA sites; inset (g) is the LiDAR-derived CHM data for the SC_CA site).
Figure 2. The pre-processing workflow of the Sentinel-1 SLC and GRD data.
Figure 3. The framework of the DF model implemented in the analysis. (Inset (a) shows the structure of the multi-granularity scanning module; inset (b) shows the structure of the cascade forest).
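To make the cascade-forest stage of Figure 3 concrete, the following minimal sketch uses the open-source DF21 implementation (PyPI package deep-forest); the file names, array contents and hyperparameter values are illustrative assumptions, not the configuration used in this study.

```python
# Minimal sketch of a cascade-forest classifier (DF21, pip install deep-forest).
# Feature/label files below are hypothetical placeholders.
import numpy as np
from deepforest import CascadeForestClassifier

# X: per-pixel feature vectors (e.g., spectral bands and indices)
# y: class labels (0 = forest canopy, 1 = forest gap, 2 = non-such)
X = np.load("features_sc_ca.npy")   # hypothetical file
y = np.load("labels_sc_ca.npy")     # hypothetical file

model = CascadeForestClassifier(
    n_estimators=4,    # forests per cascade layer (assumed value)
    n_trees=100,       # trees per forest (assumed value)
    max_layers=10,     # cascade stops growing when accuracy stops improving
    random_state=42,
)
model.fit(X, y)
pred = model.predict(X)
```

Note that DF21 exposes the cascade structure directly; the multi-granularity scanning stage of inset (a) would be an additional windowed feature-extraction step applied to X beforehand.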
Figure 4. The framework of the 2D-CNN model implemented in the analysis.
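For illustration of the kind of 2D-CNN patch classifier shown in Figure 4, the following Keras sketch builds a small three-class network; the patch size, band count, filter sizes and layer depth are assumptions and may differ from the architecture actually trained.

```python
# Minimal sketch of a 2D-CNN patch classifier (TensorFlow/Keras).
import tensorflow as tf

NUM_BANDS = 10   # assumed number of input feature layers
NUM_CLASSES = 3  # forest canopy, forest gap, non-such

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(9, 9, NUM_BANDS)),  # 9x9 pixel patch (assumed)
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # dropout to reduce overfitting
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```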
Figure 5. The workflow chart of forest gap extraction in this analysis. (The red and green dotted lines delineate the variables used to build the canopy height inversion model and the forest gap classification model, respectively).
Figure 6. Feature importance ranking for building the canopy height inversion model. (Left: ranking by mean squared error; right: ranking by node purity).
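As a sketch of how the two rankings in Figure 6 can be produced, the following assumes they correspond to permutation importance (mean squared error) and impurity-based importance (node purity) of a random forest regressor in scikit-learn; the input file names are illustrative.

```python
# Two feature-importance rankings from a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X = np.load("height_features.npy")   # hypothetical predictor stack
y = np.load("lidar_heights.npy")     # hypothetical LiDAR-derived heights

rf = RandomForestRegressor(n_estimators=500, random_state=42)
rf.fit(X, y)

# "Node purity" analogue: impurity-based importance
purity_rank = np.argsort(rf.feature_importances_)[::-1]

# "Mean squared error" analogue: permutation importance
perm = permutation_importance(rf, X, y, scoring="neg_mean_squared_error",
                              n_repeats=10, random_state=42)
mse_rank = np.argsort(perm.importances_mean)[::-1]
```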
Figure 7. Feature importance ranking for building the forest gap extraction models. (Left: ranking by mean squared error; right: ranking by node purity).
Figure 8. Comparison of the CHM data and the HI data. (Insets (a,b) are the CHM data and the HI data of the SC_CA site, respectively).
Figure 9. Non-forest canopy area extracted from both the CHM and HI data in the SC_CA site.
Figure 10. Partial sample examples from the SC_CA site. (Insets (a,d) are the true-color images of the Sentinel-2 data; insets (b,e) are the samples produced from the CHM data in the corresponding regions; insets (c,f) are the samples produced from the HI data in the corresponding regions).
Figure 11. Forest gap and non-such classification results in the SC_CA site. (Insets (a–d) are the classification results of the CNN-HI, CNN-CHM, DF-HI and DF-CHM models, respectively; insets (e–h) show the magnified details of the classification results for each model; inset (i) is the corresponding true-color Google Earth Map; the green boxes in insets (a–d) correspond to the boundaries of insets (e–h), respectively).
Figure 12. The forest gap and non-such classification results in the BC_WA site. (Insets (a–d) are the classification results of the CNN-HI, CNN-CHM, DF-HI and DF-CHM models, respectively; insets (f–i) show the details of the classification results for each model; inset (e) is the corresponding true-color Google Earth Map; the green boxes in insets (a–d) correspond to the boundaries of insets (f–i), respectively).
Figure 13. The forest gap and non-such classification results in the SW_OR site. (Insets (a–d) are the classification results of the CNN-HI, CNN-CHM, DF-HI and DF-CHM models, respectively; insets (e–h) show the details of the classification results for each model; inset (i) is the corresponding true-color Google Earth Map; the green boxes in insets (a–d) correspond to the boundaries of insets (e–h), respectively).
Table 1. Description of the Sentinel-1 data used in the current work.

Acquisition Date | Mission Identifier | Product Type | Polarization | Orbit Number | Product Unique ID
4 October 2017 | S1A | SLC | VV + VH | 018664 | 0503
16 October 2017 | S1A | SLC | VV + VH | 018839 | 21A5
10 October 2017 | S1B | GRD | VV + VH | 007761 | 7170
Table 2. Description of the involved Sentinel-2 data in the current work.

Acquisition Date | Mission Identifier | Product Level | Baseline & Orbit Number | Training/Validation Set & Corresponding Site
30 August 2020 | S2A | L2A | N0214_R013 | Validation Set (BC_WA)
4 September 2021 | S2A | L2A | N0301_R056 | Validation Set (SW_OR)
12 October 2017 | S2B | L2A | N0205_R113 | Training Set (SC_CA)
Table 3. Description of the Google Earth Maps used in the current work.

Acquisition Date | Spatial Resolution | Corresponding Site
1 September 2020 | 1.5 m | BC_WA
6 September 2021 | 1.0 m | SW_OR
30 September 2017 | 1.5 m | SC_CA
Table 4. Description of the ALOS DEM data used in the current work.

Acquisition Date | Absolute Orbit Number | Corresponding Site
27 April 2010 | 22,671 | BC_WA
28 November 2008 | 15,166 | SW_OR
28 August 2007 | 8478 | SC_CA
Table 5. Comparison of the HI and CHM data statistics in the SC_CA site.

Statistic | CHM | HI
Min. (m) | 0.00 | 0.23
Max. (m) | 76.62 | 71.29
Mean (m) | 32.66 | 32.69
Standard deviation (m) | 12.95 | 8.25
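A minimal sketch of how the Table 5 statistics can be reproduced from the two height rasters is given below; the file names are illustrative assumptions.

```python
# Summary statistics for the CHM and HI height rasters.
import numpy as np
import rasterio

with rasterio.open("chm_sc_ca.tif") as src:   # hypothetical LiDAR-derived CHM
    chm = src.read(1, masked=True)
with rasterio.open("hi_sc_ca.tif") as src:    # hypothetical Sentinel-based HI
    hi = src.read(1, masked=True)

for name, arr in [("CHM", chm), ("HI", hi)]:
    print(f"{name}: min={arr.min():.2f} m, max={arr.max():.2f} m, "
          f"mean={arr.mean():.2f} m, std={arr.std():.2f} m")
```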
Table 6. Comparison of the HI- and CHM-derived samples in the SC_CA site.

Class | CHM-Identified Pixels | HI-Identified Pixels | Spatially Collocated Pixels
Forest canopy | 281,056 | 283,741 | 280,370
Forest gap | 2370 | 1503 | 1301
Non-such | 5999 | 4181 | 3697
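The following sketch illustrates the per-class spatial agreement behind Table 6, under the assumption that agreement is expressed as collocated pixels relative to the CHM-identified pixel count of each class; the label files and class codes are hypothetical.

```python
# Per-class and overall spatial agreement between two sample label maps.
import numpy as np

chm_labels = np.load("chm_samples.npy")  # hypothetical CHM-derived class map
hi_labels = np.load("hi_samples.npy")    # hypothetical HI-derived class map
classes = {0: "Forest canopy", 1: "Forest gap", 2: "Non-such"}

for code, name in classes.items():
    in_chm = chm_labels == code
    collocated = np.logical_and(in_chm, hi_labels == code)
    print(f"{name}: {collocated.sum()} of {in_chm.sum()} CHM pixels "
          f"collocated ({100 * collocated.sum() / in_chm.sum():.2f}%)")

# Overall agreement, assuming both maps contain only the three sample classes
overall = (chm_labels == hi_labels).mean()
print(f"Overall spatial agreement: {100 * overall:.2f}%")
```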
Table 7. The validation statistics of forest gap classifications from the different models in the SC_CA site.

Metric | CNN-HI | CNN-CHM | DF-HI | DF-CHM
OA | 0.85 | 0.81 | 0.80 | 0.91
Kappa coefficient | 0.78 | 0.71 | 0.69 | 0.86
UA | 0.71 | 0.58 | 0.55 | 0.84
PA | 0.89 | 0.86 | 0.94 | 0.94
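For clarity, the validation metrics of Tables 7 and 8 can be derived from a confusion matrix as sketched below, treating forest gap as the class of interest for UA (user's accuracy) and PA (producer's accuracy); the label arrays and class code are illustrative assumptions.

```python
# Overall accuracy, Kappa, user's and producer's accuracy from a confusion matrix.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y_true = np.load("validation_labels.npy")   # hypothetical reference labels
y_pred = np.load("model_predictions.npy")   # hypothetical model predictions

cm = confusion_matrix(y_true, y_pred)       # rows: reference, columns: predicted
oa = np.trace(cm) / cm.sum()                # overall accuracy
kappa = cohen_kappa_score(y_true, y_pred)   # chance-corrected agreement

gap = 1                                     # assumed class code for forest gap
ua = cm[gap, gap] / cm[:, gap].sum()        # user's accuracy (of predicted gaps)
pa = cm[gap, gap] / cm[gap, :].sum()        # producer's accuracy (of reference gaps)
print(f"OA={oa:.2f}, Kappa={kappa:.2f}, UA={ua:.2f}, PA={pa:.2f}")
```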
Table 8. The validation statistics of forest gap classification derived from all four models in the two transfer learning test sites.

Metric | DF-CHM (SW_OR) | DF-CHM (BC_WA) | DF-HI (SW_OR) | DF-HI (BC_WA) | CNN-CHM (SW_OR) | CNN-CHM (BC_WA) | CNN-HI (SW_OR) | CNN-HI (BC_WA)
OA | 0.55 | 0.62 | 0.42 | 0.79 | 0.73 | 0.69 | 0.85 | 0.87
Kappa coefficient | 0.14 | 0.41 | 0.16 | 0.61 | 0.59 | 0.54 | 0.78 | 0.81
UA | 0.11 | 0.24 | 0.06 | 0.56 | 0.56 | 0.58 | 0.82 | 0.84
PA | 0.32 | 0.48 | 0.26 | 0.76 | 0.65 | 0.66 | 0.74 | 0.78