Article

Feature Extraction and Classification of Canopy Gaps Using GLCM- and MLBP-Based Rotation-Invariant Feature Descriptors Derived from WorldView-3 Imagery

1 School of Geography, Archaeology and Environmental Studies, University of the Witwatersrand, Johannesburg 2050, South Africa
2 Sibanye-Stillwater Digital Mining Laboratory, Wits Mining Institute, University of the Witwatersrand, Johannesburg 2050, South Africa
* Author to whom correspondence should be addressed.
Geomatics 2023, 3(1), 250-265; https://doi.org/10.3390/geomatics3010014
Submission received: 11 December 2022 / Revised: 14 March 2023 / Accepted: 15 March 2023 / Published: 16 March 2023

Abstract
Accurate mapping of selective logging (SL) serves as the foundation for additional research on forest restoration and regeneration, species diversification and distribution, and ecosystem dynamics, among other applications. This study aimed to model canopy gaps created by illegal logging of Ocotea usambarensis in Mt. Kenya Forest Reserve (MKFR). A texture-spectral analysis approach was applied to exploit the potential of WorldView-3 (WV-3) multispectral imagery. First, texture properties were explored in the sub-band images using fused grey-level co-occurrence matrix (GLCM)- and local binary pattern (LBP)-based texture feature extraction. Second, the texture features were fused with colour using the multivariate local binary pattern (MLBP) model. The G-statistic and Euclidean distance similarity measures were applied to increase accuracy. The random forest (RF) and support vector machine (SVM) were used to identify and classify distinctive features in the texture and spectral domains of the WV-3 dataset. The variable importance measurement in RF ranked the relative influence of sets of variables in the classification models. Overall accuracy (OA) scores for the respective MLBP models were in the range of 80–95.1%. The respective user’s accuracy (UA) and producer’s accuracy (PA) for the univariate LBP and MLBP models were in the range of 67–75% and 77–100%, respectively.

1. Introduction

Tropical forests cover c. 7% (c. 2 billion ha) of the earth’s terrestrial environment and house about half of all biodiversity; they serve various economic, social, and environmental functions [1]. However, most conservation initiatives have not been successful because, in many regions, tropical forests are being cleared for timber and the expansion of agricultural land [2]. Unsustainable selective logging (SL) is probably the single biggest factor contributing to the global degradation of tropical forests [3]. SL reduces forest density when the sparsely distributed, most valuable trees are cut, creating canopy gaps without necessarily displaying any logging infrastructure [4,5]. Previous studies on the estimation of deforestation rates in tropical forests generally ignored the effects of SL [6]. Recently, however, researchers have emphasised the contribution of illegal logging and SL to deforestation rates [7]. Kenya’s tropical forest cover is mainly composed of montane forests. For decades, the Mt. Kenya Forest Reserve (MKFR) has been subjected to illegal logging for its commercially valuable reserves of indigenous timber, especially the endangered Ocotea usambarensis—a hardwood tree sought for its excellent decay and insect resistance [8,9]. O. usambarensis has large bole diameters of between 3.75 and 9.5 m; seeds are produced every 10 years, germination is intermittent, and the tree takes a long time to reach maturity, i.e., c. 60 to 70 years [10].
Because SL affects biodiversity, the ecosystem services forests provide, microclimate, and carbon pools, tracking this activity in tropical forests is crucial [11]. Canopy gaps are usually small (<1000 m²) [12]. In the past, gap size was determined by ground-based techniques [13]. Field surveys can be challenging, especially in rough terrain, and they are prone to error. Furthermore, the results of ground surveys can be subjective, e.g., Nakashizuka et al. [14]. Additionally, in field surveys it can be challenging to distinguish fine canopy gaps and gaps where the understory vegetation is dense. The most appropriate solution was to observe forests from above rather than below, through remote sensing (RS) [15]. Many methods applied to map SL in tropical forests used low/medium spatial resolution datasets that have a high rate of false detections [16]. The effects of SL remain detectable in medium spatial resolution images for only 1 to 3 years [17]. Therefore, the amount of forest degradation that goes undetected using low/medium spatial resolution datasets is unknown [16]. Recently, very high-resolution (VHR) RS datasets, i.e., <1 m per pixel, have caught the interest of researchers studying SL in tropical forests [11,18,19,20,21,22]. Satellite and airborne data with VHR are appropriate for precisely delineating forest canopy gaps, as well as individual tree crowns [11]. Accurate quantification of canopy gaps from disappearing tree crowns contributes crucially to calculating forest carbon densities and to modelling the effects of forest degradation on tropical biodiversity. Currently, none of the algorithms used to compute forest carbon densities account for canopy gaps. The accuracy of carbon estimates can be improved, provided canopy gaps are accurately identified.
The spectral information in multispectral RS data is limited; the textural features of an RS dataset can reveal the spatial correlation among pixels to detect change in the structure of vegetation [23,24]. Therefore, unlike pixel-based techniques, texture-based classification techniques consider how a pixel relates to its neighbourhood [25,26]. The application of texture analysis in RS studies has reported notable successes [27]. Texture in images is the change in the frequency of the tone of pixels [27]. In RS, different approaches have been applied to extracting textural features from images [26,28]. As originally proposed by Haralick et al. [29], texture measures, e.g., statistical metrics, are extractable from the grey-level co-occurrence matrix (GLCM). Model-based approaches classify textured images according to probability distributions in random fields, e.g., Cross and Jain [30], and local linear transformations, e.g., Unser [31]. He and Wang [32] relied on the texture spectrum. These approaches have limited application because of their computational and time complexities [25,33]. Their spatial analysis is mostly applied to small neighbourhoods, on a single scale [33]. This difficulty has been addressed through the development of multichannel-based image analysis [26,28,33]. A textured image is normally reduced into characteristic feature images by the application of, e.g., wavelet, Gabor, or neural network-based filters [26,28]. Thus, with just a few feature statistics, a high-dimensional textural pattern can be modelled [26,28]. Among the texture models, variants of the local binary pattern (LBP), such as the multivariate local binary pattern (MLBP) [27] and the multivariate advanced local binary pattern (MALBP) [26], are computationally convenient for RS images [27]. The LBP model was developed by Ojala et al. [34] for grey-level images. Very high-resolution RS data and GLCM analysis have been successfully used in mapping tropical forests [35,36,37,38].
The visual interpretation of VHR multi-date RS data is a good way to detect and quantify gaps in forests with fairly low uncertainty [11]. Nonetheless, spatially precise data for validation are lacking, as are automated approaches based on VHR-RS datasets to detect canopy gaps with high precision over extensive areas [11]. Although the GLCM and LBP models have been applied separately elsewhere, they have not been used to study canopy gaps, especially in montane tropical forests, using VHR-RS data. Therefore, this study aimed to use the fused GLCM-/MLBP-based approach to test whether canopy gaps from illegal logging of O. usambarensis in a highly heterogeneous montane tropical forest can be accurately mapped using a WorldView-3 (WV-3) dataset. The performance of the basic LBP model and its variant, the MLBP model, was compared. The study also aimed to provide a framework for carrying out similar studies over larger spatial extents in the future. High-resolution (HR) WorldView-2 (WV-2) imagery and Google Earth were used to provide historical data. To achieve high classification accuracies, the ability to combine/discriminate between samples is crucial [34]; therefore, two similarity measures were used, i.e., the G-statistic and Euclidean distance. Due to their excellent performance and clear logic in handling RS data, the random forest (RF) and support vector machine (SVM) classifiers were used to classify canopy gaps in the study area.

2. Materials and Methods

2.1. Study Area

The Mt. Kenya Forest Reserve (MKFR) was established in 1932 under the management of the Department of Forest—now known as the Kenya Forest Service—with the primary goal of preserving and developing the forest reserve. This included creating plantations to replace harvested indigenous stands, regulating resource access, and preserving the forest industry [8]. The forest reserve, located in Central Kenya, covers c. 213,083 ha and spans a range of elevation, slope, and aspect positions [39]. The snow-capped mountain lies at latitude 0°10′ S and longitude 37°20′ E [40]. In 1997, the mountain received the UNESCO World Heritage Site designation [9]. The study covers approximately 264 ha of Chuka Forest in Tharaka Nithi County. Chuka Forest is part of the Mt. Kenya ecosystem and encompasses approximately 21,740 ha (Figure 1).
Altitude and differences in the amount of rainfall received have resulted in a pronounced vegetational gradient on Mt. Kenya. Mt. Kenya’s lower slopes are characterised by montane forest, including the species Newtonia buchananii, Podocarpus latifolia, Croton megalocarpus, Nuxia congesta, Olea europaea spp. africana, Juniperus procera, Calodendrum capense, and Ocotea usambarensis. O. usambarensis also occurs in the sub-montane forests on the extremely humid eastern, southern, and south-eastern slopes at 1500 to 2500 m [41].

2.2. Acquisition and Pre-Processing of Satellite Data

This study used a WorldView-3 (WV-3) multispectral dataset acquired on 15 September 2019 to detect canopy gaps, while WorldView-2 (WV-2) data acquired on 30 January 2014 and historical imagery in Google Earth provided a historical reference for logging (Figure 2). The satellite data of the MKFR were provided by Swift Geospatial, Pretoria, South Africa. The panchromatic band of the WV-2 was captured at a spatial resolution of 0.46 m, while the WV-3 captures at 0.3 m [42,43]. The eight visible-near-infrared (VNIR) multispectral bands of the WV-2 were acquired at 1.84 m [42]; the WV-3 captures them at 1.2 m [43]. The WV-3 also acquires eight shortwave-infrared (SWIR) bands with a pixel size of 3.7 m and eight CAVIS (clouds, aerosol, vapour, ice, and snow) bands at 30 m [43].
The FLAASH module in ENVI 5.3 was used to atmospherically calibrate the images by converting digital numbers (DN) to top-of-atmosphere reflectance. The WV-3 data were co-registered with the WV-2 image to match features between the two datasets; this produced an average root mean square error (RMSE) of 3.41 m. Before calculating textural features, the VNIR bands were down-scaled to 0.3 m pixels, with each 1.2 m pixel sub-divided into 16 pixels [38]. This approach allowed texture information to be extracted without introducing the uncertainties of pansharpened VNIR bands [44].

2.3. Acquisition of Field Data

A Global Positioning System (eTrex® 20 GPS receiver; Garmin, Olathe, KS, USA) and a false-colour composite (853-RGB) of the WV-3 images were used to locate canopy gaps in the field in February 2020. The three bands were among the best-performing bands in Jackson and Adam [45] and were therefore used in this study’s analysis. Additionally, the 853-RGB is a well-known band combination for analysing vegetation [43]. In the WV-3 image, gaps were fully illuminated, partially illuminated, or not illuminated. In the study area, human-made canopy gaps had the same spectral response as natural canopy gaps. The canopy gaps were vegetated, i.e., the gaps had low vegetation in them, representing the initial stage of vegetation recovery from disturbance. GPS coordinates of 100 vegetated gaps and 100 shaded gaps per image block were collected and overlaid on the WV-3 image using a geographic information system (GIS—ArcGIS® v. 10.3; ESRI, Redlands, CA, USA). The pixels of the vegetated gaps, as well as those of the shaded gaps, were extracted from the WV-3 imagery. The spatial resolution of the WV-3 data enabled the derivation of forest canopies as references; thus, 100 samples of tree crowns were extracted per block. The ground reference data were randomly split into 70% train and 30% test data.
Appropriate image block sizes were selected to calculate texture features. Regions in large blocks show a mixture of textures, while small blocks may reduce the probability of computing a texture measure [27]. In this study, six non-overlapping subset images (each 1400 × 1400 pixels) were generated from the WV-3 imagery covering the study area. The three classes—vegetated and shaded gaps, and forest canopy—were easily differentiated in the WV-3 imagery (Figure 3).
Pixels were sampled randomly, covering areas close to the class edges and centres; pixels around class edges were vital in aiding the classifier’s edge detection of textural features [38]. To attain high classification accuracy, the pixel size of the reference data corresponding to the texture classes was kept the same, i.e., 20 × 20 pixels (Table 1). The same number of reference points was collected for the vegetated gaps, shaded gaps, and forest canopy because data imbalance reduces the accuracy and performance of the classifier [46].
Dimensions of canopy gaps created by the logging of Ocotea trees were measured in the field, including the dripline measurements, maximum length, compass orientation, and maximum breadth [47]. Points directly below the dripline were noted and, since it was impossible to cover the entire canopy dripline, the boundaries were somewhat generalised. A map of the gaps was created from the ground data using ArcGIS. Comparing the measurements gathered in the field with the remote sensing (RS) measurements enabled the evaluation of the accuracy of the delineated canopy gaps.

2.4. Feature Extraction and Selection

Texture analysis is an effective component of classification for higher-resolution (HR) images, and it is convenient to use because image segmentation is not needed [48]. A crucial property of texture is the repetitive nature of the pattern(s) in an area [26]. The spectral information in images has frequently been used in interpreting and analysing images; however, an image may have one object reflecting differently in different places, and different objects reflecting the same [49]. This affects the accuracy of image analysis. Improvements in the spatial resolution of RS images have made more spatial structures and texture features available, which has increased classification accuracy.
According to Cohen and Spies [50], texture features drawn from images of HR can be applied in forestry research. Lucieer et al. [27] and Suruliandi and Jenicka [26] noted that significantly high classification accuracies have been achieved using textural information. The texture-spectral analysis approach used in this study evaluated the widely used grey-level co-occurrence matrix (GLCM) texture measures and the multivariate local binary pattern (MLBP)—an extension of the state-of-art texture descriptor Local Binary Patterns (LBP). The GLCMs are theoretically simple and easy to implement and they generate fewer features [51].
The LBP texture model was put forth by Ojala et al. [34]. The LBP operator thresholds a neighbourhood of P evenly spaced pixels on a circle of radius R at the value of the centre pixel. It is capable of detecting uniform patterns for any angular space quantisation and spatial resolution. The LBP features were fused with the rotation-invariant GLCM measures, i.e., homogeneity, contrast, entropy, angular second moment, and correlation, which led to the following features: LBP/HOM, LBP/CON, LBP/ENT, LBP/ASM, and LBP/COR for bands 3 (Green), 5 (Red), and 8 (Near-Infrared 2) of the WV-3 image. The values of these features were computed and allocated to the image pixels, thus revealing textural patterns. The histogram of the joint LBP and GLCM feature occurrence formed the final texture feature.
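A minimal sketch of the rotation-invariant LBP step, assuming scikit-image's `local_binary_pattern` with the P = 8, R = 1 neighbourhood reported later in the paper; the input band is synthetic.

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Synthetic stand-in for one WV-3 band (e.g., band 3, Green).
rng = np.random.default_rng(1)
band = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

# Rotation-invariant uniform LBP: P = 8 neighbours on a circle of
# radius R = 1, thresholded at the centre pixel's value.
P, R = 8, 1
lbp = local_binary_pattern(band, P, R, method='uniform')

# The 'uniform' method yields P + 2 = 10 rotation-invariant codes;
# their normalised histogram is the band's texture signature.
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
```

In the paper's pipeline, this per-band histogram is then paired with the GLCM measures (HOM, CON, ENT, ASM, COR) of the same band to form the joint occurrence feature.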
The LBP operator describes the texture of a single band. To improve classification accuracy, Lucieer et al. [27] applied the LBP texture measure to colour images by proposing a multivariate texture model, i.e., the MLBP operator, which describes local pixel relations in three bands [26,27]. Three 3 × 3 matrices describe the local texture in individual bands, while six 3 × 3 matrices compare texture among bands. In the MLBP model, the univariate GLCM measure (e.g., HOM—homogeneity) was extended as multivariate homogeneity (MHOM), i.e., comprising the individual independent homogeneities HOM3, HOM5, and HOM8 representing bands 3 (Green), 5 (Red), and 8 (Near-Infrared 2), respectively. The global texture pattern description was derived by combining the MLBP and MHOM in a 2-D histogram, in which the x ordinate denotes MLBP and the y ordinate denotes MHOM. In order to incorporate colour into the MLBP model, the same procedure was repeated for the MLBP and the remaining GLCM feature composites, i.e., the respective composites of contrast (MCON), entropy (MENT), angular second moment (MASM), and correlation (MCOR).
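A sketch of that 2-D histogram descriptor, assuming synthetic per-pixel MLBP codes and MHOM values; the bin counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for per-pixel MLBP codes and the multivariate homogeneity
# (MHOM) value computed over the Green, Red, and NIR2 bands (synthetic).
mlbp_codes = rng.integers(0, 10, size=10_000)
mhom = rng.random(10_000)

# Global texture descriptor: joint 2-D histogram with MLBP codes on the
# x axis and binned MHOM on the y axis, normalised to sum to 1.
hist2d, _, _ = np.histogram2d(mlbp_codes, mhom,
                              bins=[np.arange(11), np.linspace(0, 1, 9)])
hist2d /= hist2d.sum()
```

Repeating the same pairing with MCON, MENT, MASM, and MCOR gives one such 2-D histogram per GLCM composite, which is what the classifiers later consume.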

2.5. Similarity and Separability between Training Signatures

Two measures were used to compare the similarity between training signatures, i.e., the G-statistic and Euclidean distance. The G-statistic is defined as follows [52]:
G = 2\left(\left[\sum_{s,m}\sum_{i=1}^{tb} f_i \log f_i\right] - \left[\sum_{s,m}\left(\sum_{i=1}^{tb} f_i\right)\log\left(\sum_{i=1}^{tb} f_i\right)\right] - \left[\sum_{i=1}^{tb}\left(\sum_{s,m} f_i\right)\log\left(\sum_{s,m} f_i\right)\right] + \left[\left(\sum_{s,m}\sum_{i=1}^{tb} f_i\right)\log\left(\sum_{s,m}\sum_{i=1}^{tb} f_i\right)\right]\right)
where sample s corresponds to a histogram of the texture measure distribution, while model m corresponds to a histogram of a reference area. tb constitutes the number of bins and fi represents the probability for bin i. The Euclidean distance is calculated as follows [53]:
d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}
where x is the vector of the first spectral signature, y is the vector of the second spectral signature, and n is the number of image bands.

2.6. Training of Random Forest and Support Vector Machine Classifiers

Random forest (RF) models contain bootstrapped ensembles of decision trees; they can handle large numbers of independent variables while still reporting high classification accuracy [54]. In this research, for the LBP model, the RF classifier was trained using 15 WV-3 metrics for the image blocks as predictors to classify the samples as vegetated gap, shaded gap, or forest canopy. For the MLBP model, 5 WV-3 metrics were used to train the RF classifier. During the training process, the learning parameters of the RF classifier, i.e., the number of predictor variables sampled at each split (mtry) and the number of decision trees (ntree), were optimised to obtain the best possible settings. Each tree applies a randomised bagging approach to retrieve a training data subset and uses the withheld samples to cross-validate each tree’s result. This enables RF models to report an “out-of-bag” (OOB) accuracy and measures of the significance of specific variables in the model [54]. The 10-fold cross-validation (CV) technique was used to extract the optimal parameters applied in the training phase of the RF models. The mean decrease in accuracy (MDA) and mean decrease in Gini (MDG) indices of variable importance were used [54]. The MDA computes the added error rate caused by excluding an input variable from a tree, while the MDG calculates the forest-wide average reduction in node impurity from splits on a variable [55]. The higher the MDA and MDG indices, the more influential the corresponding variables. For a robust selection of features, a combined ranking of both indices was applied in the RF models [55].
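The RF tuning described above can be sketched with scikit-learn, where `max_features` and `n_estimators` play the roles of mtry and ntree; the synthetic data and grid values below are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy feature matrix: rows are samples, columns stand in for the 15
# LBP/GLCM metrics; labels: vegetated gap (0), shaded gap (1),
# forest canopy (2). Values are synthetic.
rng = np.random.default_rng(42)
X = rng.random((90, 15))
y = np.repeat([0, 1, 2], 30)

# 10-fold CV grid search over the mtry/ntree analogues.
grid = GridSearchCV(
    RandomForestClassifier(oob_score=True, random_state=0),
    param_grid={'max_features': [3, 5], 'n_estimators': [100, 300]},
    cv=10,
)
grid.fit(X, y)
best_rf = grid.best_estimator_

# MDG analogue: impurity-based importances (sum to 1). An MDA analogue
# would be sklearn.inspection.permutation_importance.
mdg = best_rf.feature_importances_
```

Ranking `mdg` (and a permutation-based MDA analogue) and combining the two rankings mirrors the robust feature-selection step the paper describes.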
Support vector machine (SVM) models apply a supervised binary classifier able to classify linearly inseparable pixels; support vectors are the samples nearest to the separating hyperplane [55]. SVM models find support vectors with an optimal margin near the separating hyperplane [56]. Using kernel functions, SVM models fit non-linear decision boundaries, introducing the gamma (γ) and cost (C) parameters. A similar approach to the RF was used to optimise the SVM parameters, i.e., to select the optimal pair of C and γ. The two parameters, respectively, determine the penalty for misclassification errors and the curvature of the decision boundary. The radial basis function (RBF) kernel was chosen for this study.
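Likewise, the joint search over cost and gamma for an RBF-kernel SVM can be sketched with scikit-learn's `GridSearchCV`; the synthetic data and the grid values are assumptions (chosen to span the magnitudes reported in Table 3).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the MLBP texture-feature training set.
rng = np.random.default_rng(7)
X = rng.random((90, 5))
y = np.repeat([0, 1, 2], 30)

# 10-fold CV search over cost (C) and gamma for the RBF kernel; SVC
# handles the three classes via one-vs-one decomposition internally.
grid = GridSearchCV(
    SVC(kernel='rbf'),
    param_grid={'C': [1, 10, 100, 1000], 'gamma': [0.01, 0.1, 1]},
    cv=10,
)
grid.fit(X, y)
best_C = grid.best_params_['C']
best_gamma = grid.best_params_['gamma']
```

The pair (`best_C`, `best_gamma`) with the lowest CV error is then used to refit the classifier on the full training split, as in the paper.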
The classification of the texture features using the RF and SVM classifiers involved two phases. First, the classifiers were trained using the known samples’ global histograms and their class labels, which must correspond to the classifiers’ set of classes. Second, the unknown samples’ global histograms were the input to the RF and SVM classifiers, which assign the class label of a test sample by comparing its global histogram with those of the training samples. The RF and SVM classification models used 70% of the ground data as training data and 30% as test data.

2.7. Classification Post-Processing

In the classification post-processing stage, the shaded gap and vegetated gap classes were merged into one class, canopy gaps. Therefore, the final map was composed of only two classes: canopy gaps and forest canopy. Morphological filters were implemented with ArcGIS tools: thin corridors and small spaces amongst the forest canopy were removed using the Shrink tool, while the Expand tool was applied to enlarge the classified raster gaps. The Focal Statistics tool was used to minimise pixelation and eliminate remaining trees within gaps; the Majority filter retained the most frequently occurring value. Gap polygons with an area < 100 m² were eliminated using the Select function because only gaps ≥ 100 m² are significant for carbon dynamics [57], and to rule out small gaps that were unlikely to have been caused by felled O. usambarensis trees.
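Outside ArcGIS, the shrink/expand and area-threshold steps can be approximated with SciPy's morphological operations; the toy raster, the use of opening/closing in place of the Shrink/Expand and Majority tools, and the reduced pixel threshold are all assumptions for illustration (at 0.3 m pixels, the real 100 m² threshold would be roughly 100 / 0.3² ≈ 1111 pixels).

```python
import numpy as np
from scipy import ndimage

# Toy binary classification: True = canopy gap, False = forest canopy.
gap = np.zeros((60, 60), dtype=bool)
gap[10:40, 10:40] = True    # one large logging gap
gap[50:52, 50:52] = True    # a small speck to be discarded
gap[20, 20] = False         # a stray "tree" pixel inside the gap

# Shrink/expand analogue: opening removes thin corridors and specks,
# closing fills remaining tree pixels inside gaps.
cleaned = ndimage.binary_closing(ndimage.binary_opening(gap))

# Area threshold: label connected gap regions and keep only those at or
# above a minimum pixel count (50 pixels here, for the toy raster).
labels, n = ndimage.label(cleaned)
sizes = ndimage.sum(cleaned, labels, range(1, n + 1))
keep = np.isin(labels, np.flatnonzero(sizes >= 50) + 1)
```

After this step, `keep` retains the large gap (with the interior tree pixel filled) and drops the sub-threshold speck, mirroring the Shrink/Expand/Select sequence.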

2.8. Measures of Model Performance

Accuracy assessment determines whether pixels identified in the field are classified as they should be [58]. The performance of the two machine learning classifiers was assessed on 30% of the ground data. Confusion matrices comprising overall accuracy (OA), kappa coefficient (κ), producer’s accuracy (PA), and user’s accuracy (UA) were produced and averaged over ten iterations. The OA is calculated by dividing the number of correctly classified pixels by the total number of pixels and is typically expressed as a percentage [59]. The PA is the proportion of reference pixels of a class that the classification map labels as that class [59]. The UA is the likelihood that a pixel labelled as a class on the map actually represents that class on the ground [60]. The kappa coefficient (κ) represents the difference between the observed and expected accuracy. Therefore, the classification accuracies and kappa statistics computed from the error matrices were used to evaluate how the classifiers performed.
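All four measures follow directly from a confusion matrix; the 3 × 3 matrix below (vegetated gap, shaded gap, forest canopy) is illustrative, not one of the study's matrices.

```python
import numpy as np

def accuracy_report(cm):
    """OA, per-class producer's/user's accuracy, and kappa from a
    confusion matrix with reference classes in rows and predicted
    classes in columns."""
    cm = np.asarray(cm, float)
    total = cm.sum()
    diag = np.diag(cm)
    oa = diag.sum() / total                  # overall accuracy
    pa = diag / cm.sum(axis=1)               # producer's accuracy (recall)
    ua = diag / cm.sum(axis=0)               # user's accuracy (precision)
    # Expected agreement by chance, from the row/column marginals.
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, pa, ua, kappa

# Illustrative matrix: 90 test pixels, 83 classified correctly.
cm = [[28, 1, 1],
      [2, 26, 2],
      [0, 1, 29]]
oa, pa, ua, kappa = accuracy_report(cm)
```

For this toy matrix, OA is 83/90 ≈ 92.2%, and kappa corrects that figure for the agreement expected by chance from the class marginals.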

3. Results

3.1. Similarity and Separability between Training Signatures

The G-statistic and Euclidean distance enabled the avoidance of erroneous assumptions regarding the distribution of features. The G-statistic score indicates the likelihood that two samples are from the same population: a higher score means that the probability of the two samples being from the same population is low, and vice versa. Likewise, the Euclidean distance is 0 if two signatures are alike and higher for signatures showing little similarity. The results (Table 2) indicated that forest canopy and vegetated gaps had the lowest Euclidean distance between them, with a value of 41. The G-statistic value between newly created gaps and shaded gaps was the highest (9.01), while that between forest canopy and vegetated gaps was the lowest (0.38), followed by forest canopy and newly created gaps. The Euclidean distance separability measure followed a similar pattern. Generally, the texture classes, i.e., shaded gaps, vegetated gaps, and tree crowns, had good separation.

3.2. Optimisation of Random Forest and Support Vector Machine Classifiers

The results of the optimisation of the RF and SVM parameters for the six image blocks are listed in Table 3. The MLBP/MHOM reported the lowest out-of-bag (OOB) error of 0.097 for the RF model with an mtry value of 3 and ntree value of 1500, while the SVM model achieved a cross-validation (CV) error of 0.091 with 0.01 (gamma) and 10 (cost) for image block D. The MLBP/MHOM for image block A also performed fairly well—OOB error of 0.108 with mtry and ntree values of 3 and 1500, respectively. The SVM model attained a CV error of 0.103—gamma and cost values were 0.1 and 1000, respectively, for the same image block. Image block F reported an OOB error of 0.106 (mtry—3 and ntree—5500) while the SVM model achieved a CV error of 0.108 (gamma—1 and cost—100). The MLBP/MASM recorded the highest OOB error of 0.210 for the RF model (the mtry and ntree values were 3 and 3500, respectively) of image block A. The SVM model recorded the highest CV error of 0.214 (the gamma and cost values were 0.1 and 100, respectively) for the MLBP/MASM of image block E.
The average importance scores (Figure 4) showed the most important variables: band 3’s LBP/HOM and LBP/CON, band 5’s LBP/HOM and LBP/CON, and band 8’s LBP/ENT and LBP/ASM. The least performing variables, which showed the lowest average importance scores, were band 3’s LBP/ENT and LBP/ASM, and band 5’s LBP/ENT and LBP/ASM.

3.3. Model Performance

The confusion matrices in Table 4 show the results of the RF and SVM classifiers for the MLBP/MHOM, the MLBP/MCON, the MLBP/MENT, the MLBP/MASM, and the MLBP/MCOR models for image block D, which recorded an average overall accuracy of 86.88 ± 5.1% and 89.78 ± 3.7% for RF and SVM classifiers, respectively. Image block E’s RF classification attained an average overall accuracy of 87.00 ± 5.1% while the SVM’s was 87.28 ± 5.8%. The SVM classification of the MLBP/MCON was the highest at 95.1%. The respective univariate LBP measures provided overall accuracies (OAs) in the range of 67–75%.
Overall, the MLBP/MHOM operator achieved the highest OA with 93.5%, 92.9%, 92.4%, 91.4%, 91.0%, and 89.8% for image blocks D, E, F, A, C, and B, respectively, for the RF classifier and 94.3%, 93.1%, 92.6%, 91.8%, 91.2% and 90.8% for image blocks D, C, A, F, E, and B, respectively, for the SVM classifier. Generally, the UA ranged between 78–100% for both RF and SVM classifiers. The PA was in the range of 77–100% for the RF model, and 83–100% for the SVM model. The overall results of the classification of the six image blocks are summarised in Table 5. For each classifier, the table reports the average classification accuracy in the form µ ± σ, where µ is the mean and σ is the standard deviation of the OA.

3.4. Image Classification

A subset of the classification results using MLBP/MHOM for image block D, whose RF and SVM model optimisation parameters recorded some of the lowest OOB and CV errors, respectively, is shown in Figure 5. Even based on visual interpretation, the classes are mapped correctly. The multivariate local binary pattern (MLBP) model distinguishes classes well because it assigns distinct and precise codes to local patterns. The boundaries of the extracted canopy gaps are overlaid on the ground-truth canopy gap areas, which are shown in red.

4. Discussion

The application of moderate-resolution remote sensing (RS) imagery to detect canopy gaps from selective logging (SL) may suffer from spectral confusion between logging gaps and gaps caused by natural disturbances such as windfall. RS methods applied to Landsat datasets can only detect selectively logged areas at moderately high intensities, i.e., >20 m³ ha⁻¹ (3–7 trees ha⁻¹); these methods are incapable of quantifying the magnitude and duration of logging damage in regions undergoing lower logging intensities, i.e., <20 m³ ha⁻¹ [61]. Due to the sub-pixel scale of SL gaps, the broad spectral range of Landsat wavelengths cannot detect subtle forest changes. Pansharpening of multispectral images and upscaling of spatial resolutions are among the commonly applied techniques to enhance lower-resolution imagery. In intensive SL analysis, high spatial resolution (5–10 m pixel size) images enable the detection of tree-fall gaps, log landings, and logging roads. However, for SL where only individual trees are targeted, the application of very high-resolution (VHR) remotely sensed datasets is viable for detecting and mapping disappearing tree crowns.
This study aimed to discover whether grey-level co-occurrence matrix (GLCM)- and multivariate local binary pattern (MLBP)-based rotation-invariant feature descriptors derived from VHR WorldView-3 (WV-3) imagery may be extended and used for canopy gap classification in a tropical sub-montane forest. The study applied a local binary pattern (LBP) model fused with a GLCM model, whereby the rotation invariant LBP operator was used to obtain the LBP images of subsets of images extracted from a WV-3 scene covering the study area. Then, five GLCM measures of the LBP images were calculated to describe the image texture features. The LBP texture measure was applied to colour images by applying a multivariate texture model—the multivariate local binary pattern (MLBP) operator. Due to the robustness of the model, the classes were found to be separable. For the LBP model, a uniformity measure was applied to show the uniformity of the neighbourhood’s pixel values—according to Ojala et al. [34], in a textured image >90% of patterns are uniform.
A circular neighbourhood set comprising 8 neighbouring pixels and a radius of 1 was used—the values for P and R were 8 and 1, respectively. Large P and R values are appropriate for describing large-scale textures, and vice versa [38]. The circular symmetrical neighbour set approach is more robust and delivers more accurate results [51]. According to Clausi [62], different combinations of values of P and R in neighbourhood sets might offer meaningful texture descriptions. Larger window sizes enable classifiers to extract rich textural information from a pixel, which could improve accuracy; however, a larger window size might reduce sensitivity to class edges, and eventually smooth over the image [62].
Previously, very-high-resolution (VHR) earth observation (EO) data have been used to detect SL in tropical forests. For example, Asner et al. [63] used canopy height models (CHMs) from a single LiDAR (light detection and ranging) data acquisition, while Andersen et al. [5] used simple differencing of CHMs to successfully detect disappearing tree crowns. Ellis et al. [18] and Rex et al. [21] used a single date and bi-temporal LiDAR data, respectively, to estimate aboveground biomass (AGB) in selectively logged forests. Dalagnol et al. [11] combined airborne LiDAR and VHR satellite data to quantitatively assess and validate canopy gaps due to tree loss—an average precision of 64% was reported. Baldauf and Köhl [64] applied automated mapping using time-series approaches to detect SL using calibrated SAR (synthetic aperture radar) data. Before the introduction of high-resolution (HR) optical data, the costly traditional aerial photography was used for mapping canopy gaps—technological advances have revitalised its use through unmanned aerial vehicles (UAVs). Spaias et al. [19] used UAV data acquired using a hyperspectral camera to detect canopy gaps in a tropical forest—where cloud-computing resources are lacking the amount of spatial and spectral data acquired may make the data processing computationally demanding. Ota et al. [20] used bi-temporal digital aerial photographs (DAPs) to compute the change in AGB due to logging. Kamarulzaman et al. [22] used UAV data to detect forest canopy gaps from SL. The support vector machine (SVM) and artificial neural network (ANN) classifiers achieved higher overall accuracy of 85% compared to conventional classifiers. However, LiDAR and UAV data cover relatively small spatial extents.
The accuracy values reported here show that good classification results were obtained. GLCM features perform better with ≤10 classes and can therefore outdo more powerful methods [51]. This research used just five co-occurrence descriptors, although Di Ruberto et al. [65] state that a higher number of co-occurrence features could yield excellent results. The univariate LBP measures provided classification accuracies in the range of 67–75%, while the multivariate LBP models gave higher classification accuracies (Table 5). Although the MLBP/MHOM recorded the lowest out-of-bag (OOB) and cross-validation (CV) errors, the MLBP/MCON SVM model, with a CV error of 0.112 (gamma and cost values of 0.1 and 100, respectively), outperformed all the other models, recording the highest classification accuracy of 95.1% for image block E. The lowest classification accuracy (80.0%) was recorded by the RF MLBP/MASM model for image block F. The MLBP/MASM performed poorly in all RF and SVM models. The overall accuracies (OAs) reported could have been even higher were it not for confusion between the classes.
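The gamma/cost tuning behind the CV errors discussed above can be illustrated with a generic grid search over an RBF-kernel SVM. This is a sketch under assumptions: the features and labels below are synthetic placeholders, not the study's MLBP data, and the grid simply spans the gamma and cost values reported in Table 3.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# stand-in feature vectors for three classes (FC, SG, VG), 10 features each
X = np.concatenate([rng.normal(loc=k, size=(40, 10)) for k in range(3)])
y = np.repeat([0, 1, 2], 40)

# gamma/cost grid spanning the values reported in Table 3
grid = {"gamma": [0.01, 0.1, 1], "C": [10, 100, 1000]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
search.fit(X, y)
cv_error = 1.0 - search.best_score_   # analogous to the CV error reported
```

The pair in `search.best_params_` plays the role of the (gamma, cost) combination selected for each MLBP model.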
Fusing textural and spectral information from three WV-3 bands performed better than the corresponding basic models. Future research will aim to extend the model to more than three bands, and even to hyperspectral data. This will clarify the contribution of individual colour bands to texture analysis and help to investigate novel combinations of colour and texture for classification. Although complexity and computational demands would increase, adding more bands might not significantly increase the amount of textural information [27]; however, any net gain in classification accuracy could justify the added cost.
Persistent cloud cover in tropical forests presents a major challenge when mapping canopy gaps using optical RS, compounded by the absence of reliable cloud and cloud-shadow detection algorithms. This greatly limited the size of the study area.

5. Conclusions

Accurate mapping of canopy gaps is of great importance to forest managers because it guides on-the-ground conservation, restoration, and management applications. The results reported in this study show that canopy gaps created by illegal logging of Ocotea usambarensis can be mapped with high accuracy. The study used an approach based on features that integrate both the texture and spectral distributions of a very-high-resolution WorldView-3 dataset, thereby accounting for cross-band relations. To increase classification accuracy, the G-statistic and Euclidean distance measures were used to discriminate between the samples. The framework used in this study could allow forest managers to develop improved methods of mapping canopy gaps at larger spatial extents using remotely sensed data and little or no fieldwork; at present, however, it should be applied only as a guide and cannot be generalised. Future research will aim to develop a technique for combining more than three bands of different kinds of remote sensing data.
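The histogram similarity measures mentioned above can be sketched as follows. This is a minimal illustration assuming one common log-likelihood form of the G-statistic for comparing normalised texture histograms; the exact formulation used in the study (after Sokal and Rohlf [52]) may differ, and the class histograms are hypothetical.

```python
import numpy as np

def g_statistic(s, m, eps=1e-12):
    """Log-likelihood G-statistic between an observed histogram s and a
    reference histogram m; 0 for identical distributions, larger = less similar."""
    s = np.asarray(s, float) + eps   # eps guards against log(0)
    m = np.asarray(m, float) + eps
    s, m = s / s.sum(), m / m.sum()
    return float(2.0 * np.sum(s * np.log(s / m)))

def euclidean_distance(s, m):
    """Euclidean distance between two histogram feature vectors."""
    return float(np.linalg.norm(np.asarray(s, float) - np.asarray(m, float)))

h_gap = np.array([0.10, 0.25, 0.65])     # hypothetical gap-class histogram
h_canopy = np.array([0.40, 0.35, 0.25])  # hypothetical canopy-class histogram
```

Small values of either measure indicate similar texture distributions, so an unlabelled sample can be assigned to the class whose reference histogram it is closest to.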

Author Contributions

Conceptualisation, C.M.J., E.A., I.A. and M.A.M.; validation, C.M.J. and E.A.; formal analysis, C.M.J. and E.A.; data curation, C.M.J.; writing—original draft preparation, C.M.J.; writing—review and editing, E.A., I.A. and M.A.M.; supervision, E.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors sincerely thank the academic editors and reviewers for their time and valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Solberg, R.; Malnes, E.; Amlien, J.; Danks, F.; Haarpaintner, J.; Høgda, K.-A.; Johansen, B.E.; Karlsen, S.R.; Koren, H. State of the art for tropical forest monitoring by remote sensing. In A Review Carried Out for the Ministry for the Environment of Norway and the Norwegian Space Centre; Norwegian Computing Centre: Oslo, Norway, 2008; p. 11. [Google Scholar]
  2. Gibson, L.; Lee, T.M.; Koh, L.P.; Brooks, B.W.; Gardner, T.A.; Barlow, J.; Peres, C.A.; Bradshaw, C.J.A.; Laurance, W.F.; Lovejoy, T.E.; et al. Primary forests are irreplaceable for sustaining tropical biodiversity. Nature 2011, 478, 378–381. [Google Scholar] [CrossRef] [PubMed]
  3. Miettinen, J.; Stibig, H.J.; Achard, F. Remote sensing of forest degradation in Southeast Asia-Aiming for a regional view through 5-30 m satellite data. Glob. Ecol. Conserv. 2014, 2, 24–36. [Google Scholar] [CrossRef]
  4. Jackson, C.M.; Adam, E. Remote sensing of selective logging in tropical forests: Current state and future directions. iForest 2020, 13, 286–300. [Google Scholar] [CrossRef]
  5. Andersen, H.E.; Reutebuch, S.E.; McGaughey, R.J.; D’Oliveira, M.V.; Keller, M. Monitoring selective logging in western Amazonia with repeat lidar flights. Remote Sens. Environ. 2014, 151, 157–165. [Google Scholar] [CrossRef] [Green Version]
  6. Achard, F.; Eva, H.D.; Stibig, H.-J.; Mayaux, P.; Gallego, J.; Richards, T.; Malingreau, J.-P. Determination of deforestation rates of the world’s humid tropical forests. Science 2002, 297, 999–1002. [Google Scholar] [CrossRef] [Green Version]
  7. Edwards, D.P.; Tobias, J.; Sheil, D.; Meijaard, E.; Laurance, W.F. Maintaining ecosystem function and services in logged tropical forests. Trends Ecol. Evol. 2014, 29, 511–520. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. KWS. Mt Kenya Ecosystem Management Plan 2010–2020. Available online: https://www.kws.go.ke/file/1470/download?token=1lO6G3zI (accessed on 16 February 2019).
  9. NEMA. Kenya State of the Environment and Outlook 2010; Supporting the Delivery of Vision 2030. Available online: http://www.enviropulse.org/documents/Kenya_SOE.pdf (accessed on 3 January 2019).
  10. Kigomo, B.N. The growth of camphor (Ocotea usambarensis Engl.) in plantation in the eastern Aberdare range, Kenya. East Afr. Agri. For. J. 1987, 52, 141–147. [Google Scholar] [CrossRef]
  11. Dalagnol, R.; Phillips, O.L.; Gloor, E.; Galvao, L.S.; Wagner, F.H.; Locks, C.J.; Luiz, E.O.C.; Aragao, L.E. Quantifying canopy tree loss and gap recovery in tropical forests under low-intensity logging using VHR satellite imagery and airborne lidar. Remote Sens. 2019, 11, 817. [Google Scholar] [CrossRef] [Green Version]
  12. Betts, H.D.; Brown, L.J.; Stewart, G.H. Forest canopy gap detection and characterisation by the use of high-resolution Digital Elevation Models. N. Z. J. Ecol. 2005, 29, 95–103. [Google Scholar]
  13. Runkle, J.R. Guidelines and Sample Protocol for Sampling Forest Gaps; General technical report, PNW-GTR-283, USDA Forest Service; Pacific Northwest Research Station: Portland, OR, USA, 1992; 44p. [Google Scholar]
  14. Nakashizuka, T.; Katsuki, T.; Tanaka, H. Forest canopy structure analyzed by using aerial photographs. Ecol. Res. 1995, 10, 13–18. [Google Scholar] [CrossRef]
  15. Masiliūnas, D. Evaluating the Potential of Sentinel-2 and Landsat Image Time Series for Detecting Selective Logging in the Amazon. Master’s Thesis, Report GIRS-2017-27. Wageningen University and Research, Wageningen, The Netherlands, 2017. [Google Scholar]
  16. Hamunyela, E.; Verbesselt, J.; Herold, M. Using spatial context to improve early detection of deforestation from Landsat time series. Remote Sens. Environ. 2016, 172, 126–138. [Google Scholar] [CrossRef]
  17. Costa, O.B.; Matricardi, E.A.T.; Pedlowski, M.A.; Miguel, E.P.; Gaspar, R.O. Selective logging detection in the Brazilian Amazon. Floresta Ambient. 2019, 26, e20170634. [Google Scholar] [CrossRef] [Green Version]
  18. Ellis, P.; Griscom, B.; Walker, W.; Gonçalves, F.; Cormier, T. Mapping selective logging impacts in Borneo with GPS and airborne lidar. For. Ecol. Manag. 2016, 365, 184–196. [Google Scholar] [CrossRef] [Green Version]
  19. Spaias, L.; Suomlainen, J.; Tanago, J.G.D. Radiometric detection of selective logging in tropical forest using UAV-borne hyperspectral data and simulation of satellite imagery. In Proceedings of the 2016 European Space Agency Living Planet Symposium, Prague, the Czech Republic, 9–13 May 2016. [Google Scholar]
  20. Ota, T.; Ahmed, O.S.; Minn, S.T.; Khai, T.C.; Mizoue, N.; Yoshida, S. Estimating selective logging impacts on aboveground biomass in tropical forests using digital aerial photography obtained before and after a logging event from an unmanned aerial vehicle. For. Ecol. Manag. 2019, 433, 162–169. [Google Scholar] [CrossRef]
  21. Rex, F.; Silva, C.; Paula, A.; Corte, A.; Klauberg, C.; Mohan, M.; Cardil, A.; da Silva, V.S.; de Almeida, D.R.A.; Garcia, M.; et al. Comparison of statistical modeling approaches for estimating tropical forest aboveground biomass stock and reporting their changes in low-intensity logging areas using multi-temporal LiDAR data. Remote Sens. 2020, 12, 1498. [Google Scholar] [CrossRef]
  22. Kamarulzaman, A.; Wan Mohd Jaafar, W.S.; Abdul Maulud, K.N.; Saad, S.N.M.; Omar, H.; Mohan, M. Integrated segmentation approach with machine learning classifier in detecting and mapping post selective logging impacts using UAV imagery. Forests 2022, 13, 48. [Google Scholar] [CrossRef]
  23. Ghasemi, N.; Sahebi, M.R.; Mohammadzadeh, A. Biomass estimation of a temperate deciduous forest using wavelet analysis. IEEE Trans. Geosci. Remote Sens. 2013, 51, 765–776. [Google Scholar] [CrossRef]
  24. Zhou, J.; Guo, R.Y.; Sun, M.; Di, T.T.; Wang, S.; Zhai, J.; Zhao, Z. The effects of GLCM parameters on LAI estimation using texture values from Quickbird satellite imagery. Sci. Rep. 2017, 7, 7366. [Google Scholar] [CrossRef] [Green Version]
  25. Kim, K.I.; Jung, K.; Park, S.H.; Kim, H.J. Support vector machines for texture classification. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1542–1550. [Google Scholar] [CrossRef] [Green Version]
  26. Suruliandi, A.; Jenicka, S. Texture-based classification of remotely sensed images. Int. J. Signal Imaging Syst. Eng. 2015, 8, 260–272. [Google Scholar] [CrossRef]
  27. Lucieer, A.; Stein, A.; Fisher, P. Multivariate texture-based segmentation of remotely sensed imagery for extraction of objects and their uncertainty. Int. J. Remote Sens. 2005, 26, 2917–2936. [Google Scholar] [CrossRef]
  28. Jenicka, S.; Suruliandi, A. Comparison of soft computing approaches for texture-based land cover classification of remotely sensed image. Res. J. Appl. Sci. Eng. Technol. 2015, 10, 1216–1226. [Google Scholar] [CrossRef]
  29. Haralick, R.M.; Shanmuga, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cyber. 1973, 3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  30. Cross, G.R.; Jain, A. Markov random field texture models. IEEE Trans. Pattern Anal. Mach. Intell. 1983, 5, 25–39. [Google Scholar] [CrossRef] [PubMed]
  31. Unser, M. Texture classification and segmentation using wavelet frames. IEEE Trans. Image Process. 1995, 4, 1549–1560. [Google Scholar] [CrossRef] [Green Version]
  32. He, D.C.; Wang, L. Texture unit, texture spectrum, and texture analysis. IEEE Trans. Geosci. Remote Sens. 1990, 28, 509–513. [Google Scholar] [CrossRef]
  33. Arivazhagan, S.; Ganesan, L. Texture classification using wavelet transform. Pattern Recognit. Lett. 2003, 24, 1513–1521. [Google Scholar] [CrossRef]
  34. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  35. Rakwatin, P.; Longépé, N.; Isoguchi, O.; Shimada, M.; Uryu, Y. Mapping tropical forest using ALOS PALSAR 50m resolution data with multiscale GLCM analysis. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 1234–1237. [Google Scholar] [CrossRef]
  36. Wang, T.; Zhang, H.; Lin, H.; Fang, C. Textural-spectral feature-based species classification of mangroves in Mai Po Nature Reserve from Worldview-3 imagery. Remote Sens. 2016, 8, 24. [Google Scholar] [CrossRef] [Green Version]
  37. Marissiaux, Q. Characterizing Tropical Forest Dynamics by Remote-Sensing Using Very High Resolution and Sentinel-2 Images. Master’s Thesis, Faculty of Bioengineers, Catholic University of Louvain, Ottignies-Louvain-la-Neuve, Belgium, 2018. [Google Scholar]
  38. Burnett, M.W.; White, T.D.; McCauley, D.J.; De Leo, G.A.; Micheli, F. Quantifying coconut palm extent on Pacific islands using spectral and textural analysis of very high-resolution imagery. Int. J. Remote Sens. 2019, 40, 1–27. [Google Scholar] [CrossRef]
  39. KFS. Mt. Kenya Forest Reserve Management Plan 2010–2019. Available online: http://www.kenyaforestservice.org/documents/MtKenya.pdf (accessed on 3 January 2019).
  40. Lange, S.; Bussmann, R.W.; Beck, E. Stand structure and regeneration of the subalpine Hagenia abyssinica of Mt. Kenya. Bot. Acta. 1997, 110, 473–480. [Google Scholar] [CrossRef]
  41. Bussmann, R.W.; Beck, E. Regeneration- and cyclic processes in the Ocotea-Forests (Ocotea usambarensis Engl.) of Mount Kenya. Verh. GfÖ. 1995, 24, 35–38. [Google Scholar]
  42. DigitalGlobe. The Benefits of the 8 Spectral Bands of WorldView-2. Available online: https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/35/DG-8SPECTRAL-WP_0.pdf (accessed on 2 February 2019).
  43. DigitalGlobe. WorldView-3. Above + Beyond. Available online: http://worldview3.digitalglobe.com/ (accessed on 2 February 2019).
  44. Li, H.; Jing, L.; Tang, Y. Assessment of pansharpening methods applied to worldview-2 imagery fusion. Sensors 2017, 17, 89. [Google Scholar] [CrossRef] [PubMed]
  45. Jackson, C.M.; Adam, E. A machine learning approach to mapping canopy gaps in an indigenous tropical submontane forest using WorldView-3 multispectral satellite imagery. Environ. Conserv. 2022, 49, 255–262. [Google Scholar] [CrossRef]
  46. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef] [Green Version]
  47. Fox, T.J.; Knutson, M.G.; Hines, R.K. Mapping forest canopy gaps using air-photo interpretation and ground surveys. Wildl. Soc. Bull. 2000, 28, 882–889. [Google Scholar] [CrossRef]
  48. Kupidura, P. The comparison of different methods of texture analysis for their efficacy for land use classification in satellite imagery. Remote Sens. 2019, 11, 1233. [Google Scholar] [CrossRef] [Green Version]
  49. Cui, H.; Qian, H.; Qian, L.; Li, Y. Remote sensing experts classification system applying in the land use classification in Guangzhou City. In Proceedings of the 2nd International Congress on Image and Signal Processing, CISP’09, Tianjin, China, 17–19 October 2009. [Google Scholar] [CrossRef]
  50. Cohen, W.B.; Spies, T.A. Estimating structural attributes of Douglas fir/western hemlock forest stands from Landsat and SPOT imagery. Remote Sens. Environ. 1992, 41, 1–17. [Google Scholar] [CrossRef]
  51. Bianconi, F.; Fernández, A. Rotation invariant co-occurrence features based on digital circles and discrete Fourier transform. Pattern Recognit. Lett. 2014, 48, 34–41. [Google Scholar] [CrossRef]
  52. Sokal, R.; Rohlf, J. Introduction to Biostatistics, 2nd ed.; Freeman and Company: New York, NY, USA, 1987. [Google Scholar]
  53. Tuominen, J.; Lipping, T. Spectral characteristics of common reed beds: Studies on spatial and temporal variability. Remote Sens. 2016, 8, 181. [Google Scholar] [CrossRef] [Green Version]
  54. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  55. Han, H.; Guo, X.; Yu, H. Variable selection using mean decrease accuracy and mean decrease Gini based on random forest. In Proceedings of the 7th IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 26–28 August 2016. [Google Scholar] [CrossRef]
  56. Vapnik, V. The Nature of Statistical Learning Theory, 2nd ed.; Springer: New York, NY, USA, 2000. [Google Scholar]
  57. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for landcover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  58. Adam, E.; Mutanga, O.; Odindi, J.; Abdel-Rahman, E.M. Land-use/cover classification in a heterogeneous coastal landscape using RapidEye imagery: Evaluating the performance of random forest and support vector machines classifiers. Int. J. Remote Sens. 2014, 35, 3440–3458. [Google Scholar] [CrossRef]
  59. Negrón-Juárez, R.I.; Chambers, J.Q.; Marra, D.M.; Ribeiro, G.H.P.M.; Rifai, S.W.; Higuchi, N.; Roberts, D. Detection of subpixel treefall gaps with Landsat imagery in Central Amazon forests. Remote Sens. Environ. 2011, 115, 3322–3328. [Google Scholar] [CrossRef]
  60. Mutanga, O.; Adam, E.; Cho, M.A. High-density biomass estimation for wetland vegetation using WorldView-2 imagery and random forest regression algorithm. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 399–406. [Google Scholar] [CrossRef]
  61. Hethcoat, M.G.; Edwards, D.P.; Carreiras, J.M.B.; Bryant, R.G.; França, F.M.; Quegan, S. A machine learning approach to map tropical selective logging. Remote Sens. Environ. 2019, 221, 569–582. [Google Scholar] [CrossRef]
  62. Clausi, D.A. An Analysis of Co-Occurrence Texture Statistics as a Function of Grey Level Quantization. Can. J. Remote Sens. 2002, 28, 45–62. [Google Scholar] [CrossRef]
  63. Asner, G.P.; Kellner, J.R.; Kennedy-Bowdoin, T.; Knapp, D.E.; Anderson, C.; Martin, R.E. Forest canopy gap distributions in the southern Peruvian Amazon. PLoS ONE 2013, 8, e60875. [Google Scholar] [CrossRef] [Green Version]
  64. Baldauf, T.; Köhl, M. Use of TerraSAR-X for forest degradation mapping in the context of REDD. In Proceedings of the World Forestry Congress XIII, Buenos Aires, Argentina, 23 October 2009. [Google Scholar]
  65. Di Ruberto, C.; Fodde, G.; Putzu, L. Comparison of statistical features for medical colour image classification. In International Conference on Computer Vision Systems; Springer: Cham, Switzerland, 2015; Volume 9163, pp. 3–13. [Google Scholar] [CrossRef]
Figure 1. Map of the study area in Mt. Kenya Forest Reserve (MKFR) showing the location of (a) Chuka Forest, and (b) Chuka Forest and the adjacent locations.
Figure 2. A true-colour image showing (a) two uncut Ocotea trees in the 2014 WorldView-2 image; (b) canopy gaps created after cutting the camphor trees in the 2019 WorldView-3 image.
Figure 3. A subset of the WorldView-3 images of the study area: (a) band 8; (b) colour composite of bands 8, 5, and 3 (RGB).
Figure 4. The relative importance of the LBP/GLCM model in discriminating between vegetated gaps, shaded gaps, and tree crowns by random forest. Error bars are 95% confidence intervals.
Figure 5. Subset classification results of image block D from the multivariate texture classification (bands 8, 5, and 3 as RGB) based on the MLBP/MHOM distribution: (a) RF and (b) SVM.
Table 1. Reference data and their description.
Class #   Class           Sample                     Description
Class 1   Vegetated gap   Geomatics 03 00014 i001    Low-lying vegetation in the forest canopy
Class 2   Shaded gap      Geomatics 03 00014 i002    Gaps in the forest canopy that appear darker because of shadows cast by nearby tree crowns
Class 3   Forest canopy   Geomatics 03 00014 i003    The topmost layer of the forest, mostly tree crowns, with a few emergent trees rising above the canopy
Table 2. Similarity of texture features calculated by (a) G-statistic and (b) Euclidean distance. NCG—newly created gap, SG—shaded gap, VG—vegetated gap, and FC—forest canopy.
(a) G-Statistic                    (b) Euclidean Distance
       NCG    SG     VG     FC            NCG   SG    VG    FC
NCG     –    9.01   7.15   1.43    NCG     –    106    77    49
SG            –     8.02   2.53    SG            –     96    55
VG                   –     0.38    VG                  –     41
FC                          –      FC                         –
Table 3. The RF and SVM model optimisation parameters of the multivariate local binary pattern (MLBP) model.
Block IDFeature Random ForestSupport Vector MachineBlock IDFeatureRandom ForestSupport Vector Machine
mtryntreeOOB ErrorGammaCostCV ErrormtryntreeOOB ErrorGammaCostCV Error
AMLBP/MHOM315000.1080.110000.103DMLBP/MHOM315000.0970.01100.091
MLBP/MCON245000.1281100.129 MLBP/MCON255000.1210.0110000.116
MLBP/MENT225000.2040.11000.202 MLBP/MENT245000.16511000.159
MLBP/MASM335000.2100.11000.212 MLBP/MASM35000.1990.0110000.191
MLBP/MCOR225000.1400.0110000.132 MLBP/MCOR335000.1360.1100.127
BMLBP/MHOM245000.1090.0110000.110EMLBP/MHOM25000.1011100.109
MLBP/MCON235000.12911000.122 MLBP/MCON215000.1160.11000.112
MLBP/MENT315000.1770.110000.174 MLBP/MENT355000.1841100.179
MLBP/MASM225000.1970.01100.190 MLBP/MASM215000.2080.11000.214
MLBP/MCOR255000.1401100.136 MLBP/MCOR265000.1490.011000.136
CMLBP/MHOM335000.1090.01100.102FMLBP/MHOM355000.10611000.108
MLBP/MCON345000.1260.1100.119 MLBP/MCON315000.1240.0110000.119
MLBP/MENT365000.2100.0110000.204 MLBP/MENT345000.2051100.201
MLBP/MASM235000.2000.0110000.197 MLBP/MASM25000.1991100.194
MLBP/MCOR225000.1391100.134 MLBP/MCOR315000.1350.11000.138
Table 4. Confusion matrices for random forest and support vector machine classifiers for the MLBP/MHOM, the MLBP/MCON, the MLBP/MENT, the MLBP/MASM, and the MLBP/MCOR models for image block D.
MLBP/MHOM

Random Forest
          FC   SG   VG   Total   UA (%)
FC        28    1    3      32       88
SG         0   29    0      29      100
VG         2    0   27      29       93
Total     30   30   30      90
PA (%)    93   97   90
Overall accuracy = 93.3%; Kappa = 0.90

Support Vector Machine
          FC   SG   VG   Total   UA (%)
FC        28    0    2      30       93
SG         1   30    1      32       94
VG         1    0   27      28       96
Total     30   30   30      90
PA (%)    93  100   90
Overall accuracy = 94.4%; Kappa = 0.92

MLBP/MCON

Random Forest
          FC   SG   VG   Total   UA (%)
FC        27    1    3      31       90
SG         1   28    1      30       91
VG         2    1   26      29       82
Total     30   30   30      90
PA (%)    90   93   87
Overall accuracy = 90.0%; Kappa = 0.85

Support Vector Machine
          FC   SG   VG   Total   UA (%)
FC        28    1    2      31       90
SG         0   29    2      31       94
VG         2    0   26      28       93
Total     30   30   30      90
PA (%)    93   97   87
Overall accuracy = 92.2%; Kappa = 0.88

MLBP/MENT

Random Forest
          FC   SG   VG   Total   UA (%)
FC        25    3    4      32       78
SG         3   26    1      30       87
VG         2    1   25      28       89
Total     30   30   30      90
PA (%)    83   87   83
Overall accuracy = 84.4%; Kappa = 0.77

Support Vector Machine
          FC   SG   VG   Total   UA (%)
FC        25    3    3      31       81
SG         3   27    1      31       87
VG         2    0   26      28       93
Total     30   30   30      90
PA (%)    83   90   87
Overall accuracy = 86.7%; Kappa = 0.80

MLBP/MASM

Random Forest
          FC   SG   VG   Total   UA (%)
FC        23    2    3      28       82
SG         4   25    3      32       78
VG         3    3   24      30       80
Total     30   30   30      90
PA (%)    77   83   80
Overall accuracy = 80.0%; Kappa = 0.70

Support Vector Machine
          FC   SG   VG   Total   UA (%)
FC        25    3    4      32       78
SG         3   27    1      31       87
VG         2    0   25      27       93
Total     30   30   30      90
PA (%)    83   90   83
Overall accuracy = 85.6%; Kappa = 0.78

MLBP/MCOR

Random Forest
          FC   SG   VG   Total   UA (%)
FC        26    3    4      33       85
SG         2   27    1      30       90
VG         2    0   25      27       78
Total     30   30   30      90
PA (%)    87   90   83
Overall accuracy = 86.7%; Kappa = 0.80

Support Vector Machine
          FC   SG   VG   Total   UA (%)
FC        26    2    3      31       84
SG         2   28    0      30       93
VG         2    0   27      29       93
Total     30   30   30      90
PA (%)    87   93   90
Overall accuracy = 89.0%; Kappa = 0.84
Table 5. The overall accuracy results of the six image blocks.
                                        Image Blocks
Texture Descriptor       A           B           C           D           E           F
RF classifier
MLBP/MHOM              91.4        89.8        91.0        93.3        92.9        92.4
MLBP/MCON              83.9        83.2        85.3        90.0        92.3        86.7
MLBP/MENT              82.6        83.9        82.4        84.4        83.4        82.6
MLBP/MASM              81.1        81.4        81.1        80.0        82.9        80.0
MLBP/MCOR              84.4        84.4        84.5        86.7        83.5        85.5
Average OA         84.7 ± 4.0  84.5 ± 3.2  84.9 ± 3.8  86.9 ± 5.1  87.0 ± 5.1  85.4 ± 4.7
SVM classifier
MLBP/MHOM              92.6        90.8        93.1        94.4        91.2        91.8
MLBP/MCON              83.2        88.1        90.2        92.2        95.1        90.2
MLBP/MENT              82.8        85.7        80.3        86.7        83.7        82.8
MLBP/MASM              81.1        85.9        81.4        85.6        80.9        81.3
MLBP/MCOR              87.5        85.5        86.5        90.0        85.5        84.5
Average OA         85.4 ± 4.6  87.2 ± 2.3  86.3 ± 5.5  89.9 ± 3.7  87.3 ± 5.8  86.1 ± 4.6

Share and Cite

MDPI and ACS Style

Jackson, C.M.; Adam, E.; Atif, I.; Mahboob, M.A. Feature Extraction and Classification of Canopy Gaps Using GLCM- and MLBP-Based Rotation-Invariant Feature Descriptors Derived from WorldView-3 Imagery. Geomatics 2023, 3, 250-265. https://doi.org/10.3390/geomatics3010014
