Article

Lidar and Multispectral Imagery Classifications of Balsam Fir Tree Status for Accurate Predictions of Merchantable Volume

Sarah Yoga, Jean Bégin, Benoît St-Onge and Demetrios Gatziolis
1 Department of Wood and Forest Sciences, Université Laval, Quebec City, QC G1V 0A6, Canada
2 Department of Geography, Université du Québec à Montréal, Montréal, QC H3C 3P8, Canada
3 USDA Forest Service, PNW Research Station, Portland, OR 97205, USA
* Author to whom correspondence should be addressed.
Forests 2017, 8(7), 253; https://doi.org/10.3390/f8070253
Submission received: 31 May 2017 / Revised: 9 July 2017 / Accepted: 11 July 2017 / Published: 15 July 2017
(This article belongs to the Special Issue Remote Sensing of Forest Disturbance)

Abstract

Recent increases in forest diseases have produced significant mortality in boreal forests. These disturbances influence merchantable volume predictions as they affect the distribution of live and dead trees. In this study, we assessed the use of lidar, alone or combined with multispectral imagery, to classify trees and predict the merchantable volumes of 61 balsam fir plots in a boreal forest in eastern Canada. We delineated single trees on a canopy height model; the number of detected trees represented 92% of the field-tallied trees. Using lidar intensity and image pixel metrics, trees were classified as live or dead with an overall accuracy of 89% and a kappa coefficient of 0.78. Plots were classified into low and high mortality classes using a 10.5% mortality-ratio threshold, and lidar returns associated with dead trees were clipped from the point clouds. Before clipping, the root mean square errors of the volume predictions were 22.7 m3 ha−1 in the low mortality plots and 39 m3 ha−1 in the high mortality plots; after clipping, they decreased to 20.9 m3 ha−1 and 32.3 m3 ha−1, respectively. Our study suggests that lidar and multispectral imagery can be used to accurately filter dead balsam fir trees and decrease the merchantable volume prediction error by 17.2% in high mortality plots and by 7.9% in low mortality plots.

1. Introduction

Remote sensing has greatly improved the quality and the efficiency of forest inventories. Light detection and ranging (lidar) is an active remote sensing tool which uses pulses of light to measure the distance to and record the strength of light backscattering from a target. Lidar generates point clouds which are a three-dimensional representation of the volumetric interaction between pulse photons and illuminated objects. Point clouds have been used to model forest attributes such as tree height, diameter at breast height (dbh), biomass, basal area, and volume [1,2,3,4,5].
Lidar intensity, defined as the quantification of the strength of the pulse backscattering received from the object, has been proven useful in feature extraction [6], species identification [7], land-cover classification [8], forest attribute modelling [9,10], snag detection [11], and point cloud filtering [12]. For example, Bright et al. [9] used lidar intensity to estimate dead basal areas in beetle-affected pine forests. Kim et al. [10] observed that live and dead trees have different intensity distribution modes and that a segmentation method can predict the standing tree biomass of burned mixed coniferous forests. Casas et al. [13] used lidar intensity to detect dead trees. Lidar has also been combined with other remote sensing tools such as optical imagery to improve dead tree identification [14,15,16].
Many of these studies have been carried out on sites where insect, disease or fire disturbances have caused significant damage to the forests [9,10,11,12]. However, we did not find any study demonstrating the ability of lidar sensors to detect dead trees on sites with low mortality where, for example, only natural mortality is observed. Yet, accurate information on live trees alone (e.g., merchantable volume, MV, or live biomass) is essential in an operational context because it is directly linked to the economic value of timber. Improved estimates of tree mortality are also fundamental for sustainable forest management. There is, therefore, a need to investigate the contribution of lidar and multispectral imagery when predicting live tree attributes in both high and low mortality contexts.
The aim of our study is to develop and evaluate a methodology capable of discriminating live from dead trees and to assess the accuracy of merchantable volumes predicted from filtered lidar point clouds.

2. Materials and Methods

2.1. Study Area

The study was conducted in the Montmorency Forest, a teaching and research facility of Université Laval. The forest is located in central Quebec, QC, Canada (47.3° N, 71.1° W), about 70 km north of Quebec City (Figure 1). Elevation at the site ranges from approximately 460 to 1040 m above sea level. The mean annual precipitation is 1589 mm and the mean annual temperature is 0.3 °C [17]. The site lies on the Laurentian Plateau and is part of the balsam fir–white birch bioclimatic domain. It is mainly composed of conifers: balsam fir (Abies balsamea [L.] Miller), black spruce (Picea mariana [Miller]), and white spruce (Picea glauca [Moench] Voss). White birch (Betula papyrifera Marshall) and trembling aspen (Populus tremuloides) are also common [18]. Spruce budworm (Choristoneura fumiferana) disturbance and the major hemlock looper (Lambdina fiscellaria) outbreak in 2012 have caused cumulative defoliation and increased mortality in the stands [19,20].

2.2. Field Plot Data

We used field data from 61 permanent sample plots (radius = 11.28 m, area = 400 m2) inventoried between 2010 and 2012. The plots were selected based on three criteria: (1) balsam fir was the dominant species, (2) the stands had not been cut since 1996, and (3) the minimum tree height was 7 m. The plot centers were georeferenced using a dual-frequency GPS antenna (Trimble Yuma, accuracy ~30 cm). All merchantable trees, i.e., trees with a diameter at breast height (dbh) above 9 cm, were measured using a diameter tape. Tree height was measured on eight sample trees per plot using a Vertex hypsometer. A height–diameter equation (non-linear random coefficient regression) was fitted for each plot and used to predict the height of the other merchantable trees following [21]:
H = 1.3 + [DBH² × (β1i + β2i × DBH²)] + ε
with β1,2i = β0 + β1,2 × zi + δi, where H is the total height, DBH the diameter at breast height, β1i and β2i the model parameters of the i-th plot, zi explanatory variables, and ε and δi the random error terms. The fit was strong (R² = 94.1%, root mean square error RMSE = 1.1 m). The merchantable volume of each live merchantable tree was then computed with volume equations based on its height and diameter, as described by Fortin et al. [22]. The total plot merchantable volume was computed as the sum of the individual tree merchantable volumes.
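As an illustration, the following R sketch (R was used for the statistical analyses) fits a plot-level height–diameter curve by ordinary least squares, assuming the model form reconstructed above and a hypothetical data frame trees with columns plot_id, dbh (cm) and h (m, available for the sample trees only). The authors fitted a non-linear random coefficient (mixed-effects) model; this independent per-plot fit is only a simplified approximation.

```r
# Simplified per-plot height-diameter calibration (sketch only).
# Assumes a hypothetical data frame 'trees' with columns plot_id, dbh, h.
fit_plot_hd <- function(d) {
  # H - 1.3 = b1 * DBH^2 + b2 * DBH^4 (linear in b1 and b2)
  lm(I(h - 1.3) ~ 0 + I(dbh^2) + I(dbh^4), data = d[!is.na(d$h), ])
}

predict_heights <- function(trees) {
  do.call(rbind, lapply(split(trees, trees$plot_id), function(d) {
    b <- coef(fit_plot_hd(d))
    d$h_pred <- 1.3 + b[1] * d$dbh^2 + b[2] * d$dbh^4
    d
  }))
}
```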
A total of 4000 live trees and 471 dead trees were recorded in the 61 plot tally sheets. The mortality ratio was computed as the proportion of dead trees in a plot. It averaged 10.5% and ranged between 0% and 32.2%. The average value was used as a threshold to classify the field plots into two mortality classes: low mortality (mortality ratio ≤ 10.5%) and high mortality (mortality ratio > 10.5%). Forty-two plots had a mortality ratio at or below 10.5% (low mortality) and nineteen were above 10.5% (high mortality).
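The plot classification itself is a one-line rule; a minimal sketch, assuming a hypothetical plot-level data frame plots with columns n_live and n_dead:

```r
# Mortality ratio per plot and low/high classes (threshold = study-wide mean).
plots$mortality_ratio <- 100 * plots$n_dead / (plots$n_live + plots$n_dead)
threshold <- mean(plots$mortality_ratio)           # 10.5% in this study
plots$mortality_class <- ifelse(plots$mortality_ratio > threshold, "high", "low")
table(plots$mortality_class)                       # 42 low, 19 high here
```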
The classification of trees by status (live/dead) required a sample of live and dead trees, with their geospatial locations, to train and assess the classifier. However, no stem map was produced during the field inventories. A field trip was therefore conducted in July 2016 (four to six years after the field inventory) to retrieve the locations of a random selection of trees that had been recorded as dead during the inventories and were still standing. The geospatial locations of the sample live trees were obtained from the lidar canopy height models of plots with zero mortality, and the tree locations were validated against the multispectral imagery. Table 1 provides a summary of the field data used for the study site.

2.3. Lidar Data

The airborne lidar data were acquired in August 2011 with an ALTM 3100 discrete-return, fixed-gain sensor. The laser wavelength was 1064 nm, the mean flying altitude 1000 m above ground level, and the horizontal accuracy ~18 cm. The pulse repetition rate was 100 kHz, the scanning angle range 0–22°, and the strip overlap 50%. The sensor could record up to four returns per pulse, and the information for each return was recorded in LAS files [23]. The above-ground return density was 6.8 m−2. The ground-classified returns were interpolated to construct a digital terrain model (DTM) on a 25 cm × 25 cm grid. The digital surface model (DSM) was generated by interpolating the returns with the highest elevation values in each grid cell. The canopy height model (CHM) was computed by subtracting the DTM from the DSM, and the elevation of each return was converted (normalized) to above-ground height using the underlying DTM. Lidar data processing was done with the LAStools software [24].
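The gridding step can be illustrated with a minimal base-R sketch (the actual processing was done with LAStools); it assumes hypothetical data frames pts (all returns) and gnd (ground-classified returns) with coordinates x, y and elevation z, and it omits the interpolation used to fill empty cells.

```r
# Sketch of the 25 cm gridding logic behind the DSM, DTM and CHM.
res <- 0.25                                 # 25 cm x 25 cm grid

rasterize_max <- function(d) {
  cell <- paste(floor(d$x / res), floor(d$y / res))
  tapply(d$z, cell, max)                    # highest elevation per cell
}

dsm <- rasterize_max(pts)                   # surface model from all returns
dtm <- rasterize_max(gnd)                   # terrain model from ground returns
cells <- intersect(names(dsm), names(dtm))
chm <- dsm[cells] - dtm[cells]              # canopy height model per cell
```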
The intensity of the first returns above 2 m was normalized to reduce the electronic noise and improve the consistency of the measurements. This was done following a method described by Gatziolis [25], which hypothesizes that neighboring lidar returns (e.g., distance <1 m) from two different swaths falling on the same target should have similar intensities and that any major discrepancy between their intensity values is mainly indicative of differences in the range. The intensity was corrected using the following equation:
Inorm = Iobs × (R / Rref)^f
where Inorm is the corrected intensity, Iobs the observed intensity, R the distance (range) between a return and the corresponding position of the lidar instrument, Rref a reference distance specified by the user, and f an exponent. f can have a value between 2.0 and 4.0 depending on several factors such as the return type or the nature of the scatterers hit by lidar pulses. Its optimal value is usually between 2.0 and 2.5 in forest studies. Considering the lidar flying altitude and the coniferous forest of the study site, Rref was set to 1000 and f to 2. The canopy intensity model was built by interpolating the returns with the highest corrected intensity values within a 25 cm × 25 cm grid. Table 2 provides a summary of the lidar data for the study site.
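As a worked example, the normalization is a direct application of the equation above; the sketch below assumes the observed intensity and the sensor-to-return range are available for each first return.

```r
# Range-based intensity normalization, following Gatziolis [25].
normalize_intensity <- function(i_obs, range_m, r_ref = 1000, f = 2) {
  i_obs * (range_m / r_ref)^f
}

normalize_intensity(40, 950)   # a return at 950 m range: 40 * 0.9025 = 36.1
```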

2.4. Multispectral Imagery Data

Multispectral images of the Montmorency Forest were acquired in June 2012 with a Microsoft-Vexcel UltraCam XP aerial camera flown at 2450 m above ground, recording panchromatic, blue, green, red and near-infrared (nir) channels. The lateral and forward overlaps were 30% and 80%, respectively. The panchromatic images had a spatial resolution of approximately 10 cm and an 8-bit pixel depth. Pansharpening was applied to the 40 cm resolution multispectral channels to bring them to 10 cm. Image aero-triangulation was carried out by the vendor. We performed image orthorectification using the resulting absolute orientations and the lidar-derived DSM, with the Summit Evolution® software. No radiometric standardization was performed on the multispectral imagery.

2.5. Data Analysis

The lidar point cloud and the multispectral images were clipped to each plot’s boundary to extract the corresponding returns and image pixels respectively. The data were then analyzed following four main steps: (1) lidar-based tree crown delineation, (2) individual tree crown metric calculation, (3) random forest classification of tree status, and (4) merchantable volume accuracy assessment. Figure 2 presents a summary of the workflow. The classification and statistical analyses were done with R software [26].

2.5.1. Lidar Tree Crown Delineation

The delineation of individual tree crowns (ITC) was performed on the CHM using an in-house delineation algorithm. The algorithm applies adaptive Gaussian filtering to the raster CHM before detecting local maxima. The maxima are then used as seeds for a region growing process, which extends the crowns until stopping conditions are met. Each delineated CHM region is defined as an individual tree crown. A more detailed description of the algorithm can be found in St-Onge et al. [27].
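As a rough illustration of the seed-detection step only, the sketch below finds local maxima on an already smoothed CHM matrix using a fixed 3 × 3 window; this is simpler than the adaptive Gaussian filtering and region growing used by the authors' algorithm, which is not reproduced here.

```r
# Sketch: treetop (seed) detection on a smoothed CHM stored as a matrix.
find_seeds <- function(chm, min_height = 2) {
  nr <- nrow(chm); nc <- ncol(chm)
  seeds <- matrix(FALSE, nr, nc)
  for (i in 2:(nr - 1)) {
    for (j in 2:(nc - 1)) {
      nb <- chm[(i - 1):(i + 1), (j - 1):(j + 1)]   # 3 x 3 neighborhood
      if (chm[i, j] >= min_height && chm[i, j] == max(nb)) seeds[i, j] <- TRUE
    }
  }
  which(seeds, arr.ind = TRUE)   # row/col indices of candidate treetops
}
```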

2.5.2. Tree Crown Metric Calculation

A polygon shapefile of the ITCs was created using ArcGIS (®ESRI). The shapefile was used as a mask to clip lidar returns and image pixels falling inside the polygons. Lidar intensity metrics were calculated for each ITC: minimum (min_I), maximum (max_I), mean (mean_I), mode (mode_I) and standard deviation (sd_I). Likewise, pixel metrics in each multispectral image band were calculated for each ITC: minimum (min_blue, min_green, min_red, min_nir), maximum (max_blue, max_green, max_red, max_nir), mean (mean_blue, mean_green, mean_red, mean_nir), mode (mode_blue, mode_green, mode_red, mode_nir), and standard deviation (sd_blue, sd_green, sd_red, sd_nir). Several normalized differences and ratios were computed: normalized difference vegetation index (ndvi), green ndvi (gndvi), simple ratio vegetation index (sr), and green/red vegetation index (grvi):
ndvi = (nir − red) / (nir + red)
gndvi = (nir − green) / (nir + green)
sr = nir / red
grvi = (green − red) / (green + red)
The ITCs were classified as live or dead using lidar intensity metrics first. Another classification was performed using the combined lidar intensity and image pixel metrics.
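The per-crown metrics and indices used in these classifications reduce to simple summaries over the clipped returns and pixels. A minimal sketch, assuming hypothetical data frames lid (columns itc_id and intensity, one row per first return) and img (columns itc_id, nir, red, green and blue, one row per pixel); only a subset of the band metrics is shown.

```r
# Per-ITC lidar intensity metrics and image-based vegetation indices (sketch).
stat5 <- function(v) c(min = min(v), max = max(v), mean = mean(v),
                       mode = as.numeric(names(which.max(table(v)))),
                       sd = sd(v))

intensity_metrics <- t(sapply(split(lid$intensity, lid$itc_id), stat5))

band_metrics <- function(d) {
  with(d, c(mean_nir = mean(nir), mean_green = mean(green),
            ndvi  = mean((nir - red) / (nir + red)),
            gndvi = mean((nir - green) / (nir + green)),
            sr    = mean(nir / red),
            grvi  = mean((green - red) / (green + red))))
}
image_metrics <- t(sapply(split(img, img$itc_id), band_metrics))
```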

2.5.3. Random Forest Classification

Random forest (rf) is a non-parametric classification method developed by Breiman [28]. In this method, binary decision trees are built from bootstrap samples of a training dataset, without pruning. A random subset of the explanatory variables is selected to split the nodes of each decision tree, and the final class of an observation is determined by a majority vote of the decision trees [29,30,31]. Users can set the number of trees to grow and the number of random explanatory variables tried at each split. Growing many trees on bootstrap samples prevents overfitting and allows for robust error estimation [28,32]. The contribution of each variable to the classification can also be assessed by measuring the mean decrease in the Gini index, i.e., the total decrease in node impurities from splitting on a given variable, averaged over all decision trees [33].
We used the R-package “randomForest” [33] to predict the status (live/dead) of the ITCs based on the crown reflectance characteristics. The number of trees was set to 1000 and the number of explanatory variables tried at each split to 8. The training dataset consisted of 126 randomly selected ITCs that were previously identified as “dead” or “live” from the field data (see Section 2.2). The producer’s and user’s accuracies, the omission and commission errors, the kappa index, and the mortality ratio were computed for each classification method. Cross-validation was performed to assess the classification accuracy.
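A minimal sketch of this classification step with the randomForest package, assuming a hypothetical data frame train of the 126 labelled crowns (factor column status plus the metric columns) and a data frame itc_metrics holding the metrics of the remaining crowns:

```r
# Random forest classification of tree status (live/dead), as a sketch.
library(randomForest)

set.seed(42)
rf <- randomForest(status ~ ., data = train,
                   ntree = 1000,    # number of trees grown
                   mtry = 8,        # variables tried at each split
                   importance = TRUE)

print(rf)                           # out-of-bag confusion matrix
varImpPlot(rf)                      # mean decrease in Gini index (cf. Figure 5)
pred <- predict(rf, newdata = itc_metrics)   # status of the remaining ITCs
```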
We then calculated the standard normal deviate Z [34] between the kappa indices at the 95% probability level to verify whether the classification methods were significantly different. A threshold value of 1.96 was used, above which the difference between the two classification methods was considered significant. The lidar point clouds were then filtered to remove returns associated with dead tree crowns.
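The comparison of the two kappa indices reduces to a simple statistic; a sketch assuming each kappa and its estimated variance are available from the confusion matrices [34]:

```r
# Standard normal deviate between two kappa coefficients (sketch).
kappa_z <- function(k1, var_k1, k2, var_k2) {
  abs(k1 - k2) / sqrt(var_k1 + var_k2)
}
# The two methods differ significantly at the 95% level if Z > 1.96.
```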

2.5.4. Merchantable Volumes Accuracy Assessment

We tested whether the point cloud filtering improved the prediction accuracy of the merchantable volume. MV was modeled using the three-variable non-linear model below, adapted from Bouvier et al. [1]. We chose this model because it is based on robust variables and can be applied in various forest types (coniferous, deciduous and mountainous). The lidar explanatory variables were derived from the point clouds before or after the filtering and were then used in the same predictive model:
MV = exp(β0) × HEIGHT^β1 × GAP^β2 × HETEROGENEITY^β3 + ε
where HEIGHT is the mean height of the first returns; GAP, the proportion of first returns below a threshold value; HETEROGENEITY, a canopy surface roughness index; βi, the model parameters; and ε, the residual error term. The threshold of the GAP variable was set to 2 m to filter out returns associated with the ground or understory [3]. HETEROGENEITY was computed as the ratio of the DSM surface area to that of the underlying DTM, following [35].
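A hedged sketch of the model fit with nls, assuming a hypothetical plot-level data frame dat with columns mv (m3 ha−1), height, gap and heterogeneity, and assuming gap > 0 for all plots:

```r
# Fitting the three-variable merchantable volume model (sketch).
mv_model <- nls(mv ~ exp(b0) * height^b1 * gap^b2 * heterogeneity^b3,
                data = dat,
                start = list(b0 = 2, b1 = 1, b2 = 0, b3 = 0))

summary(mv_model)
rmse <- sqrt(mean(residuals(mv_model)^2))   # RMSE of the regression (m3/ha)
```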
We compared the MV models before filtering (initial MV model) and after filtering (filtered MV model) using the pseudo-R2, the RMSE and the corrected Akaike Information Criterion (AICc). AICc assesses the quality of a model based on its goodness of fit and the number of parameters.
The best MV model was validated using the leave-one-out cross-validation procedure. A new dataset was created by removing one field plot. The MV model was fitted to the new dataset (training data) and used to predict the MV of the removed field plot (test data). The procedure was repeated iteratively until predicted values were obtained for all field plots.
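The corresponding leave-one-out loop, reusing the same nls call on the hypothetical dat data frame:

```r
# Leave-one-out cross-validation of the MV model (sketch).
loo_pred <- sapply(seq_len(nrow(dat)), function(i) {
  fit <- nls(mv ~ exp(b0) * height^b1 * gap^b2 * heterogeneity^b3,
             data = dat[-i, ],
             start = list(b0 = 2, b1 = 1, b2 = 0, b3 = 0))
  predict(fit, newdata = dat[i, , drop = FALSE])
})
rmse_cv <- sqrt(mean((dat$mv - loo_pred)^2))   # cross-validated RMSE (m3/ha)
```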

3. Results

3.1. Lidar Intensity Distribution

Figure 3 shows the relative frequency distributions of the first-return normalized intensities of eight typical field plots with different mortality ratios. Low mortality plots tended to have a leptokurtic, unimodal distribution with a mode between 34 and 58. Conversely, high mortality plots tended to have a platykurtic, bimodal distribution with one mode between 5 and 18 and another between 42 and 63, and a higher number of returns in these intensity ranges than in the low mortality plots.

3.2. Delineation and Classification Results

The delineation algorithm detected 4116 ITCs, which corresponds to 92% of the trees recorded in the field. Small and suppressed trees were likely missed either by the sensor or by the algorithm. Figure 4 shows an example of delineated and classified tree crowns for one field plot.
Table 3 presents the confusion matrices of the tree status classification of the training dataset. We found a good overall accuracy (87%) and kappa value (0.74) when identifying live and dead trees with lidar intensity metrics exclusively. The best metrics were mean_I and mode_I (mean decrease in Gini index = 7.88 and 7.48, respectively; see Figure 5). Combining lidar with multispectral imagery slightly improved the classification: the overall accuracy increased to 89% and the kappa value to 0.78. The most important variables were mode_I, mean_I, min_I, max_I, mean_nir, mean_green, min_blue and mode_red, with mean decreases in the Gini index of 5.24, 4.68, 2.23, 1.52, 0.47, 0.45, 0.34 and 0.32, respectively. The difference between the two classification methods was not significant (Z = 0.39).
We then predicted the status of the remaining ITCs (Table 4). Both classifications underestimated the number of dead trees compared to the field observations (313 for the lidar-based classification and 353 for the combined lidar–multispectral imagery-based classification, versus 471 in the field data). Consequently, the predicted average mortality ratio was lower (7.0% and 7.9%, respectively, versus 10.5% for the field data). Both classifications nevertheless reproduced the mortality ratios of the low and high mortality classes well: 1.8% and 16.1% for the lidar-based classification, and 3.0% and 20.5% for the combined lidar–multispectral imagery-based classification.

3.3. Merchantable Volume Accuracy Assessment

Table 5 compares the merchantable volume predictive models built before and after the point cloud filtering. We found a very good agreement between the field and the predicted MVs even with the initial MV model (pseudo-R2 = 0.88, AICc = 599.3, p < 0.01): the RMSE was 31.3 m3 ha−1 for the regression model and 33.1 m3 ha−1 after cross-validation. The predictions improved slightly with the filtered MV models. The best results were obtained with the combined lidar–multispectral imagery MV model (pseudo-R2 = 0.90, AICc = 590.1, p < 0.01), with an RMSE of 27.7 m3 ha−1 for the regression model and 29.6 m3 ha−1 after cross-validation.
Table 6 shows the RMSEs of the MV models by mortality class. Before the point cloud filtering, the RMSE was higher in the high mortality plots than in the low mortality plots (39 m3 ha−1 versus 22.7 m3 ha−1), a difference in prediction error of 16.3 m3 ha−1 between the two classes. The residual variability decreased substantially after the point cloud filtering. The best results were obtained with the combined lidar–multispectral imagery MV model (32.3 m3 ha−1 for the high mortality plots and 20.9 m3 ha−1 for the low mortality plots), reducing the difference in prediction error between the classes to 11.4 m3 ha−1. Overall, the MV prediction errors decreased by 17.2% in the high mortality plots and by 7.9% in the low mortality plots compared to the initial MV model.

4. Discussion

In this study, lidar was used to delineate tree crowns and discriminate tree mortality in balsam fir plots. Monospectral lidar sensors often use a 1064 nm, near-infrared wavelength. Near-infrared wavelengths are well-suited to identify variability in vegetation reflectance as they penetrate through the inner layers (mesophyll) of leaf cells. Strong backscattering is measured for healthy trees due to leaf scattering [36,37]. Lidar intensity can, therefore, be used as a classifier of the status (live or dead) of trees. Kim et al. [10] and Wing et al. [12] found similar results on mixed conifer forests.
Figure 3 shows that plots with different mortality ratios have different lidar intensity distributions. The importance of lidar intensity for the identification of dead trees has mostly been studied in forests where dead trees are abundant, such as high mortality sites [9,10,11,12,38]. This study confirms that lidar intensity can also discriminate interspersed dead trees well in low mortality sites (mortality ratio ≤ 10.5%) where, for example, only natural mortality is observed. It also shows that lidar can efficiently predict the level of mortality within plots (Table 4) and that a multimodal lidar intensity distribution can be an indication of mortality in balsam fir plots.
A simpler tree status classification could rely on the use of lidar intensity distribution modes as threshold values. However, the range of lidar intensity varies among study sites due to several factors such as the sensor’s settings (e.g., pulse energy and divergence), the backscattering properties of the target, the atmospheric conditions, the distance between the sensor and the target, the foliage surface humidity, etc. The threshold values would therefore differ. Kim et al. [10], for example, suggested an arbitrary threshold intensity of 125 (intensity quantized to 8-bit), Wing et al. [12] a value of 60, and we found a maximal intensity value of 20 for dead trees.
Another simple filtering approach would have been to systematically remove from the point cloud all returns with an intensity below a given threshold (single classifier). However, this would have significantly altered the structure of the point cloud, as returns from other weakly reflective targets (e.g., lower tree branches) would also have been removed. We therefore chose to combine several classifiers and filter only the returns associated with dead tree crowns.
Multispectral bands provided additional information to the classification, as live and dead trees also have different reflective properties in the visible part of the spectrum [14]. Live foliage reflects strongly in the green (chlorophyll absorbs mainly in the red and blue) and in the near-infrared (owing to leaf internal structure), whereas these reflectance features decline in dead foliage. Accordingly, the mean pixel values in the near-infrared (mean_nir) and green (mean_green) image bands were the most efficient image classifiers. Most image pixel metrics, however, made only a marginal contribution to the overall classification (see Figure 5). The difference between the lidar and the combined lidar–multispectral imagery classifications was not significant (Z = 0.39). This may be explained by the absence of radiometric standardization of the multispectral images or by the relatively small size of the sample dataset; with a larger dataset, the contribution of the multispectral imagery would likely have been more substantial. Our findings nevertheless confirm that lidar is an effective classification tool for forest attributes and can be combined with multispectral imagery [15,16,39,40].
The classification results were used to filter lidar returns associated with dead ITCs. The point cloud filtering decreased the merchantable volume prediction errors in both the low and high mortality classes (Table 6). This indicates the need to filter point clouds before modeling live tree attributes when mortality occurs.
A constraint of the study was the absence of stem maps at the time of the field inventory. Because the study site comprises permanent sample plots in which merchantable trees are labeled and remeasured every five years, we were able to conduct a field trip (four to six years after the inventory) to georeference some dead trees that were still standing. We could not, however, verify our prediction accuracy at the tree level. The predicted mortality ratios at the plot level were nevertheless similar to the field data, which suggests that our method can predict the mortality ratio at the plot level. The method is applicable in similar study areas with permanent plots where tree locations were not recorded in the field. In an operational context, our study presents a method to predict merchantable volume at sites where both low and high mortality plots are present.

5. Conclusions

Balsam fir plots show different lidar intensity distributions when mortality occurs. Our study tested lidar as a tool to delineate and classify trees and to accurately predict the merchantable volume. Our findings confirm that lidar alone is an effective tool for classifying live and dead trees. Combining lidar with multispectral imagery slightly improved the classification (overall accuracy = 89%, kappa value = 0.78). This study demonstrates that lidar combined with multispectral imagery can efficiently discriminate dead trees even in low mortality sites (mortality ratio ≤ 10.5%).
It was also found that the mean prediction errors of the merchantable volume estimates increase when mortality occurs. Filtering the returns associated with dead tree crowns prior to the modeling phase improved the model accuracy (pseudo-R2 from 0.88 to 0.90) and decreased the merchantable volume prediction errors (by 17.2% in the high mortality plots and by 7.9% in the low mortality plots).

Acknowledgments

The authors would like to thank the Fonds de recherche du Québec-Nature et technologies (FRQNT) and the Ministère des Forêts, de la Faune et des Parcs (MFFP) of the province of Quebec for the financial support of this research work. We would like to thank the Montmorency Forest for the access to the data used in this study. We are also grateful to the Applied Geomatics Research Group (AGRG, Nova Scotia Community College, Middleton, NS, Canada), the Canadian Consortium for lidar Environmental Applications Research (C-CLEAR, Centre of Geographic Sciences, Lawrencetown, NS, Canada) and Dr. Christopher Hopkinson (University of Lethbridge, Lethbridge, AB, Canada), for providing the lidar data at a reduced cost. Special thanks to Dr. Martin Riopel (MFFP, Quebec City, QC, Canada) for his assistance in the field data processing. We are also grateful to the anonymous reviewers for their valuable comments on the manuscript.

Author Contributions

Sarah Yoga, Jean Bégin, and Benoît St-Onge proposed the theoretical framework of the study. Benoît St-Onge assisted in the delineation of individual tree crowns. Demetrios Gatziolis carried out the normalization of the lidar intensity. Sarah Yoga conducted the processing of the data and the statistical analyses. Sarah Yoga wrote the manuscript. All authors contributed to the interpretation of results and the editing of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bouvier, M.; Durrieu, S.; Fournier, R.A.; Renaud, J.-P. Generalizing predictive models of forest inventory attributes using an area-based approach with airborne lidar data. Remote Sens. Environ. 2015, 156, 322–334. [Google Scholar] [CrossRef]
  2. Maltamo, M.; Eerikäinen, K.; Packalén, P.; Hyyppä, J. Estimation of stem volume using laser scanning-based canopy height metrics. Forestry 2006, 79, 217–229. [Google Scholar] [CrossRef]
  3. Næsset, E. Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sens. Environ. 2002, 80, 88–99. [Google Scholar] [CrossRef]
  4. Sheridan, R.D.; Popescu, S.C.; Gatziolis, D.; Morgan, C.L.; Ku, N.-W. Modeling forest aboveground biomass and volume using airborne lidar metrics and forest inventory and analysis data in the Pacific Northwest. Remote Sens. 2015, 7, 229–255. [Google Scholar] [CrossRef]
  5. Treitz, P.; Lim, K.; Woods, M.; Pitt, D.; Nesbitt, D.; Etheridge, D. Lidar sampling density for forest resource inventories in Ontario, Canada. Remote Sens. 2012, 4, 830–848. [Google Scholar] [CrossRef]
  6. Hu, X.; Tao, C.V.; Hu, Y. Automatic Road Extraction from Dense Urban Area by Integrated Processing of High Resolution Imagery and Lidar Data. In Proceedings of the International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Istanbul, Turkey, 12–23 July 2004; Volume 35, p. B3. [Google Scholar]
  7. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  8. Song, J.-H.; Han, S.-H.; Yu, K.; Kim, Y.-I. Assessing the possibility of land-cover classification using lidar intensity data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 259–262. [Google Scholar]
  9. Bright, B.C.; Hudak, A.T.; McGaughey, R.; Andersen, H.E.; Negrón, J. Predicting live and dead tree basal area of bark beetle affected forests from discrete-return lidar. Can. J. Remote Sens. 2013, 39, S99–S111. [Google Scholar] [CrossRef]
  10. Kim, Y.; Yang, Z.; Cohen, W.B.; Pflugmacher, D.; Lauver, C.L.; Vankat, J.L. Distinguishing between live and dead standing tree biomass on the North Rim of Grand Canyon National Park, USA using small-footprint lidar data. Remote Sens. Environ. 2009, 113, 2499–2510. [Google Scholar] [CrossRef]
  11. Martinuzzi, S.; Vierling, L.A.; Gould, W.A.; Falkowski, M.J.; Evans, J.S.; Hudak, A.T.; Vierling, K.T. Mapping snags and understory shrubs for a lidar-based assessment of wildlife habitat suitability. Remote Sens. Environ. 2009, 113, 2533–2546. [Google Scholar] [CrossRef]
  12. Wing, B.M.; Ritchie, M.W.; Boston, K.; Cohen, W.B.; Olsen, M.J. Individual snag detection using neighborhood attribute filtered airborne lidar data. Remote Sens. Environ. 2015, 163, 165–179. [Google Scholar] [CrossRef]
  13. Casas, A.; Garcia, M.; Siegel, R.B.; Koltunov, A.; Ramirez, C.; Ustin, S. Burned forest characterization at single-tree level with airborne laser scanning for assessing wildlife habitat. Remote Sens. Environ. 2016, 175, 231–241. [Google Scholar] [CrossRef]
  14. Näsi, R.; Honkavaara, E.; Lyytikäinen-Saarenmaa, P.; Blomqvist, M.; Litkey, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Tanhuanpää, Y.; Holopainen, M. Using UAV-based photogrammetry and hyperspectral imaging for mapping bark beetle damage at tree-level. Remote Sens. 2015, 7, 15467–15493. [Google Scholar] [CrossRef]
  15. Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U. Active Learning Approach to Detecting Standing Dead Trees from ALS Point Clouds Combined with Aerial Infrared Imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  16. Vogeler, J.C.; Yang, Z.; Cohen, W.B. Mapping post-fire habitat characteristics through the fusion of remote sensing tools. Remote Sens. Environ. 2016, 173, 294–303. [Google Scholar] [CrossRef]
  17. Canada Environment and Natural Resources. 1971–2000 Climate Normals & Averages. Available online: http://climate.weather.gc.ca/climate_normals/index_e.html (accessed on 1 May 2017).
  18. Bélanger, L. La forêt mosaïque comme stratégie de conservation de la biodiversité de la sapinière boréale de l’Est: L’expérience de la forêt Montmorency—Mosaic cutting as a biodiversity conservation strategy in eastern boreal balsam fir forests: The case study of the Montmorency forest. Le Naturaliste canadien 2001, 125, 18–25. [Google Scholar]
  19. Régnière, J.; St-Amant, R.; Duval, P. Predicting insect distributions under climate change from physiological responses: Spruce budworm as an example. Biol. Invasions 2012, 14, 1571–1586. [Google Scholar] [CrossRef]
  20. Hébert, C.; Jeffrey, O.; Boucher, J.; Dubuc, Y.; Berthiaume, R.; MacLean, D.; Bauce, E. Forest Susceptibility and Vulnerability to Hemlock Looper as a Framework for Developing an Optimal Detection and Monitoring Strategy. In Proceedings of the SERG International Workshop Proceedings, Pittsburgh, PA, USA, 4–6 February 2014; pp. 168–186. [Google Scholar]
  21. Bégin, J.; Raulier, F. Comparaison de différentes approches, modèles et tailles d’échantillons pour l’établissement de relations hauteur-diamètre locales—Local height-diameter relationships: Comparison of approaches, models and sample sizes. Can. J. For. Res. 1995, 25, 1303–1312. [Google Scholar] [CrossRef]
  22. Fortin, M.; DeBlois, J.; Bernier, S.; Blais, G. Mise au point d’un tarif de cubage général pour les forêts québécoises: Une approche pour mieux évaluer l’incertitude associée aux prévisions—Establishing a general cubic volume table for Québec forests: An approach to better assess prediction uncertainties. For. Chron. 2007, 83, 754–765. [Google Scholar]
  23. ASPRS. ASPRS LIDAR Data Exchange Format Standard Version 1.0 May 9, 2003. Available online: http://www.asprs.org/wp-content/uploads/2010/12/asprs_las_format_v10.pdf (accessed on 1 May 2017).
  24. Isenburg, M. “LAStools—Efficient LiDAR Processing Software” (Version 150526, Unlicensed). Available online: http://rapidlasso.com/LAStools (accessed on 1 May 2017).
  25. Gatziolis, D. Dynamic range-based intensity normalization for airborne, discrete return lidar data of forest canopies. Photogramm. Eng. Remote Sens. 2011, 77, 251–259. [Google Scholar] [CrossRef]
  26. R Development Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2017. [Google Scholar]
  27. St-Onge, B.; Audet, F.-A.; Bégin, J. Characterizing the Height Structure and Composition of a Boreal Forest Using an Individual Tree Crown Approach Applied to Photogrammetric Point Clouds. Forests 2015, 6, 3899–3922. [Google Scholar] [CrossRef]
  28. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  29. Chehata, N.; Guo, L.; Mallet, C. Airborne lidar feature selection for urban classification using random forests. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2009, 38, W8. [Google Scholar]
  30. Genuer, R.; Poggi, J.-M.; Tuleau-Malot, C. Variable selection using random forests. Pattern Recognit. Lett. 2010, 31, 2225–2236. [Google Scholar] [CrossRef]
  31. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  32. Matsuki, K.; Kuperman, V.; Van Dyke, J.A. The Random Forests statistical technique: An examination of its value for the study of reading. Sci. Stud. Read. 2016, 20, 20–33. [Google Scholar] [CrossRef] [PubMed]
  33. Liaw, A.; Wiener, M. Classification and Regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  34. Rosenfield, G.H.; Fitzpatrick-Lins, K. A coefficient of agreement as a measure of thematic classification accuracy. Photogramm. Eng. Remote Sens. 1986, 52, 223–227. [Google Scholar]
  35. Kane, V.R.; McGaughey, R.J.; Bakker, J.D.; Gersonde, R.F.; Lutz, J.A.; Franklin, J.F. Comparisons between field-and lidar-based measures of stand structural complexity. Can. J. For. Res. 2010, 40, 761–773. [Google Scholar] [CrossRef]
  36. Gates, D.M.; Keegan, H.J.; Schleter, J.C.; Weidner, V.R. Spectral properties of plants. Appl. Opt. 1965, 4, 11–20. [Google Scholar] [CrossRef]
  37. Lorenzen, B.; Jensen, A. Reflectance of blue, green, red and near-infrared radiation from wetland vegetation used in a model discriminating live and dead above ground biomass. New Phytol. 1988, 108, 345–355. [Google Scholar] [CrossRef]
  38. Wing, M.G.; Eklund, A.; Sessions, J. Applying lidar technology for tree measurements in burned landscapes. Int. J. Wildland Fire 2010, 19, 104–114. [Google Scholar] [CrossRef]
  39. Bright, B.C.; Hicke, J.A.; Hudak, A.T. Estimating aboveground carbon stocks of a forest affected by mountain pine beetle in Idaho using lidar and multispectral imagery. Remote Sens. Environ. 2012, 124, 270–281. [Google Scholar] [CrossRef]
  40. Bolton, D.K.; Coops, N.C.; Wulder, M.A. Measuring forest structure along productivity gradients in the Canadian boreal with small-footprint lidar. Environ. Monit. Assess. 2013, 185, 6617–6634. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Location of the Montmorency Forest study site and the field plots over a lidar digital terrain model.
Figure 2. Workflow diagram describing the steps from the lidar and multispectral imagery data processing to the classification of individual tree crowns and the prediction of the merchantable volume.
Figure 3. Normalized intensity distributions of lidar first returns (above 2 m) for eight field plots having different mortality ratios. Low mortality plots (mortality ratio <10.5%) are represented in solid lines and high mortality plots in dashed lines.
Figure 4. Tree crowns delineated and classified for the field plot #124: (A) Individual tree crowns delineated from the canopy height model. (B) Canopy intensity model of first returns (above 2 m). (C) Individual tree crowns classified as dead. Low height/intensity values are depicted in dark tones and high height/intensity values in light tones.
Figure 5. Variable importance plot for the lidar-based classification (left) and the combined lidar + multispectral imagery-based classification (right) of live and dead tree crowns.
Table 1. Summary of the 61 field plot measurements for the Montmorency Forest study site.
Field attribute | Live trees (Mean / Std* / Range) | Dead trees (Mean / Std* / Range) | All (Mean / Std* / Range)
Average diameter of dominant trees (cm) | 14.4 / 4.8 / 9.1–46.8 | 12.3 / 2.9 / 9.1–26.3 | 13.4 / 2.8 / 9.1–46.8
Dominant height (m) | 16.3 / 4.3 / 9.2–26.5 | 12.3 / 2.3 / 9.2–17.4 | 14.2 / 3.0 / 9.2–26.5
Stand density (ha−1) | 1534 / 501 / 561–3078 | 175 / 50 / 0–305 | 1832 / 185 / 561–3078
Merchantable volume (m3 ha−1) | 159.2 / 89 / 21.0–411.9 | --- | 159.2 / 89 / 21.0–411.9
Mortality ratio (%) | --- | --- | 10.5 / 11.7 / 0.0–32.2
Std*: standard deviation.
Table 2. Summary of the lidar data for the Montmorency Forest study site.
Lidar returns | Mean | Std* | Range
First return height (m) | 16.3 | 4.3 | 7.0–25.7
Ground return density (m−2) | 5.3 | 1.4 | 3.0–9.1
Above-ground return density (m−2) | 6.8 | 2.0 | 3.2–13.6
First return normalized intensity | 40.0 | 14.0 | 1.0–208.0
Std*: standard deviation.
Table 3. Confusion matrices of two random forest classifications using lidar classifiers only (upper) or combined with multispectral imagery classifiers (lower).
Lidar classification:
Class | Reference Live | Reference Dead | Total | Producer’s Accuracy (%) | User’s Accuracy (%) | Omission Error (%) | Commission Error (%)
Live | 53 | 6 | 59 | 84.1 | 89.8 | 15.9 | 10.2
Dead | 10 | 57 | 67 | 90.5 | 85.1 | 9.5 | 14.9
Total | 63 | 63 | 126 | | | |
Overall accuracy = 87%; kappa = 0.74.

Lidar + multispectral imagery classification:
Class | Reference Live | Reference Dead | Total | Producer’s Accuracy (%) | User’s Accuracy (%) | Omission Error (%) | Commission Error (%)
Live | 57 | 8 | 65 | 90.5 | 87.7 | 9.5 | 12.3
Dead | 6 | 55 | 61 | 87.3 | 90.2 | 12.7 | 9.8
Total | 63 | 63 | 126 | | | |
Overall accuracy = 89%; kappa = 0.78.
Table 4. Random forest predictions of tree status and mortality ratios after a lidar-based classification or after a combined lidar-multispectral imagery-based classification.
Classification | Live trees | Dead trees | Total trees | Mortality ratio, low class (%) | Mortality ratio, high class (%) | Mortality ratio, average (%)
Reference (field) | 4000 | 471 | 4471 | 4.3 | 19.5 | 10.5
Lidar | 3803 | 313 | 4116 | 1.8 | 16.1 | 7.0
Lidar + multispectral imagery | 3763 | 353 | 4116 | 3.0 | 20.5 | 7.9
Table 5. Comparison of merchantable volume (MV) models built before or after a point cloud filtering.
MV Model | AICc | Pseudo-R2 | RMSE (m3 ha−1) | %RMSE | p-Value
Model 1 | 599.3 | 0.88 | 31.3 | 19.6 | <0.01
Model 2 | 594.7 | 0.89 | 28.7 | 18.0 | <0.01
Model 3 | 590.1 | 0.90 | 27.7 | 17.4 | <0.01
Model 1: MV predicted before filtering; Model 2: MV predicted after a filtering based on a lidar classification of dead tree crowns; Model 3: MV predicted after a filtering based on a combined lidar–multispectral imagery classification of dead tree crowns.
Table 6. Root mean square errors (RMSEs) of the merchantable volume (MV) models presented by the class of mortality.
MV Model | Low mortality class RMSE (m3 ha−1) | High mortality class RMSE (m3 ha−1) | Overall RMSE (m3 ha−1)
Model 1 | 22.7 | 39.0 | 31.3
Model 2 | 21.6 | 33.5 | 28.7
Model 3 | 20.9 | 32.3 | 27.7
Model 1: MV predicted before the point cloud filtering; Model 2: MV predicted after a point cloud filtering based on a lidar classification of dead tree crowns; Model 3: MV predicted after a point cloud filtering based on a combined lidar–multispectral imagery classification of dead tree crowns.
