Article

Quantifying Boreal Forest Structure and Composition Using UAV Structure from Motion

Michael Alonzo 1,*, Hans-Erik Andersen 2, Douglas C. Morton 3 and Bruce D. Cook 3
1 Department of Environmental Science, American University, Washington, DC 20016, USA
2 USDA Forest Service, Pacific Northwest Research Station, Seattle, WA 98195, USA
3 Biospheric Sciences Laboratory, NASA’s Goddard Space Flight Center, Greenbelt, MD 20771, USA
* Author to whom correspondence should be addressed.
Forests 2018, 9(3), 119; https://doi.org/10.3390/f9030119
Submission received: 1 February 2018 / Revised: 27 February 2018 / Accepted: 1 March 2018 / Published: 5 March 2018

Abstract
The vast extent and inaccessibility of boreal forest ecosystems are barriers to routine monitoring of forest structure and composition. In this research, we bridge the scale gap between intensive but sparse plot measurements and extensive remote sensing studies by collecting forest inventory variables at the plot scale using an unmanned aerial vehicle (UAV) and a structure from motion (SfM) approach. At 20 Forest Inventory and Analysis (FIA) subplots in interior Alaska, we acquired overlapping imagery and generated dense, 3D, RGB (red, green, blue) point clouds. We used these data to model forest type at the individual crown scale as well as subplot-scale tree density (TD), basal area (BA), and aboveground biomass (AGB). We achieved 85% cross-validation accuracy for five species at the crown level. Classification accuracy was maximized using three variables representing crown height, form, and color. Consistent with previous UAV-based studies, SfM point cloud data generated robust models of TD (r2 = 0.91), BA (r2 = 0.79), and AGB (r2 = 0.92), using a mix of plot- and crown-scale information. Precise estimation of TD required either segment counts or species information to differentiate black spruce from mixed white spruce plots. The accuracy of species-specific estimates of TD, BA, and AGB at the plot scale was somewhat variable, ranging from accurate estimates of black spruce TD (+/−1%) and aspen BA (−2%) to misallocation of aspen AGB (+118%) and white spruce AGB (−50%). These results convey the potential utility of SfM data for forest type discrimination in FIA plots and the remaining challenges to develop classification approaches for species-specific estimates at the plot scale that are more robust to segmentation error.

1. Introduction

Boreal forests store approximately 27% of global aboveground biomass [1] and contain 50% of carbon stored in organic soils [2]. The stability of these carbon pools is uncertain, as boreal forests and other high-latitude ecosystems have warmed twice as fast as temperate and tropical regions [3]. Climate warming may also alter site productivity and species composition, given the potential for northward migration of forest and woody shrub species [4] and disturbance-driven biome boundary shifts with increasing frequency and severity of stand-replacing wildfires [1,5,6].
Precise, extensive, and spatially explicit measurements of boreal forest structure and composition are critical to understanding the local, regional, and global consequences of ecosystem responses to climate warming. The vast extent and inaccessibility of the boreal region have hampered forest inventories using traditional field plots; time series data are only available for a sparse network of inventory sites accessible from major roads or navigable rivers. Remote sensing approaches to characterize forest composition [6], structure [7], and productivity [8] sample larger boreal forest landscapes and have served to constrain regional- and global-scale estimates of carbon pools and fluxes [9]. However, passive optical imagery from satellite platforms provides limited structural information, especially given the lack of a well-distributed network of calibration and validation sites [10,11].
Airborne lidar data are routinely used to quantify three-dimensional forest structure [12,13,14,15] and link field measurements with coarse resolution remote sensing data. Given the cost to acquire airborne lidar data, targeted sampling is often conducted in coordination with ground survey data to inventory large forested areas [16,17,18]. Ene et al. [18] combined field inventory information with lidar strip samples across their study region of interior Alaska to improve the precision of aboveground biomass (AGB) estimates by 11–55% compared to field data alone. Acknowledging the remoteness of their boreal Saskatchewan study area, Zald et al. [19] used 25 m lidar plots as surrogates for field data. Subsequently, the authors modeled forest structure wall to wall based on the relationship between lidar height metrics and Landsat pixel-based composites.
More recently, unmanned aerial vehicles (UAV) have become popular platforms for fine-scale mapping of relatively small spatial extents. As with lidar, this technology shows potential to augment field-based forest inventory sampling with an extended set of “digital plots”, given the relative ease and frequency with which numerous locations can be sampled at fine spatial scale. High-resolution orthophotos [20,21] and structure from motion (SfM) plus multi-view stereo (MVS) [22,23,24,25,26] have been shown to be effective for vegetation mapping across a wide range of forest biomes. SfM + MVS (hereafter “SfM”) allows for the generation of extremely dense 3D point clouds from overlapping images using computationally advanced photogrammetric methods [27,28]. Beyond x, y, and z locations, these point clouds include color (e.g., red, green, blue; RGB) or spectral information that may be useful to improve models of forest structure [23] and to monitor species composition and forest health [29]. Specifically, there is a growing body of work suggesting that UAVs can provide precise estimates of tree density, basal area, and aboveground biomass by forest type [22,23,24,29]. While most attempts to estimate plot-level variables make use of area-based estimation using SfM-derived height metrics, others leverage individual crown information following segmentation [23]. Michez et al. [29] segmented an RGB orthophoto and canopy height model (CHM) constructed from SfM to gather spectral and structural metrics suitable for tree species classification in a riparian forest environment. They found that RGB spectral information such as the red:green ratio and normalized red-blue index were more useful than any structural information, near-infrared spectral information, or segment-scale texture information for species classification.
UAV SfM can also increase the information content at existing plots. At established USDA (United States Department of Agriculture) Forest Service Forest Inventory and Analysis (FIA) plots, measurements are currently taken on a 5-year cycle in the eastern states and a 10-year cycle in the west, including coastal Alaska [30]. UAV data could be collected between measurement years to improve sensitivity to disturbance and recovery processes (similar to Alonzo et al. [10]) or to monitor forest health conditions [29]. Further, it is possible that tree heights, particularly under conditions of high canopy closure, could be measured more accurately with a dense point cloud compared to standard field measurements [31,32]. It is also possible that inventory measurements (e.g., canopy cover) based on ocular estimates could be made with improved precision using nadir images from a UAV platform.
UAV SfM has four advantages when compared to airborne lidar acquisitions. First, UAV SfM is less costly to acquire, particularly when using “hobbyist-grade” equipment [33]. Second, UAV data can easily be acquired at high frequency to evaluate diurnal or seasonal changes such as phenology [34]. Third, SfM typically has much higher point density, often by more than an order of magnitude compared to typical airborne lidar data [33], providing fine-scale detail about forest structure (e.g., branching, leaf distribution, gaps). Finally, the combined structure and color information supports a range of science investigations that would not be possible using only the x, y, z point cloud from airborne lidar, including vegetation phenological change and forest health monitoring [29]. UAV data also have specific limitations when compared to airborne lidar data. Low flight altitudes, reliance on battery power, and regulatory requirements to maintain line-of-sight contact with the aircraft limit the spatial coverage of UAV flights to small areas [35]. Further, compared to lidar, point-cloud construction using SfM is more sensitive to lighting conditions [22], limited to visible crown surfaces, and subject to degradation in vegetation canopies even with a light breeze. Wind can limit UAV flight range and may reduce the accuracy of the 3D point cloud reconstruction due to a decrease in viable foliage and branch tie points [26].
In this study, we evaluated the utility of UAV-derived SfM data for generating key forest inventory parameters in an Alaskan boreal forest. Extending high-precision, plot-scale measurements in this region is particularly useful given the sparse network of forest monitoring plots. The specific objectives of this research were to use structural and spectral data from a UAV point cloud to (1) identify canopy-dominant boreal forest species at the individual crown scale; (2) estimate subplot-level tree density, basal area, and aboveground biomass; and (3) estimate the proportions of tree density (TD), basal area (BA), and AGB allocated to each species at the inventory plot scale.

2. Materials and Methods

2.1. Study Site and Field Data

The Bonanza Creek Long Term Ecological Research (BNZ LTER) site, 30 km southwest of Fairbanks (Figure 1a), is an experimental forest spanning a range of elevation, slope, and aspect positions across 210 km2. Canopy tree species include black spruce (Picea mariana Mill.), white spruce (Picea glauca (Moench) Voss), birch (Betula papyrifera Marshall), and aspen (Populus tremuloides Michx.). Depending on soil moisture status and light availability, understory species may include a shrub layer of alder (Alnus Mill.) and willow species (Salix L.) or ground cover of lichens and mosses.
The BNZ Experimental Forest contains 74 forest inventory plots established following USDA Forest Service Forest Inventory and Analysis (FIA) protocols [30]. In this Phase 2 plot configuration, each plot consists of four subplots approximately 0.017 ha (1/24 acre) in area with a radius of 7.3 m (24.0 ft.). The three outer subplots are located 36.9 m (120.0 ft.) from the center subplot at magnetic azimuths of 360, 120, and 240 degrees. Each subplot contains a microplot of approximately 0.0013 ha (1/300 acre) in size with a radius of 2.1 m (6.8 ft.). At each subplot, all trees with diameter at breast height (DBH) greater than 12.7 cm (5 in) are measured following FIA protocols. Key structural measurements include species identification, DBH, tree height, and condition class. At microplots, structural measurements are taken for trees and saplings with DBH between 2.5 and 12.7 cm. In addition to tree measurements, a vegetation profile is acquired over each 0.017 ha FIA subplot, providing cover at various height layers by vegetation growth form (trees, shrubs, forbs, graminoids) [30].
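For reference, the subplot area follows directly from the stated radius; a quick arithmetic check (not taken from the FIA manual):

```python
import math

# Subplot area implied by the FIA Phase 2 radius (1 acre = 4046.86 m2)
subplot_m2 = math.pi * 7.3 ** 2   # ~167 m2
print(subplot_m2 / 10000.0)       # ~0.017 ha
print(4046.86 / subplot_m2)       # ~24, i.e., roughly 1/24 acre
```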
We selected five FIA plots representing a range of topographic positions, stand densities, and forest types (Table 1). The full suite of field measurements was taken at these plots from 2011 to 2014 by crews from the University of Alaska Fairbanks. Additional validation data for species identification and tree height measurements (n = 52) were collected at the five plots concurrent with UAV data collection. Tree positions were recorded as distances and azimuths from UAV ground control points, which, in turn, were geolocated using differential GPS (Global Positioning System) (see details in Section 2.2).

2.2. UAV Data Collection and Processing

UAV data were collected near peak greenness from 26 to 28 July 2017. For our remote sensing data collection, we flew the DJI Phantom 4 Pro (DJI, Shenzhen, China), a widely available, consumer-grade quadcopter with a 20-megapixel camera. Data acquisition grids and flight parameters were generated in the Android app Pix4DCapture (Pix4D, Lausanne, Switzerland). We established one 75 m × 75 m box over each of the five plots, capturing all subplots in one flight. The first flight, following a single serpentine pattern, was set nominally at 100 m above ground level (AGL) with a nadir-pointing camera and 90% image overlap (Figure 1b). The second flight, following a double, perpendicular serpentine pattern, was set nominally at 70 m AGL with a camera angle 20° off nadir and 90% image overlap (Figure 1c). The first flight plan led to a nominal raw image ground sample distance (GSD) of 2.7 cm and the second to a GSD of 1.9 cm. The primary purpose of the second flight was to photograph each tree at high resolution from all four sides. Flight times for single and double grids were 4.5 min and 7.5 min, respectively.
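For readers planning similar acquisitions, the nominal GSD values follow from the standard relationship between sensor geometry and flight height. The sketch below assumes the published Phantom 4 Pro camera specifications (13.2 mm sensor width, 8.8 mm focal length, 5472-pixel image width); verify against the camera actually flown.

```python
def ground_sample_distance_cm(agl_m, sensor_width_mm=13.2, focal_length_mm=8.8, image_width_px=5472):
    """Nominal ground sample distance (cm/pixel) of a nadir image.

    Defaults are published DJI Phantom 4 Pro camera values (assumed here).
    """
    gsd_mm = (sensor_width_mm * agl_m * 1000.0) / (focal_length_mm * image_width_px)
    return gsd_mm / 10.0

print(round(ground_sample_distance_cm(100), 1))  # ~2.7 cm at 100 m AGL
print(round(ground_sample_distance_cm(70), 1))   # ~1.9 cm at 70 m AGL
```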
A minimum of five ground targets were used to georeference the UAV SfM data for each plot location. Prior to flights at each site, five lightweight, brightly colored “pool noodle crosses” (two 1.2 m long foam pool toys bolted together to form a cross) were assembled and distributed in areas with sky view, near the four corners and the plot center of each flight box. The easting, northing, and elevation of each pool noodle cross (hereafter “ground target”) was recorded for 100 to 150 s with either a Trimble Geo7x (Trimble, Sunnyvale, CA, USA) or Javad Triumph-1 (Javad GNSS, Inc., San Jose, CA, USA) differential GPS. Post-processing of GPS measurements at BZ3–BZ6 yielded a reported mean horizontal precision of 24 cm and vertical precision of 32 cm. On BZ7, perhaps due to high canopy closure, the ability to connect to satellites was limited and thus horizontal precision degraded to 71 cm and vertical precision to 105 cm. We collected GPS elevations at a minimum of five locations on each plot. At BZ6 and BZ7, respectively, we collected 25 and 19 additional GPS elevations for further digital terrain model (DTM) validation.
Data were collected under full sun conditions except at BZ4 where cloud cover was patchy with occasional light precipitation. BZ4 also presented topographic challenges (Table 1); due to rain, the launch point was moved from the plot center to the adjacent access road, downslope from the plot. Thus, 100 m AGL with respect to the launch point equated to a substantially lower AGL with respect to the highest elevation subplots, resulting in a small increase in canopy occlusion.
Raw (uncalibrated) image data were processed to a sparse point cloud using structure from motion (SfM) followed by densification leveraging multi-view stereo algorithms in the Pix4D software. We combined both the 100 m AGL and 70 m AGL imagery into one dataset for all processing after finding minimal impact (<0.1 cm change at three test sites) on GSD. We followed standard Pix4D processing procedures including manual matching of the ground targets to differentially-corrected ground control points. Final point cloud (las format) densities ranged from ~4000 pts∙m−2 to >12,000 pts∙m−2 depending on GSD, availability of viable tie points, and other acquisition parameters.
From the georectified point clouds, we generated DTMs at 0.25 m resolution and canopy height models at 0.02 m resolution following the method of Pingel et al. [31]. Based on the DTM product, point elevations were converted to heights above ground for further analysis.
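A minimal sketch of the height-normalization step, assuming the DTM is a north-up grid and using a nearest-cell lookup (the Pingel et al. ground filter itself is not reimplemented here; array names are illustrative):

```python
import numpy as np

def normalize_heights(points_xyz, dtm, x_origin, y_origin, cell_size):
    """Convert point elevations to heights above ground by differencing against
    a gridded DTM (nearest-cell lookup; the grid's upper-left corner is assumed
    to sit at (x_origin, y_origin)).

    points_xyz : (N, 3) array of x, y, elevation in the DTM's coordinate system
    dtm        : 2D array of ground elevations (here at 0.25 m resolution)
    """
    cols = np.clip(((points_xyz[:, 0] - x_origin) / cell_size).astype(int), 0, dtm.shape[1] - 1)
    rows = np.clip(((y_origin - points_xyz[:, 1]) / cell_size).astype(int), 0, dtm.shape[0] - 1)
    return points_xyz[:, 2] - dtm[rows, cols]
```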

2.3. Analysis

A primary objective of this research was to assess the accuracy with which boreal forest tree species can be classified at the crown scale using the RGB point cloud from UAV SfM. The five key steps for reaching this objective were to (1) segment the point cloud into individual crowns; (2) estimate structural and spectral variables for each segment; (3) classify each segment as birch, aspen, white spruce, black spruce, or shrub; (4) aggregate crowns to FIA subplot level for calculation of TD, BA, and AGB; and (5) model TD, BA, and AGB without segmentation at the subplot-level for purposes of comparison.
Segmentation of individual tree crowns is difficult, particularly in broadleaf, mixed, or multi-layered forests [32,36]. This is generally due to an inability to determine the appropriate kernel size to simultaneously minimize omission and commission error with respect to tree stem identification [36,37]. Nevertheless, higher resolution UAV data may allow for individual crown delineation, particularly in more open-canopy forests with white spruce and black spruce dominance [23]. For this research, we first segmented the smoothed canopy height model at each plot using watershed segmentation broadly following Chen et al. [38] and Alonzo et al. [39]. Due to extremely high tree density, particularly in black spruce areas, we subsequently selected all SfM points in each watershed segment and applied a mean-shift segmentation [36]. Mean-shift segmentation has proven useful in complex forests, in part due to the single, biophysically meaningful parameter requiring tuning by the user. We allowed this parameter, the kernel bandwidth, to vary between 0 and 3 depending on the 90th percentile height of SfM points in each crown segment (i.e., canopy height). This should, for example, lead to smaller segments in low productivity black spruce stands and larger segments in taller white spruce and mixed stands.
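The second, point-cloud stage of this two-stage segmentation could look roughly like the sketch below, which runs scikit-learn's MeanShift on the SfM points within each first-stage watershed segment. The exact bandwidth-versus-height mapping is not specified in the text, so the linear ramp (with a small floor) used here is an assumption.

```python
import numpy as np
from sklearn.cluster import MeanShift

def meanshift_within_watersheds(points_xyh, ws_label_per_point, max_bandwidth=3.0):
    """Second-stage segmentation sketch: mean-shift clustering of the SfM points
    inside each first-stage watershed segment, with the kernel bandwidth scaled
    by the segment's 90th percentile height (assumed linear ramp, 0.5 m floor).

    points_xyh         : (N, 3) array of x, y, height-above-ground
    ws_label_per_point : (N,) watershed segment id for each point
    Returns an (N,) array of final crown-segment ids (1-based).
    """
    crown_ids = np.zeros(len(points_xyh), dtype=int)
    next_id = 1
    for ws_id in np.unique(ws_label_per_point):
        idx = np.where(ws_label_per_point == ws_id)[0]
        if len(idx) < 5:                      # tiny watersheds kept as single crowns
            crown_ids[idx] = next_id
            next_id += 1
            continue
        h90 = np.percentile(points_xyh[idx, 2], 90)
        # Shorter canopies (e.g., black spruce) get smaller kernels; capped at 3.
        bandwidth = float(np.clip(0.15 * h90, 0.5, max_bandwidth))
        labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(points_xyh[idx])
        crown_ids[idx] = labels + next_id
        next_id = int(crown_ids[idx].max()) + 1
    return crown_ids
```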
Following Alonzo et al. [39], we estimated a set of 23 structural and spectral parameters for each final crown segment (Table 2). The structural variables can broadly be classified as measurements of tree height/crown length (e.g., maximum tree height, crown base height) or crown shape (e.g., width at 98th percentile height, width at mean height). The spectral variables include the statistical moments of the vertical point distribution of normalized color intensities (e.g., (blue − green)/(blue + green)) [22,29].
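A sketch of how a handful of the Table 2 metrics might be computed for a single crown segment; the width-at-percentile-height slab and the crown-base-height proxy below are plausible implementations rather than the authors' exact definitions:

```python
import numpy as np

def crown_metrics(xyz, rgb):
    """Compute a few example Table 2 metrics for one crown segment.

    xyz : (N, 3) x, y, height-above-ground of the segment's SfM points
    rgb : (N, 3) red, green, blue intensities of the same points
    """
    h = xyz[:, 2]
    g = rgb[:, 1].astype(float)
    b = rgb[:, 2].astype(float)
    blue_index = (b - g) / (b + g + 1e-9)   # normalized [blue - green]/[blue + green]

    def width_at(pct):
        # crown width approximated as horizontal extent within +/- 0.5 m of the percentile height
        level = np.percentile(h, pct)
        slab = xyz[np.abs(h - level) < 0.5]
        return float(max(np.ptp(slab[:, 0]), np.ptp(slab[:, 1]))) if len(slab) > 1 else 0.0

    return {
        "ht_max": float(h.max()),
        "ht_med": float(np.median(h)),
        "ht_75p": float(np.percentile(h, 75)),
        "ht_90p": float(np.percentile(h, 90)),
        "ht_98p": float(np.percentile(h, 98)),
        "cbh": float(np.percentile(h, 5)),   # crude crown-base proxy
        "wid_at_med": width_at(50),
        "wid_at_98p": width_at(98),
        "blue_med": float(np.median(blue_index)),
    }
```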
Prior to classification, validation crowns (n = 123) comprising birch, aspen, white spruce, black spruce, and shrub were manually delineated using tree locations from FIA inventories and 2017 field data collection. Structural attributes were calculated for validation crowns as described above. Using a canonical discriminant analysis (CDA) classifier [40] and a cross-validation approach, the best one-, two-, and three-variable sets were selected using an n-choose-k exhaustive search. This search was run independently to maximize species-level accuracy and leaf-type-level accuracy. The variable set minimizing the misclassification rate was selected and used to classify all segments on the 20 subplots. Finally, subplots were classified by forest type (black spruce or mixed) by determining the dominant crown-scale classification after weighting by AGB.
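The exhaustive variable search can be expressed compactly; the sketch below substitutes scikit-learn's LinearDiscriminantAnalysis for the canonical discriminant classifier and assumes a feature matrix built from crown metrics like those above:

```python
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def best_variable_subset(X, y, feature_names, k=3, cv=5):
    """Exhaustive n-choose-k search for the k-variable subset maximizing
    cross-validated accuracy (LDA stands in for the CDA classifier here).

    X : (n_crowns, n_features) feature matrix; y : (n_crowns,) species labels
    """
    best_acc, best_cols = -np.inf, None
    for cols in combinations(range(X.shape[1]), k):
        acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, list(cols)], y, cv=cv).mean()
        if acc > best_acc:
            best_acc, best_cols = acc, cols
    return [feature_names[i] for i in best_cols], best_acc
```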
We estimated tree density (TD), basal area (BA), and aboveground, dry-weight biomass (AGB) for each subplot. TD was modeled based on the number of crown segments with heights over 5 m. This height threshold was roughly consistent with allometric relationships between DBH and tree height for smaller microplot trees (i.e., those with DBH between 2.54 and 12.7 cm). BA was modeled through allometric estimation of crown segment DBH using correlated structural metrics. Separate BA models were estimated, one for broadleaf and one for needleleaf species. AGB was modeled at the subplot level from the sum of individual crown volumes measured using 75th percentile segment height (ht_75p) and crown width at median height (wid_at_med). Ordinary least squares regression was used for all models in this study with one predictor and one response variable.
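A sketch of the crown-to-subplot aggregation described above, assuming per-crown metric dictionaries like those in the earlier sketch and single-predictor ordinary least squares fits:

```python
import numpy as np

SUBPLOT_AREA_HA = np.pi * 7.3 ** 2 / 10000.0   # 7.3 m radius FIA subplot, in ha

def subplot_predictors(crowns):
    """Aggregate per-crown metric dictionaries into the subplot-level predictors
    described in the text: the density of segments taller than 5 m and the sum
    of ht_75p x wid_at_med crown volumes.
    """
    n_tall = sum(1 for c in crowns if c["ht_max"] > 5.0)
    segment_density = n_tall / SUBPLOT_AREA_HA                         # segments ha-1
    crown_volume_sum = sum(c["ht_75p"] * c["wid_at_med"] for c in crowns)
    return segment_density, crown_volume_sum

def fit_ols(x, y):
    """Single-predictor ordinary least squares, as used for all models here."""
    slope, intercept = np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)
    return intercept, slope   # y_hat = intercept + slope * x
```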
For comparison, we modeled TD, BA, and AGB directly at the subplot level (area-based approach) using standard lidar techniques based on the return distributions aggregated at the subplot scale. Specifically, we calculated median height (ht_med_sp where “sp” indicates “subplot”), 75th percentile height (ht_75p_sp), 90th percentile height (ht_90p_sp), 98th percentile height (ht_98p_sp), skewness of the heights (ht_skw_sp), and kurtosis of the heights (ht_kurt_sp). Each model was estimated using only one of these predictor variables. Field-estimated TD and BA were log-transformed prior to modeling after visual assessment of bivariate plots including height metrics.
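The area-based alternative reduces to computing subplot-level height statistics and fitting a single-predictor OLS model to the log-transformed field values; a minimal sketch (no back-transformation bias correction applied):

```python
import numpy as np
from scipy.stats import kurtosis, skew

def subplot_height_metrics(heights):
    """Area-based predictors computed from all SfM point heights in a subplot."""
    h = np.asarray(heights, float)
    return {
        "ht_med_sp": float(np.median(h)),
        "ht_75p_sp": float(np.percentile(h, 75)),
        "ht_90p_sp": float(np.percentile(h, 90)),
        "ht_98p_sp": float(np.percentile(h, 98)),
        "ht_skw_sp": float(skew(h)),
        "ht_kurt_sp": float(kurtosis(h)),
    }

def fit_log_ols(x, y_field):
    """OLS on log-transformed field TD or BA against a single height metric;
    predictions are back-transformed with exp."""
    coef = np.polyfit(np.asarray(x, float), np.log(np.asarray(y_field, float)), 1)
    return lambda x_new: np.exp(np.polyval(coef, np.asarray(x_new, float)))
```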

3. Results

3.1. UAV DTM

Except at BZ7, the mean absolute error between SfM-derived DTMs and GPS elevations was <1 m (Figure 2). At BZ3, BZ5 (Figure 1c), and BZ6, fractional cover was low enough to allow for ample photographic coverage of the ground. Gaps in the canopy at BZ4 and BZ7 (Figure 1b) were small but spatially distributed, thus allowing for the establishment of terrain points. Greater deviation between DTM and GPS elevations at BZ7 likely reflects errors with the GPS solution, given similar offsets with lidar-derived DTM data from the National Aeronautics and Space Administration (NASA) Goddard Lidar Hyperspectral Thermal (G-LiHT) platform [41] (Figure 2e,f). It should be noted that for BZ3, BZ4, and BZ5, with only five GPS points available, georeferencing and DTM evaluation were carried out with the same set of points. This can inflate estimates of accuracy, but very low canopy cover and slope at BZ3 and BZ5 likely mitigate this issue. Furthermore, the additional DTM evaluation points at BZ6 and BZ7 provide robust DTM validation on a plot with moderate slope and canopy cover (BZ6) and against airborne lidar (BZ7).

3.2. Crown Structural Estimates

SfM heights were strongly correlated with field-measured heights (r2 = 0.94, MAE = 1.2 m; Figure 3a). Residual disagreement between SfM and field height estimates may reflect (1) the three- to six-year period between FIA measurement and UAV flight; (2) the inability to match the exact location of height measurements in broadleaf crowns; (3) the limited precision of field-based height measurements, especially in more closed-canopy conditions; and (4) UAV DTM inaccuracy, particularly underestimates of local terrain elevations (i.e., small mounds may be classified as the base of a tree instead of ground, leading to an overestimate of tree heights with SfM). Other structural metrics (Figure 3b; Table 2) as well as the quality of the underlying segmentation were validated indirectly through their use in classifying species and estimating basal area.

3.3. Species Classification Accuracy

Structural and spectral information derived from the SfM point cloud allowed for discrimination of our four tree species and one shrub class with 85% accuracy (kappa = 0.82). This result was produced using three variables, selected through exhaustive search: (1) Maximum crown height (ht_max); (2) crown width at 98th percentile height (wid_at_98p); and (3) median blueness (blue_med; Figure 4). The overall cross-validation accuracy at the leaf-type level was 90%, with a kappa score of 0.8. For leaf type discrimination only, the chosen variables were (1) median crown height (ht_med); (2) 90th percentile height (ht_90p); and (3) wid_at_98p. Due to their low stature and relatively bluish coloration, black spruce and shrubs had the highest producer (P) and user (U) accuracies (P = 1.00, U = 0.96; P = 0.94, U = 0.87 respectively). Birch (P = 0.79, U = 0.78), aspen (P = 0.74, U = 0.83), and white spruce (P = 0.77, U = 0.77) were all classified with similar accuracies averaging near 80%.

3.4. Estimates of TD, BA, and AGB

Segment count information or separate models for each forest type were necessary to model tree density. Based only on a single subplot-level model, the relationship between height percentiles and TD was poor (r2 = 0.06). TD was most accurately estimated using two subplot-level models (both based on the ht_98p_sp predictor variable), one for each forest type (r2 = 0.91, Root Mean Squared Error [RMSE] = 608 trees ha−1; Figure 5d). Nearly as accurate, however, was the segmented model where the primary input was a direct count of stems (r2 = 0.88, RMSE = 720 trees ha−1; Figure 5a).
At the crown-segment scale, species-specific allometries were used to scale height to DBH. DBH was estimated with high precision for needleleaf trees using ht_max and wid_at_75p (r2 = 0.75, RMSE = 2.5 cm). It was more difficult to use SfM canopy structural information to model DBH for broadleaf trees. Crown base height (cbh) and brightness skewness (bright_skw) only explained 35% of the variance in DBH of broadleaf trees (r2 = 0.35, RMSE = 5.0 cm). From crown-scale DBH, we aggregated to subplot-scale BA (r2 = 0.74, RMSE = 6.6 m2∙ha−1; Figure 5b). Nevertheless, BA was best modeled with no segment and no species information using the subplot-level skewness information (ht_skw_sp; r2 = 0.79, RMSE = 6.1 m2∙ha−1; Figure 5e). Specific allometric equations for estimating DBH from structural parameters are not shown due to their lack of generalizability and limited utility in this study.
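The per-stem arithmetic behind aggregating predicted DBH to subplot basal area is standard; a minimal sketch with DBH in centimeters:

```python
import numpy as np

def subplot_basal_area(dbh_cm, subplot_area_ha):
    """Aggregate predicted per-crown DBH (cm) to subplot basal area (m2 ha-1)."""
    dbh = np.asarray(dbh_cm, float)
    ba_m2 = np.pi * (dbh / 200.0) ** 2   # stem cross-sectional area at breast height
    return ba_m2.sum() / subplot_area_ha
```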
Subplot AGB was most accurately modeled based on the sum of individual crown volumes (r2 = 0.92, RMSE = 13.5 Mg∙ha−1, Figure 5c), calculated by multiplying ht_75p by wid_at_med. Without incorporating segment or species information, modeled subplot AGB (using ht_75p_sp) had lower correspondence with field estimates (r2 = 0.74, RMSE = 23.7 Mg∙ha−1, Figure 5f).

3.5. Species Proportions by FIA Plot

There was broad agreement in species allocation across the five full plots (Figure 6). Using AGB as an example, black spruce vs. mixed (the important precursor to TD estimation) was classified with 100% accuracy at the subplot level. Across all plots and metrics, allocation of black spruce was highly accurate, with a deviation from field allocation of 1% for tree density, 13% for BA, and −4% for AGB (Table 3). Other species were more variable in their allocation accuracy; broadleaf species were frequently overestimated (e.g., birch TD = 48%, BA = 71%, AGB = 47%) at the expense of white spruce (TD = −18%, BA = −24%, AGB = −50%). While the broadleaf/white spruce allocation at BZ4 is quite accurate, much of the overall inaccuracy is driven by birch commission error on BZ7 (Figure 6).
Estimates of species-specific inventory parameters derived from the SfM point cloud were largely consistent with field parameters. Modeled TD was 89% accurate when weighted by species contribution (analogous to area weighting). Estimated BA by species was 74% accurate. By contrast, modeled AGB by species was only 52% accurate compared to field measurements, due largely to the misallocation of white spruce AGB to broadleaf species.

4. Discussion

High-density point clouds from UAV images offer a promising approach to augment and extend existing plot inventories for boreal forests to bridge the spatial and temporal gaps in field data collection. To our knowledge, this study represents the first use of UAV SfM to simultaneously measure forest inventory variables (TD, BA, AGB) and proportional allocation of those quantities by tree species. Our broad finding is that the high-density structural and spectral information available via SfM allowed both for identification of canopy-dominant species and for more precise estimation of forest inventory quantities, leveraging either species or segment-count information in a manner not possible with airborne lidar data.

4.1. Segmentation

Segmentation of the SfM point cloud is an important precursor to estimating inventory attributes at the individual tree scale. Our algorithm, which linked raster-based, marker-controlled watershed segmentation with point-cloud-based mean-shift segmentation, generated promising results (r2 = 0.88). A spatially consistent and forest-type independent correlation between SfM and field-estimated TD is useful in terms of generalizing the method for tree density estimation [23]. However, this correlation does not guarantee a high degree of accuracy with respect to strict delineation of individual crowns.
There is ample evidence of successful tree crown segmentation in forests with relatively simple structure such as sparse oak woodlands [38], urban forests [39], and even-aged Norway spruce stands [23]. However, in complex or extremely dense forests such as a productive black spruce stand, there is very limited physical basis for simultaneously minimizing both omission and commission error using near-nadir imagery. Our model relating subplot-level segment count to field-estimated tree density took the form of TD = −228.53 + 1.86x, where x represents TD from the segmented SfM point cloud. The model slope, well above one, indicates a consistently undersegmented landscape, which was particularly apparent in dense black spruce stands. Efforts to improve segmentation results, including through better height-adaptive tuning of the mean-shift kernel bandwidth parameter, are a priority for future work.
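As a worked example of what the fitted slope implies (the segment count here is hypothetical):

```python
def td_from_segment_count(segments_per_ha):
    """Fitted subplot model from the text: TD = -228.53 + 1.86 x segment count."""
    return -228.53 + 1.86 * segments_per_ha

# 1000 detected segments per hectare imply ~1630 field trees per hectare,
# i.e., roughly 1.6 field stems per detected segment at that density.
print(td_from_segment_count(1000))
```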

4.2. Species Classification at Crown and Plot Scale

Spectral information in the SfM point cloud aids the classification of tree species beyond what is possible using only lidar height and intensity information [42,43,44] or coincident image data [39,45]. In the boreal region, Alonzo et al. [10] showed that black spruce, white spruce, and broadleaf species could be differentiated with 73% accuracy (82% needleleaf vs. broadleaf only) relying on lidar canopy structure, lidar terrain, and Landsat phenological information. In the present study, we employed an intuitive set of three metrics calculated from the SfM point cloud at the crown-segment scale. The combination of a height variable (ht_max), a variable capturing crown form (wid_at_98p), and a color variable (blue_med) in a CDA classifier led to an 85% cross-validated overall accuracy for discriminating white spruce, black spruce, aspen, birch, and shrub.
Based on the ratio of within-species to among-species variance, blue_med appeared to be the best single variable for distinguishing among our five species (Figure 4). This variable was specifically useful for separating the two broadleaf species, birch and aspen, which were largely inseparable using only structural variables. In fact, all species could be discriminated with 60% accuracy using only this spectral variable. This result is consistent with Michez et al. [29], who found that a normalized red-blue index and other RGB ratio metrics drove separability in their riparian study area. White spruce had a very low mean and within-class variance in wid_at_98p, and, along with black spruce, was well differentiated from the broadleaf species with this additional information. It is important to note that, of our focus species, only black spruce was separable based on height alone. Of the three selected variables, height is the most robust to segmentation error, which helps explain why black spruce proportions were mapped more accurately than other species at the plot scale (Table 3).
At the plot-scale, species were mapped most accurately with respect to tree density (89%). This accuracy was largely driven by the very high tree density of black spruce and the high accuracy of classifying that species, even without individual tree segmentation. Previous studies have similarly found that tree height is a useful input to classification algorithms due to its limited sensitivity to segmentation error [46,47]. Both BA and AGB field-proportion weighted accuracies (74% and 52% respectively) were lower due to higher proportional abundances of mixed forests and a consistent overmapping of broadleaf and undermapping of white spruce. We suspect that the lower classification performance is driven by undersegmentation in mixed plots, lowering the utility of the blue_med and particularly the wid_at_98p variables. Holmgren et al. [43] found that a variable similarly summarizing tree growth form was useful for separating Scots pine and Norway spruce, but their study area contained limited structural complexity and only two coniferous species. Techniques that lead to oversegmentation, as in Michez et al. [29], may also be beneficial in our study area.

4.3. Plot-Level Quantities (TD, BA, AGB)

Consistent with other studies using lidar or UAV SfM, we achieved high accuracy in modeling basic forest structural parameters [22,23,24,48]. Aggregated to the FIA plot scale, we mapped TD, BA, and AGB with errors of 3%, 2%, and −6%, respectively (Table 3). However, when disaggregated to the subplot level, average errors were higher (Figure 5).
Our two models for estimating tree density were similarly successful. The first, based solely on the stem count product of segmentation (RMSE = 720 trees∙ha−1), did not require information about species. The second, evaluated at the subplot scale using ht_98p_sp, required separate models for each forest type (RMSE = 608 trees ha−1). These results are consistent with the trees per ha estimate from Bonnet et al. [23]. While we achieved higher model agreement in terms of r2 (0.91 compared to 0.81), the more accurate segmentation in their study led to a lower %RMSE of 23.7% versus ours at 33.9%. Puliti et al. [22] report TD error of 39.2% in their Norway spruce-dominated landscape, though it should be noted that they reported cross-validated error in all cases. Some of our accuracies may be lower across all forest variables, and particularly in low-productivity black spruce stands, due to uncertain extrapolations of FIA microplot data.
Basal area was best estimated using neither species nor individual crown information (r2 = 0.79, RMSE = 6.1 m2∙ha−1). We suspect this is largely because of difficulty establishing an accurate canopy structure-to-DBH relationship for broadleaf trees. Here, our 45.9% RMSE is substantially higher than the 14.5% RMSE reported by Bonnet et al. [23] and the 15.4% reported by Puliti et al. [22], consistent with the high density of small-diameter trees in our boreal forest study area.
Segment volumes substantially improved modeled AGB compared to subplot-level height metrics (r2 = 0.92, RMSE = 13.5 Mg∙ha−1, %RMSE = 27.1). Bonnet et al. [23] found that their best models of canopy volume (r2 = 0.73, %RMSE = 18.5) were formulated from a combination of crown-level and plot-level statistics including the product of plot-level CHM “volume” and number of segments as well as percent canopy cover. Mixed findings in this study regarding the relative utility of segment-level and subplot-level information suggest that each scale of analysis contains information that is uniquely valuable for forest structure estimation.

4.4. Limitations and Lessons Learned

While this study is the first to simultaneously examine forest structure and species composition from UAV SfM in the boreal region, there are some limitations to report. First, our small sample size and temporal offset (3–6 years) between some field measurements and UAV flights introduced error into our study that is not reflective of either the technology or the modeling process. For instance, on BZ3, there was one 28 m white spruce tree measured in 2011 that had fallen over by 2017. This accounted for roughly half the basal area in that subplot. Due to this small sample size, we simply reported on the SfM and field relationships, rather than assessing accuracy using cross validation.
Second, UAV data acquisition was difficult along steep elevational gradients. With the consumer-grade DJI Phantom 4 Pro, we had trouble adjusting the AGL to maintain a constant GSD and, more importantly, adequate image overlap at both high and low elevations. This was particularly problematic at BZ4 where, due to rain, we were forced to launch from a road downslope from the plot. Thus, data coverage at the higher elevations was degraded. Similar issues were mitigated at BZ6 using a stair-stepped flight plan with multiple altitude levels, but this would ideally be accomplished with greater precision and continuous AGL adjustment. At some level, our choice to fly at both 100 m AGL and 70 m AGL also mitigated the impact of elevational gradients, but this effect was not quantified.
A final lesson learned (and successfully implemented in this field campaign) was the importance of using ground targets that are large, bright, and unique in coloration. We ran a test campaign in which all targets shared the same orange-and-silver striped pattern, assuming we would be able to differentiate them based on spatial context. In practice, this was quite difficult, as many photos of the forested study sites looked the same from 100 m AGL. Thus, in Bonanza Creek, it was very helpful to have a large bag of assorted pool noodles to assemble into a rainbow of ground targets.

5. Conclusions

Tree density, basal area, aboveground biomass, and species composition in boreal forests can be modeled with high accuracy using UAV SfM. Our study builds on existing research by demonstrating the potential for classifying boreal forest types at the individual crown level. We found that the inclusion of spectral information from the RGB point cloud was critical to supplement structural information in our simple discriminant analysis classifier. Further, spectral information and height are both potentially useful, even in the absence of high quality, crown-level segmentation. While models of BA and AGB were adequately specified with only canopy structural information, modeling TD across both black spruce and mixed forest types required stratification by species.
Beyond highlighting the value of UAV SfM for plot-level study, our results hint at the utility of SfM acquired across larger spatial extents by manned aircraft. For example, the G-LiHT platform [41] has recently been upgraded with a mapping-grade camera capable of achieving 3 cm GSD from its typical 330 m flight AGL. High density SfM point clouds acquired in this manner will greatly improve the detail with which we can monitor boreal forest species and canopy structure over increasingly large spatial extents. In combination, UAV and airborne SfM offer a promising approach to augment the spatial and temporal coverage of forest inventory measurements in boreal forests to track changes in forest structure and composition from climate warming, insect outbreaks, and large-scale disturbances from fire and permafrost degradation.

Acknowledgments

Funding for this research was provided by the NASA Carbon Monitoring System and the USDA Forest Service. The authors also thank Anthony Santana for assistance with SfM data processing.

Author Contributions

M.A., H.-E.A., D.C.M. and B.D.C. conceived and designed the project; M.A. and H.-E.A. performed the field data collection; M.A. processed and analyzed the data; H.-E.A., D.C.M. and B.D.C. contributed reagents/materials/analysis tools; M.A. wrote the paper with assistance from D.C.M. and H.-E.A.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chapin, F.S., III; McGuire, A.D.; Ruess, R.W.; Walker, M.W.; Boone, R.D.; Edwards, M.E.; Finney, B.; Hinzman, L.D.; Jones, J.B.; Juday, G.B. Summary and synthesis: Past and future changes in the Alaskan boreal forest. In Alaska’s Changing Boreal Forest; Oxford University Press: New York, NY, USA, 2006; pp. 332–338. [Google Scholar]
  2. Turetsky, M.R.; Kane, E.S.; Harden, J.W.; Ottmar, R.D.; Manies, K.L.; Hoy, E.; Kasischke, E.S. Recent acceleration of biomass burning and carbon losses in Alaskan forests and peatlands. Nat. Geosci. 2011, 4, 27–31. [Google Scholar] [CrossRef]
  3. Intergovernmental Panel on Climate Change. Climate Change 2007: The Physical Science Basis; Cambridge University Press: New York, NY, USA, 2007. [Google Scholar]
  4. Beck, P.S.A.; Goetz, S.J. Satellite observations of high northern latitude vegetation productivity changes between 1982 and 2008: Ecological variability and regional differences. Environ. Res. Lett. 2011, 6, 45501. [Google Scholar] [CrossRef]
  5. Morton, D.C.; Collatz, G.J.; Wang, D.; Randerson, J.T.; Giglio, L.; Chen, Y. Satellite-based assessment of climate controls on US burned area. Biogeosciences 2013, 10, 247–260. [Google Scholar] [CrossRef] [Green Version]
  6. Rogers, B.M.; Soja, A.J.; Goulden, M.L.; Randerson, J.T. Influence of tree species on continental differences in boreal fires and climate feedbacks. Nat. Geosci. 2015, 8, 228–234. [Google Scholar] [CrossRef]
  7. Nelson, R.F.; Ranson, K.J.; Sun, G.; Kimes, D.S.; Kharuk, V.; Montesano, P. Estimating Siberian timber volume using MODIS and ICESat/GLAS. Remote Sens. Environ. 2009, 113, 691–701. [Google Scholar] [CrossRef]
  8. Ju, J.; Masek, J.G. The vegetation greenness trend in Canada and US Alaska from 1984–2012 Landsat data. Remote Sens. Environ. 2016, 176, 1–16. [Google Scholar] [CrossRef]
  9. Randerson, J.T.; Liu, H.; Flanner, M.G.; Chambers, S.D.; Jin, Y.; Hess, P.G.; Pfister, G.; Mack, M.C.; Treseder, K.K.; Welp, L.R.; et al. The impact of boreal forest fire on climate warming. Science 2006, 314, 1130–1132. [Google Scholar]
  10. Alonzo, M.; Morton, D.C.; Cook, B.D.; Andersen, H.-E.; Babcock, C.; Pattison, R. Patterns of canopy and surface layer consumption in a boreal forest fire from repeat airborne lidar. Environ. Res. Lett. 2017, 12. [Google Scholar] [CrossRef]
  11. Veraverbeke, S.; Rogers, B.M.; Randerson, J.T. Daily burned area and carbon emissions from boreal fires in Alaska. Biogeosciences 2015, 12, 3579–3601. [Google Scholar] [CrossRef]
  12. Babcock, C.; Finley, A.O.; Cook, B.D.; Weiskittel, A.; Woodall, C.W. Modeling forest biomass and growth: Coupling long-term inventory and LiDAR data. Remote Sens. Environ. 2016, 182, 1–12. [Google Scholar] [CrossRef]
  13. Hopkinson, C.; Chasmer, L.; Gynan, C.; Mahoney, C.; Sitar, M. Multisensor and multispectral lidar characterization and classification of a forest environment. Can. J. Remote Sens. 2016, 42, 501–520. [Google Scholar] [CrossRef]
  14. Reutebuch, S.E.; Andersen, H.E.; McGaughey, R.J. Light detection and ranging (LIDAR): An emerging tool for multiple resource inventory. J. For. 2005, 103, 286–292. [Google Scholar] [CrossRef]
  15. Wulder, M.A.; White, J.C.; Nelson, R.F.; Næsset, E.; Ørka, H.O.; Coops, N.C.; Hilker, T.; Bater, C.W.; Gobakken, T. Lidar sampling for large-area forest characterization: A review. Remote Sens. Environ. 2012, 121, 196–209. [Google Scholar] [CrossRef]
  16. Andersen, H.; Strunk, J.; Temesgen, H.; Atwood, D. Using multilevel remote sensing and ground data to estimate forest biomass resources in remote regions: A case study in the boreal forests of interior Alaska. Can. J. Remote Sens. 2012, 37, 1–16. [Google Scholar] [CrossRef]
  17. Finley, A.O.; Banerjee, S.; Zhou, Y.; Cook, B.D.; Babcock, C. Joint hierarchical models for sparsely sampled high-dimensional LiDAR and forest variables. Remote Sens. Environ. 2017, 190, 149–161. [Google Scholar] [CrossRef]
  18. Ene, L.T.; Gobakken, T.; Andersen, H.E.; Næsset, E.; Cook, B.D.; Morton, D.C.; Babcock, C.; Nelson, R. Large-area hybrid estimation of aboveground biomass in interior Alaska using airborne laser scanning data. Remote Sens. Environ. 2018, 204, 741–755. [Google Scholar] [CrossRef]
  19. Zald, H.S.J.; Wulder, M.A.; White, J.C.; Hilker, T.; Hermosilla, T.; Hobart, G.W.; Coops, N.C. Integrating Landsat pixel composites and change metrics with lidar plots to predictively map forest structure and aboveground biomass in Saskatchewan, Canada. Remote Sens. Environ. 2016, 176, 188–201. [Google Scholar] [CrossRef]
  20. Feng, Q.; Liu, J.; Gong, J. UAV Remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef]
  21. Anderson, K.; Gaston, K.J. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front. Ecol. Environ. 2013, 11, 138–146. [Google Scholar] [CrossRef]
  22. Puliti, S.; Ørka, H.O.; Gobakken, T.; Næsset, E. Inventory of small forest areas using an unmanned aerial system. Remote Sens. 2015, 7, 9632–9654. [Google Scholar] [CrossRef] [Green Version]
  23. Bonnet, S.; Lisein, J.; Lejeune, P. Comparison of UAS photogrammetric products for tree detection and characterization of coniferous stands. Int. J. Remote Sens. 2017, 38, 5310–5337. [Google Scholar]
  24. Messinger, M.; Asner, G.P.; Silman, M. Rapid assessments of Amazon forest structure and biomass using small unmanned aerial systems. Remote Sens. 2016, 8, 615. [Google Scholar] [CrossRef]
  25. Cunliffe, A.M.; Brazier, R.E.; Anderson, K. Ultra-fine grain landscape-scale quantification of dryland vegetation structure with drone-acquired structure-from-motion photogrammetry. Remote Sens. Environ. 2016, 183, 129–143. [Google Scholar] [CrossRef] [Green Version]
  26. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal altitude, overlap, and weather conditions for computer vision UAV estimates of forest structure. Remote Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef]
  27. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  28. Lisein, J.; Pierrot-Deseilligny, M.; Bonnet, S.; Lejeune, P. A photogrammetric workflow for the creation of a forest canopy height model from small unmanned aerial system imagery. Forests 2013, 4, 922–944. [Google Scholar] [CrossRef]
  29. Michez, A.; Piégay, H.; Lisein, J.; Claessens, H.; Lejeune, P. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system. Environ. Monit. Assess. 2016, 188, 1–19. [Google Scholar] [CrossRef] [PubMed]
  30. USDA Forest Service. Field Instructions for the Annual Inventory of Alaska. Available online: https://www.fs.fed.us/pnw/rma/fia-topics/documentation/field-manuals/documents/Annual/2017_%20AFSL_FIA_Field_Manual.pdf (accessed on 3 March 2018).
  31. Pingel, T.J.; Clarke, K.C.; McBride, W.A. An improved simple morphological filter for the terrain classification of airborne LIDAR data. ISPRS J. Photogramm. Remote Sens. 2013, 77, 21–30. [Google Scholar] [CrossRef]
  32. Dalponte, M.; Ørka, H.O.; Ene, L.T.; Gobakken, T.; Næsset, E. Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data. Remote Sens. Environ. 2014, 140, 306–317. [Google Scholar] [CrossRef]
  33. Dandois, J.P.; Ellis, E.C. High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision. Remote Sens. Environ. 2013, 136, 259–276. [Google Scholar] [CrossRef]
  34. Lisein, J.; Michez, A.; Claessens, H.; Lejeune, P. Discrimination of deciduous tree species from time series of unmanned aerial system imagery. PLoS ONE 2015, 10, 1–20. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Puliti, S.; Ene, L.T.; Gobakken, T.; Næsset, E. Use of partial-coverage UAV data in sampling for large scale forest inventories. Remote Sens. Environ. 2017, 194, 115–126. [Google Scholar] [CrossRef]
  36. Ferraz, A.; Bretar, F.; Jacquemoud, S.; Gonçalves, G.; Pereira, L.; Tomé, M.; Soares, P. 3-D mapping of a multi-layered Mediterranean forest using ALS data. Remote Sens. Environ. 2012, 121, 210–223. [Google Scholar] [CrossRef]
  37. Breidenbach, J.; Næsset, E.; Lien, V.; Gobakken, T.; Solberg, S. Prediction of species specific forest inventory attributes using a nonparametric semi-individual tree crown approach based on fused airborne laser scanning and multispectral data. Remote Sens. Environ. 2010, 114, 911–924. [Google Scholar] [CrossRef]
  38. Chen, Q.; Baldocchi, D.; Gong, P.; Kelly, M. Isolating individual trees in a savanna woodland using small footprint lidar data. Photogramm. Eng. Remote Sens. 2006, 72, 923–932. [Google Scholar] [CrossRef]
  39. Alonzo, M.; Bookhagen, B.; Roberts, D.A. Urban tree species mapping using hyperspectral and lidar data fusion. Remote Sens. Environ. 2014, 148, 70–83. [Google Scholar] [CrossRef]
  40. Alonzo, M.; Roth, K.; Roberts, D. Identifying Santa Barbara’s urban tree species from AVIRIS imagery using canonical discriminant analysis. Remote Sens. Lett. 2013, 4. [Google Scholar] [CrossRef]
  41. Cook, B.D.; Nelson, R.F.; Middleton, E.M.; Morton, D.C.; McCorkel, J.T.; Masek, J.G.; Ranson, K.J.; Ly, V.; Montesano, P.M. NASA Goddard’s LiDAR, hyperspectral and thermal (G-LiHT) airborne imager. Remote Sens. 2013, 5, 4045–4066. [Google Scholar] [CrossRef]
  42. Kim, S.; McGaughey, R.J.; Andersen, H.E.; Schreuder, G. Tree species differentiation using intensity data derived from leaf-on and leaf-off airborne laser scanner data. Remote Sens. Environ. 2009, 113, 1575–1586. [Google Scholar] [CrossRef]
  43. Holmgren, J.; Persson, Å. Identifying species of individual trees using airborne laser scanner. Remote Sens. Environ. 2004, 90, 415–423. [Google Scholar] [CrossRef]
  44. Korpela, I.; Ørka, H.O.; Maltamo, M.; Tokola, T.; Hyyppä, J. Tree species classification using airborne LiDAR—Effects of stand and tree parameters, downsizing of training set, intensity normalization, and sensor type. Silva Fenn. 2010, 44, 319–339. [Google Scholar] [CrossRef]
  45. Dalponte, M.; Bruzzone, L.; Gianelle, D. Tree species classification in the Southern Alps based on the fusion of very high geometrical resolution multispectral/hyperspectral images and LiDAR data. Remote Sens. Environ. 2012, 123, 258–270. [Google Scholar] [CrossRef]
  46. Dalponte, M.; Bruzzone, L.; Gianelle, D. Fusion of hyperspectral and LIDAR remote sensing data for classification of complex forest areas. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1416–1427. [Google Scholar] [CrossRef]
  47. Jones, T.G.; Coops, N.C.; Sharma, T. Assessing the utility of airborne hyperspectral and LiDAR data for species distribution mapping in the coastal Pacific Northwest, Canada. Remote Sens. Environ. 2010, 114, 2841–2852. [Google Scholar] [CrossRef]
  48. Hudak, A.T.; Crookston, N.L.; Evans, J.S.; Falkowski, M.J.; Smith, A.M.S.; Gessler, P.E.; Morgan, P. Regression modeling and mapping of coniferous forest basal area and tree density from discrete-return lidar and multispectral satellite data. Can. J. Remote Sens. 2006, 32, 126–138. [Google Scholar] [CrossRef]
Figure 1. (a) Study location 30 km SW of Fairbanks AK in the Bonanza Creek Experimental Forest; (b) composite unmanned aerial vehicle (UAV) orthophoto from BZ7 with Forest Inventory and Analysis (FIA) subplots (red) and an example single-grid, nadir-view UAV flight plan (white); (c) composite UAV orthophoto from BZ5 with FIA subplots in red and white lines to illustrate an example double-grid, off-nadir flight plan.
Figure 2. Digital terrain model (DTM) compared to differential GPS: (a) through (e) are SfM-derived and (f) is the same plot as (e) but showing the DTM from Goddard Lidar Hyperspectral Thermal platform (G-LiHT) data. Red color is to differentiate G-LiHT data from SfM data (in blue). Error is reported as mean absolute error (MAE).
Figure 3. Structural parameters for individual trees were extracted from the SfM point cloud using segmentation. (a) Max crown height from SfM compared to field measurements; (b) an example of width at percentile height calculations for each focus species.
Figure 4. Species separability boxplots for the three variables maximizing species separability: (a) max crown height; (b) crown width at 98th percentile height; (c) median blueness.
Figure 5. Subplot-aggregated estimates of forest structure. Top row (a–c): segment-level analysis followed by aggregation; bottom row (d–f): modeling using the full subplot point cloud (area-based, no segmentation).
Figure 6. Proportional allocation of tree density (TD), basal area (BA), and aboveground biomass (AGB) by species for each FIA plot (n = 5). Totals were estimated using the best-available method (see Figure 5). At each plot, field proportions are shown on the left and UAV proportions on the right.
Table 1. Plot information.
Plot | Elev. Range (m) | Slope | Aspect | Forest Type | AGB (Mg∙ha−1)
BZ3 | 238–244 | Low | N/A | White spruce/birch | 13
BZ4 | 166–189 | 35% | East | White spruce/birch/aspen | 84
BZ5 | 120–121 | Low | N/A | Black spruce | 1.5
BZ6 | 431–455 | 37% | North | Black spruce | 37
BZ7 | 198–205 | 10% | East | White spruce/birch | 89
AGB: aboveground biomass.
Table 2. Variables calculated using structure from motion (SfM) point cloud structural and spectral information.
Variable Name | Description | Used for
ht_max | Max tree height | Species classification, DBH model (needleleaf)
ht_med(_sp) | Percentile heights of SfM points in crown segment | Leaf type classification
ht_75p(_sp) |  | Crown volume model
ht_90p(_sp) |  | Leaf type classification
ht_98p(_sp) |  | Species classification; Subplot-level TD estimate
ht_mean(_sp) | Mean height of SfM points in crown |
ht_skw_sp | Subplot skewness of SfM point height distribution | Subplot-level BA estimate
ht_kurt_sp | Subplot kurtosis of SfM point height distribution |
cbh | Crown base height | DBH model (broadleaf)
wid_at_med | Widths of crown at percentile heights | Crown volume model
wid_at_75p |  | DBH model (needleleaf)
wid_at_90p |  |
wid_at_98p |  | Species classification; Leaf type classification
blue_mean | Mean, median, standard deviation, skewness of [blue − green]/[blue + green] |
blue_med |  | Species classification
blue_std |  |
blue_skw |  |
green_mean | Mean, median, standard deviation, skewness of [green − red]/[green + red] |
green_med |  |
green_std |  |
green_skw |  |
bright_mean | Mean, median, standard deviation, skewness of [blue + green + red] |
bright_med |  |
bright_std |  |
bright_skw |  | DBH model (broadleaf)
DBH: diameter at breast height, TD: tree density, BA: basal area.
Table 3. Plot-level species proportional allocations by TD, BA, and AGB, compared to field estimates.
Species | TD Field (trees·ha−1) | TD UAV | TD Error | TD err % | BA Field (m2·ha−1) | BA UAV | BA Error | BA err % | AGB Field (Mg·ha−1) | AGB UAV | AGB Error | AGB err %
Birch | 155.8 | 229.9 | 74.1 | 48% | 2.0 | 3.5 | 1.4 | 71% | 7.8 | 11.5 | 3.7 | 47%
Aspen | 89.9 | 117.3 | 27.4 | 30% | 1.7 | 1.7 | 0.0 | −2% | 5.2 | 11.4 | 6.2 | 118%
White spr. | 347.5 | 285.1 | −62.4 | −18% | 6.3 | 4.8 | −1.5 | −24% | 24.4 | 12.2 | −12.2 | −50%
Black spr. | 885.5 | 893.6 | 8.1 | 1% | 2.8 | 3.1 | 0.4 | 13% | 7.7 | 7.4 | −0.3 | −4%
Total | 1478.6 | 1525.9 | 47.3 | 3% | 12.8 | 13.1 | 0.2 | 2% | 45.1 | 42.5 | −2.6 | −6%
TD: tree density, BA: basal area, AGB: aboveground biomass, UAV: unmanned aerial vehicle.
