Article

Detection of Coniferous Seedlings in UAV Imagery

Corey Feduck 1,*, Gregory J. McDermid 1 and Guillermo Castilla 2
1 Department of Geography, University of Calgary, 2500 University Dr. NW, Calgary, AB T2N 1N4, Canada
2 Northern Forestry Centre, Canadian Forest Service, 5320 122 St. NW, Edmonton, AB T6H 3S5, Canada
* Author to whom correspondence should be addressed.
Forests 2018, 9(7), 432; https://doi.org/10.3390/f9070432
Submission received: 26 June 2018 / Revised: 13 July 2018 / Accepted: 16 July 2018 / Published: 18 July 2018
(This article belongs to the Special Issue Forestry Applications of Unmanned Aerial Vehicles (UAVs) 2019)

Abstract

Rapid assessment of forest regeneration using unmanned aerial vehicles (UAVs) is likely to decrease the cost of establishment surveys in a variety of resource industries. This research tests the feasibility of using UAVs to rapidly identify coniferous seedlings in replanted forest-harvest areas in Alberta, Canada. In developing our protocols, we gave special consideration to creating a workflow that could perform in an operational context, avoiding comprehensive wall-to-wall surveys and complex photogrammetric processing in favor of an efficient sampling-based approach, consumer-grade cameras, and straightforward image handling. Using simple spectral decision rules from a red, green, and blue (RGB) camera, we documented a seedling detection rate of 75.8% (n = 149), on the basis of independent test data. While moderate imbalances between the omission and commission errors suggest that our workflow has a tendency to underestimate the seedling density in a harvest block, the plot-level associations with ground surveys were very high (Pearson’s r = 0.98; n = 14). Our results were promising enough to suggest that UAVs can be used to detect coniferous seedlings in an operational capacity with standard RGB cameras alone, although our workflow relies on seasonal leaf-off windows where seedlings are visible and spectrally distinct from their surroundings. In addition, the differential errors between the pine seedlings and spruce seedlings suggest that operational workflows could benefit from multiple decision rules designed to handle diversity in species and other sources of spectral variability.

1. Introduction

The rapid assessment of forest and vegetation structure using unmanned aerial vehicles (UAVs) is likely to decrease the cost of field surveys for a variety of resource industries. UAVs may be particularly well suited for applications in reforestation, because they can collect very high-resolution imagery of small features with great operational flexibility. In Canada, establishment surveys are conducted at every forest-harvest area that has been replanted, to assess the adequacy of spacing, survival, growth, and species composition. For example, the Regeneration Standards of Alberta [1] call for a reconnaissance survey to be conducted three growing seasons after planting, wherein certified forestry technicians walk through the harvest area to visually estimate ‘stocking’, the percentage of 10 m² cells within the block that contain a live seedling at least 30 cm in height from an acceptable tree species. If the estimated stocking rate is above 84%, the harvest area passes the establishment assessment. If the stocking rate is below 70%, the harvest area is rejected and must be replanted. If the stocking falls between 70% and 84%, the harvest area becomes subject to further assessment [1].
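For readers implementing these rules, the decision logic reduces to a few threshold tests; the following minimal sketch (our own illustration, with a hypothetical function name, not part of the standard [1]) encodes the thresholds described above.

```python
def stocking_outcome(stocking_percent: float) -> str:
    """Classify a replanted harvest area by its estimated stocking rate,
    following the thresholds described in the Regeneration Standards of
    Alberta [1]. Stocking is the percentage of 10 m^2 cells containing a
    live, acceptable seedling at least 30 cm in height."""
    if stocking_percent > 84:
        return "pass"               # establishment assessment passed
    if stocking_percent < 70:
        return "reject"             # harvest area must be replanted
    return "further assessment"     # 70-84%: subject to further assessment

print(stocking_outcome(78.0))  # -> further assessment
```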
If the condition, minimum height, and species of seedlings within the sample cells used to perform establishment surveys could be derived from UAV imagery, then the reduced need for manual surveys could lead to considerable cost savings. However, it needs to be demonstrated that seedlings can be automatically or semi-automatically detected by remote sensing. Coniferous seedlings under five years of age in Canada have crown diameters between 5 and 30 cm; imagery of very high spatial resolution is required to detect seedlings of this size. Although automated procedures have been used to detect small tree stumps [2], and to detect weed seedlings on bare soil in an agricultural application [3], we could not find any previous research attempting to identify coniferous seedlings of this age in an automated manner. Early studies of remote sensing applications in forestry [4,5] used manual photointerpretation of large-scale aerial photographs acquired from piloted helicopters. For example, Hall and Aldred [5] detected just 44% of seedlings with crown diameters less than 30 cm using 1:500 color-infrared photography. More recently, Goodbody et al. [6] classified 2.4 cm spatial resolution red, green, and blue (RGB) imagery acquired from a UAV over harvest blocks replanted 5 to 15 years earlier in British Columbia, Canada, and obtained user accuracies for coniferous cover between 35% and 97%. However, no attempt was made to detect individual conifer seedlings or saplings, which in their study area were greater than 1 m in height. These authors acknowledged that the potential to detect all stems using aerial remote-sensing technologies is still limited, and called for further research.
Depending on the spatial resolution of the imagery, seedling detection is akin to individual-tree detection in mature forests, which has been well studied using satellites [7], piloted aircraft [4,8,9,10,11,12,13,14,15] and, more recently, UAVs [16,17,18,19,20,21,22,23]. Given a fixed spatial resolution, the accuracy with which individual trees are detected tends to improve with crown size [11]. For example, using 15 cm resolution multispectral airborne imagery and an image-segmentation algorithm, Hirschmugl et al. [24] obtained 70% accuracy on replanted coniferous trees between 5 and 10 years of age, with an average height of 138 cm. Accuracy improved for crown diameters larger than 30 cm. UAV-based studies of tree detection have achieved even higher accuracies. For example, Wallace et al. [17] obtained overall accuracies of 97% (n = 308) using a tree-crown segmentation algorithm applied to a dense (>50 points/m²) light detection and ranging (LiDAR) point cloud. However, LiDAR sensors come with equipment costs and operational difficulties that some may wish to avoid. Consumer-grade cameras mounted on UAVs provide an attractive low-cost source of vegetation information over disturbed and regenerating forests [25,26].
In this research, we show how millimetric (i.e., spatial resolution on the order of millimeters) UAV imagery can be used to detect coniferous seedlings less than five years old within forest-harvest areas in Alberta, Canada, with good accuracies using simple processing workflows. This study represents the first step towards creating a larger UAV-based stocking-assessment workflow, which, once realized, could extend to the remote assessment of height (from LiDAR or photogrammetric point clouds), species, and condition (from deep-learning algorithms). Achieving this complete workflow would reduce the need for in situ assessments of forest stocking, and provide a powerful new tool for establishment surveys.

2. Materials and Methods

2.1. Study Area

We surveyed two replanted forest-harvest areas located in western Alberta, Canada, for this study (Figure 1). One of the harvest areas was used to develop and train the seedling-detection algorithm, and is hereafter referred to as the ‘training study area’. A second block was used as an independent validation site and is hereafter referred to as the ‘test study area’. The 20.3 ha training study area is managed by Weyerhaeuser Canada (Pembina Forest Management Area), and was replanted approximately four years before our survey, with a mix of lodgepole pine (Pinus contorta) and white spruce (Picea glauca) seedlings. The 3.3 ha test study area was also replanted with a mix of lodgepole pine and white spruce seedlings between three and four years before our field survey, although most individuals we encountered in the field were lodgepole pine. The vegetation surrounding these two harvest areas consists of forests regenerating to mixed stands of aspen (Populus tremuloides), lodgepole pine, and white spruce.

2.2. Reference Data

Field crews surveyed the test study area on 24 April 2014, and the training study area on 1 October 2015. We timed our visits to exploit the leaf-off seasonal windows in the spring and fall, when coniferous seedlings show increased spectral contrast with their surroundings. We contend that the 1.5-year gap between the two surveys is irrelevant, given that testing took place independently of the training. The crews located randomly generated plot centers using handheld global positioning system (GPS) units and established 50 m² circular plots with 3.99 m radii (Figure 2). Biodegradable clay targets or plastic boards were placed in the center of the plot and pinned down with metal spikes as ground control points, whose precise locations were recorded with a survey-grade Trimble real-time kinematic (RTK) global navigation satellite system (GNSS) unit. The plot outlines were marked using chalk or spray paint. The crews then recorded the species and precise location of each seedling inside the plot using the RTK GNSS. A total of 254 seedlings within 14 plots in the training study area, and 149 seedlings within 10 plots in the test study area were surveyed in this manner.

2.3. UAV Imagery

UAV imagery was collected by a flight crew consisting of a pilot and a spotter using a 3DR X8+ octocopter (Figure 3a). Both areas were flown in conjunction with the seedling surveys: 24 April 2014 for the test study area and 1 October 2015 for the training study area. Details of the X8+ platform and payload are summarized in Table 1. The platform was modified to carry two cameras simultaneously, one standard RGB camera and a second camera with a modified red-edge (RE) filter. Single-scene images, one RGB and one RE, were acquired for each of the 24 plots during 24 separate flights. The flights took place between 10:00 am and 5:30 pm to cover a variety of lighting conditions. The platform operated a simple automated flight plan, as follows: The X8+ launched, flew to the plot center, and hovered at 15 m above ground level to acquire imagery (single photographs) at a consistent scale (Figure 3b). We fitted a LidarLITE laser range finder (vertical accuracy < 2.5 cm) to the UAV, which allowed us to control the altitude of the X8+ for imaging. It is important to note that the circular plot did not cover the entire image, but was instead located at the center of each frame. As the UAV hovered directly over each plot prior to image acquisition, the plots were always located very close to the principal point, with the maximum off-nadir angles never exceeding 15 degrees. This reduced the terrain distortion and layover effects. Each flight was less than three minutes in length. The imagery was collected using a sampling approach, avoiding the need for wall-to-wall aerial surveys designed to image the entire harvest area.
We used a Nikon Coolpix A digital camera to collect standard RGB imagery and a modified Canon PowerShot S110 to collect imagery in the RE wavelengths. We replaced the internal near-infrared (NIR) filter in the Canon PowerShot S110 with an Event 38 near-infrared green blue (NGB) filter, which shifted the red band response to center on 715 nm. The aerial imagery was collected at a low altitude of 15 m, resulting in a ground sampling distance of 3 mm for the Nikon Coolpix A (18.5 mm focal length) and 5 mm for the Canon PowerShot S110 (5.2 mm focal length). The cameras were set to a fixed shutter speed of 1/1250 s with varying apertures, and were manually triggered using a remote control.

2.4. Data Handling and Image Analysis

The seedling locations were exported to a geographic information system (GIS) point layer, visually confirmed using the UAV imagery, and, if required, spatially edited to be centered within each seedling crown in the image. The seedlings ranged in height from 5 cm to 35 cm, and the crown radii ranged between 5 cm and 50 cm. The tall seedlings were those manually planted about four years before our study, while the short seedlings regenerated naturally. The wider crowns (up to 50 cm) corresponded generally to clusters of naturally regenerating seedlings, rather than to individuals. We treated these clusters as single entities for the purpose of this study. It is important to note that the planted seedlings (taller, generally isolated from other individuals) are more important than the naturally regenerating seedlings (shorter, sometimes occurring in clusters) when assessing stocking in the planted harvest areas. Not all of the seedlings will survive to maturity, and the planted seedlings have the best chance. Within a cluster, no more than one seedling will typically survive.
The image analysis workflow is a three-step object-based process consisting of (i) image segmentation, (ii) automated classification using a classification and regression tree (CART) machine-learning algorithm, and (iii) the merging of adjacent image objects classified as ‘seedlings’ into single seedling objects. Priority was placed on creating a workflow that is economical with regard to both UAV flight time and processing time, so image analyses were conducted on single scenes with minimal preprocessing. The challenge was to identify a classification ruleset that could perform under a variety of target and illumination conditions. We used the CART approach in the training study area to test the importance of the spectral, spatial, and textural variables for this task. On the basis of the results of this testing, we selected a single model for application in the test study area.
Red-edge and RGB imagery were co-registered into 16-bit unsigned raster layers for each sample, using ArcGIS Desktop 10.1 (Figure 4a,b). The images were rectified with first-order polynomial functions, using the seedlings and ground control points as reference marks. It is worth noting that this step (rectifying images to match field data) would not be required in an operational workflow. Additional raster layers were generated from band ratios to enhance the spectral contrast between the green seedlings and their non-photosynthetic surroundings (Figure 4c). We used Trimble eCognition Suite 9.1 (www.ecognition.com) to segment the imagery and derive image-object statistics. The input raster for the initial segmentation was a ratio of ratios (red ratio/blue ratio) scaled to the 0–255 interval. The user-defined segmentation parameters were as follows: scale = 50; shape = 0.1; and compactness = 0.3. All of the 48 images (24 RGB and 24 RE) were segmented using the same parameters. We arrived at the final segmentation parameters iteratively through trial and error. Our goal was to develop object primitives that best delineated the seedling edges from their surroundings. The resulting image objects were then further merged using a homogeneous region-growing algorithm, with shape and compactness factors of 0.1 and 0.5, respectively. Once the final image objects were generated for each plot, we assembled a number of attributes for each image object. The final list of spectral, spatial, and textural variables evaluated by the CART approach is summarized in Table 2.
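To illustrate the band-ratio input described above, the sketch below computes the red-ratio/blue-ratio layer and segments it with scikit-image's Felzenszwalb algorithm. This is an open-source stand-in, assuming scikit-image in place of eCognition: its scale, sigma, and min_size parameters are not equivalent to the scale, shape, and compactness values we used, and would need re-tuning.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def ratio_of_ratios(rgb: np.ndarray) -> np.ndarray:
    """Red ratio / blue ratio, rescaled to the 0-255 interval.
    `rgb` is an (H, W, 3) array of digital numbers (DNs)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    total = r + g + b + 1e-9          # avoid division by zero
    rr = (r / total) / (b / total + 1e-9)
    rr = (rr - rr.min()) / (rr.max() - rr.min() + 1e-9)
    return (rr * 255).astype(np.uint8)

# Segment the ratio layer into object primitives. Felzenszwalb's
# graph-based method stands in for multiresolution segmentation here.
rgb = np.random.randint(0, 65535, (500, 500, 3))  # placeholder plot image
objects = felzenszwalb(ratio_of_ratios(rgb), scale=50, sigma=0.5, min_size=20)
print(objects.max() + 1, "image objects")
```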

2.5. Machine Learning

Image-object attributes were exported to a table, resulting in a database with 18,905 records (image objects from all 14 plots in the training study area combined). Each record was then classified as either seedling or non-seedling using the CART machine-learning algorithm in the Salford Predictive Modeler (SPM v. 7.0) software (info.salford-systems.com). This algorithm generates a classification decision tree, with rules that can be used in structured query language (SQL) queries or to build a decision tree in eCognition. Three sets of models were evaluated. All three sets used the same spatial and textural attributes as predictors (Table 2), but the spectral attributes varied as follows: (i) RGB-only variables, (ii) RE-only variables, and (iii) RGB combined with RE variables. The adjacent image objects classified as seedling were merged together using a GIS ‘dissolve’ function. The detection accuracy of each model was assessed using a 10-fold cross-validation procedure. The mean overall seedling classification accuracy of each model across the 10 trials was taken as a measure of its global accuracy. The most accurate model was applied to the test study area, which served as an independent validation of our workflow.
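The sketch below mirrors this model-selection step under stated assumptions: scikit-learn's DecisionTreeClassifier in place of the Salford CART implementation, and randomly generated placeholder attributes and labels instead of our image-object database.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

# Placeholder attribute table: 18,905 image objects by 10 attributes,
# standing in for the spectral/spatial/textural variables of Table 2.
rng = np.random.default_rng(0)
X = rng.random((18905, 10))
y = rng.integers(0, 2, 18905)        # 1 = seedling, 0 = non-seedling

# A shallow tree keeps the rule set simple enough to port into
# eCognition or an SQL query, as described above.
cart = DecisionTreeClassifier(criterion="gini", max_depth=2, random_state=0)

# 10-fold cross validation, mirroring the accuracy-assessment procedure
scores = cross_val_score(cart, X, y, cv=10)
print(f"mean overall accuracy: {scores.mean():.3f}")

# Fit on the full table and print the human-readable decision rules
cart.fit(X, y)
print(export_text(cart, feature_names=[f"attr_{i}" for i in range(10)]))
```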

3. Results and Discussion

3.1. Model Selection

The overall classification accuracy of all of the image objects in the training study area was 96%, 97%, and 97% for the RGB-only, RE-only, and combined RGB-and-RE variables, respectively. It should be noted that the overall accuracy reported here (raw agreement) is a high-level accuracy statistic based on a disproportionately small number of seedling objects (242) relative to non-seedling objects (18,663). We report more detailed error analytics for the test dataset below. As the RE-only and combined RGB-and-RE models improved performance by no more than 1%, we chose the RGB-only model for parsimony.
The final CART decision tree was pruned to a simple two-rule model based on just two spectral vegetation indices, the green-red difference index and the blue-green difference index. We found that reasonable classification models could be generated using an RGB camera alone, and—more importantly—that one classification model could be used to detect coniferous seedling crowns across many sample plots in our study sites imaged under different lighting conditions. An example test-plot classification is shown in Figure 5.
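Structurally, the pruned model amounts to two threshold tests, one per index. The sketch below shows that structure only; the split values are hypothetical placeholders, not those learned from our training data.

```python
def classify_object(mean_r: float, mean_g: float, mean_b: float,
                    grdi_split: float = 12.0, bgvi_split: float = -4.0) -> str:
    """Two-rule seedling classifier in the spirit of the pruned CART tree.
    GRDI and BGVI are the two indices the tree retained; the split values
    here are illustrative, not those reported in this study."""
    grdi = mean_g - mean_r            # green-red difference index
    bgvi = mean_b - mean_g            # blue-green difference index
    if grdi > grdi_split and bgvi < bgvi_split:
        return "seedling"             # green-dominated image object
    return "non-seedling"

print(classify_object(mean_r=90.0, mean_g=120.0, mean_b=80.0))  # -> seedling
```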

3.2. Detection Accuracy, Error Patterns, and Density Estimates

To assess the detection accuracy in the test area, we considered a reference seedling detected (i.e., a true positive) if its corresponding geolocation point was inside a seedling object; otherwise, we considered the image object containing the point to be a false negative. Likewise, the seedling objects not containing a seedling geolocation point were considered false positives, and the rest of the non-seedling image objects were accordingly considered true negatives. The overall detection rate (sensitivity) for conifer seedlings in the independent test dataset was 75.8%: 113 out of the 149 seedlings surveyed in the test site were detected (Table 3). The classification model had a commission error (false positive) rate of 12.4% and an omission error (false negative) rate of 24.2%. The moderate imbalance between the omission and commission errors suggests that our workflow tends to underestimate the seedling density. This is understandable given the small size of the target seedlings and the complex environmental conditions in which they are found. The overall Kappa coefficient was 0.810 and the area under the receiver operating characteristic (ROC) curve was 0.93.
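All of these statistics follow directly from the confusion-matrix counts reported in Table 3; the short verification below reproduces them.

```python
# Counts from Table 3: true/false positives and negatives
tp, fp, fn, tn = 113, 16, 36, 8219
n = tp + fp + fn + tn

sensitivity = tp / (tp + fn)          # detection rate
commission = fp / (tp + fp)           # false-positive rate
omission = fn / (tp + fn)             # false-negative rate
overall = (tp + tn) / n               # raw agreement

# Cohen's kappa: observed agreement corrected for chance agreement
pe = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
kappa = (overall - pe) / (1 - pe)

print(f"sensitivity {sensitivity:.1%}, commission {commission:.1%}, "
      f"omission {omission:.1%}, overall {overall:.1%}, kappa {kappa:.3f}")
# -> sensitivity 75.8%, commission 12.4%, omission 24.2%,
#    overall 99.4%, kappa 0.810
```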
We noted differential errors between the pine seedlings (86% detection rate, n = 124) and spruce seedlings (24% detection rate, n = 25), which were relatively rare in our test study area. This issue could likely be addressed using multiple classification models (one per species), although we did not attempt it here.
Plot-level associations between the CART-predicted stem numbers and field observations produced a Pearson’s correlation coefficient of r = 0.98 in the test dataset (Figure 6). The plot accuracies in the test dataset ranged between 62.5% and 100%, with a mean accuracy of 86.2%. Once again, we observed a tendency of our workflow to underestimate when large numbers of seedlings are present (top end of Figure 6). This is not a critical problem: since plots with large numbers of small seedlings would be considered fully stocked anyway, a moderate underestimation bias in these conditions can be tolerated in practical applications.
Expressing the stem numbers derived from remote sensing on a per-hectare basis produced an estimated seedling density of 2160 stems/ha, versus 2500 stems/ha from the reference data. This represents an underestimation of 340 stems/ha (13.6%), which is consistent with the tendency of the UAV data to slightly underestimate seedling counts, noted previously. Puliti et al. [20] estimated the stem numbers of mature trees in 38 fixed-area plots in Norway using photogrammetric data from UAVs, and reported root-mean-square errors of 538 stems/ha (39.2%). While less accurate than our results, the area-based regression analyses used by those authors are quite different from the object-detection approach used here.
We could not find any published literature on the automated classification of very small seedlings (less than 5 years of age) against which to compare our results. Hall and Aldred [5] detected just 44% of the seedlings with crown diameters smaller than 30 cm using the manual interpretation of 1:500 scale color-infrared imagery. Our results are slightly less accurate than those of Sperlich et al. [32], who reported an 88% overall accuracy from the photogrammetric detection of 219 mature tree crowns. Not surprisingly, our accuracy is lower than that in studies using UAVs to detect mature trees in plantation contexts, which present a simpler classification problem, in which individuals are spatially and structurally homogeneous. For example, Torres-Sánchez et al. [21] reported a 95% accuracy using photogrammetric data on 54 olive trees, and Wallace et al. [17] reported a 98% detection rate of 308 eucalyptus plants in rows using UAV LiDAR data. Our results exceed those reported by Ke and Quackenbush [13] (70% user and producer accuracy) in their classification of individual trees in single-scene forest stands from piloted aerial imagery, and those of Chisholm et al. [33] (73% overall accuracy) in a below-canopy LiDAR survey of mature trees.

3.3. Challenges with Small Seedlings and Clusters of Seedlings

Many of the omission errors we encountered arose from image-segmentation challenges. Small individuals (down to 10 cm in height with 5 cm crown diameter) and clusters of seedlings with contiguous crowns posed problems for our workflow (Figure 7). While millimetric spatial resolution helps identify fine features in our environment, it comes at the cost of spatial and spectral heterogeneity. While the CART algorithm is efficient at negotiating this heterogeneity, it can do so at the expense of specificity. To guard against this tendency, we pruned the decision tree to just two rules.

3.4. A Sample-Based Approach to Silvicultural Surveys

A significant portion of our study was devoted to working with a sample-based survey approach, rather than the conventional wall-to-wall mapping that is common in remote sensing. This approach achieves significant time savings in terms of field data collection and avoids additional photogrammetric and orthomosaic processing costs. We estimate that a standard wall-to-wall UAV survey over the training site would require a flight time of 33 min, during which the platform would fly 9.8 km (Figure 8a). Alternatively, a sample-based survey of the same area could take place in under six minutes and cover just 2.8 km, a time savings of 81% (Figure 8b). While we acknowledge that wall-to-wall surveys of forest-harvest areas the size of the ones we worked in are currently possible with UAV platforms, and that wall-to-wall surveys (i.e., a census) may provide incremental benefits over sampling, we contend that sample-based flight planning is currently underutilized by the UAV community, and may be crucial to developing operational workflows.
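For readers who wish to reproduce this comparison, the back-of-envelope sketch below approximates the wall-to-wall distance with a lawnmower pattern over a square block. The cruise speed, image footprint, and sidelap are our assumptions (chosen because they roughly reproduce the 9.8 km and 33 min figures), not parameters taken from Figure 8.

```python
import math

def lawnmower_distance_km(area_ha: float, footprint_w_m: float,
                          sidelap: float) -> float:
    """Approximate flight distance for a wall-to-wall survey of a square
    block: parallel lines spaced to satisfy the sidelap requirement."""
    side_m = math.sqrt(area_ha * 10_000)          # block side length
    spacing_m = footprint_w_m * (1 - sidelap)     # distance between lines
    n_lines = math.ceil(side_m / spacing_m)
    return n_lines * side_m / 1000

# Assumed: 20.3 ha block, ~70 m image footprint (a higher mapping altitude
# than our 15 m sampling hover), 70% sidelap, 5 m/s cruise speed.
dist_km = lawnmower_distance_km(20.3, 70.0, 0.70)
speed_ms = 5.0
print(f"~{dist_km:.1f} km, ~{dist_km * 1000 / speed_ms / 60:.0f} min")
# -> ~9.9 km, ~33 min, versus 2.8 km for the sample-based plan
```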
Our workflow is based on spectral variables from a standard RGB camera alone, with no geometric pre-processing and no secondary photogrammetric products. This approach has its pros and cons. For example, the exclusion of spatial and structural variables from our workflow limits our approach to applications where seedlings are spectrally distinct from their surroundings; hence, our requirement for seasonal leaf-off conditions. The main benefit is a streamlined workflow that simplifies the survey and processing procedures. Nex and Remondino [34] estimate that the placement of ground control points and photogrammetric processing constitute 55% of the time effort required to perform a photogrammetric UAV survey, compared to 25% for flight planning and image acquisition. Alternative workflows to ours that incorporate multiple datasets often require precise geometric integration using ground-control points, which must be laid out and surveyed prior to image acquisition. Photogrammetric processing also adds significant computational costs, and may introduce ground-object distortions [35], mosaicking artefacts [36], and radiometric inconsistencies. For example, Borgogno-Mondino et al. [37] explain how color-balancing algorithms embedded in commercial image-processing packages can degrade the radiometric quality of the resulting orthomosaics and limit the effectiveness of derived spectral indices. These issues will certainly diminish in time, given the rapid advancement in direct georeferencing technology [34,38], alternative spatial processing routines [39], and integrated sensor systems. As a result, we expect future studies to find incremental value in photogrammetric data for forest-regeneration surveys, as other authors have reported with the detection of mature trees [2,21,32]. In the meantime, the benefits of the simplified workflows for operational projects are substantial.
Despite these benefits, our simplified sample-based workflow also contains a number of drawbacks. The lack of geometric correction means that images contain relief distortion and optical lens distortion, which introduce variability into the seedlings’ appearance. Excluding the geometric correction from the workflow also introduced variability into the spatial resolution of the resulting images, with the ground-sample distance (GSD) being just a simple function of the flying altitude and camera lens focal length. We reduced this scale effect by outfitting our UAV with a laser rangefinder, which allowed us to hover at exactly 15 m above ground level during imaging. With this, we ensured a common GSD of 3 mm at nadir (5 mm for the NIR camera), which became 3.2 mm at the edge of the circular plot over flat terrain. Even on slightly uneven ground (slopes in our study area did not exceed 10%), the GSD at the low side of a sloping plot did not exceed 3.4 mm. A standard UAV equipped with a conventional GNSS and barometer would be unable to acquire images in such a consistent manner. While the seedlings at the edge of plots appeared up to 30% smaller than those at the center, this effect is unlikely to decrease the detection rate, except perhaps for very small seedlings arising from natural regeneration. The occlusion by taller vegetation could also impair the detection of those seedlings close to the edge of the plots. Once again, though, this is unlikely to decrease the detection, because most shrubs were devoid of leaves at the acquisition date, and there were no saplings or mature conifer trees within the plots. Additional challenges in our workflow arose from the substantial spectral variability caused by the changing illumination conditions during our flights. While this variability could be reduced with the use of an integrated irradiance sensor, operational workflows could benefit from several classification models designed to account for diversity in seedling species, radiometric conditions, and surrounding vegetation, as well as for other sources of spectral variability. We encourage future researchers to assess the value of incorporating scene-level variables that categorize samples on the basis of brightness, time of day, latitude, greenness, and other factors.
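The scale effect discussed above can be approximated with a standard flat-terrain obliquity model, as sketched below; the 1/cos²θ scaling is our small-angle approximation for illustration, not a formula from our processing chain.

```python
import math

def radial_gsd_mm(altitude_m: float, off_nadir_deg: float,
                  gsd_nadir_mm: float = 3.0,
                  ref_altitude_m: float = 15.0) -> float:
    """Ground sample distance in the radial (range) direction for a nadir
    camera hovering at `altitude_m`: the 3 mm nadir GSD at 15 m AGL scales
    linearly with altitude and by 1/cos^2(theta) over flat terrain."""
    theta = math.radians(off_nadir_deg)
    return gsd_nadir_mm * (altitude_m / ref_altitude_m) / math.cos(theta) ** 2

# View angle at the edge of a 3.99 m radius plot imaged from 15 m AGL
edge_deg = math.degrees(math.atan(3.99 / 15.0))      # ~14.9 degrees
print(f"{radial_gsd_mm(15.0, edge_deg):.1f} mm")     # ~3.2 mm at plot edge
```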
Finally, we note that the Regeneration Standards of Alberta [1] call for between 2.77 and 12.4 sample plots per hectare, depending on the size of the harvest area being assessed. In a real-world scenario, this means that we would need to increase our sampling intensity substantially (4- or 5-fold) over the design used in this research. However, it was not our intention to conduct actual stocking assessments, but rather to create and evaluate a workflow that could perform efficiently in this respect.

4. Conclusions

In this study, we assessed the capacity of optical photography from an unmanned aerial vehicle (UAV) to perform coniferous-seedling detection in an object-based environment. The 75.8% overall detection rate of the ground-surveyed seedlings in an independent test site (n = 149) demonstrates the utility of our approach. Error analytics revealed a slight tendency to underestimate seedlings, although the plot-level associations with ground surveys were very high (r = 0.98, n = 14). Red-edge imagery offered no significant advantage over the data from a standard RGB camera, and our final decision tree comprised a simple two-rule model based on just two spectral vegetation indices, the Green-Red Difference Index and the Blue-Green Difference Index. We found spatial and textural variables to be unnecessary for identifying coniferous seedlings in the conditions we assessed. Our workflow relies on seasonal leaf-off conditions, when grasses are senesced and seedlings are spectrally distinct from their surroundings, and it would not be expected to perform with the same efficiency at other times of the year, or with deciduous seedlings.
Seedling surveys conducted with UAVs are feasible and efficient, but further research is required. For example, we expect that the increased variability encountered under operational conditions will require the complementary use of several models or the application of more sophisticated machine-learning approaches. We encourage other researchers to explore the detectability of other seedling species in new environments, and at different times of the year. Also, a full stocking-assessment workflow would require delineated seedlings to be assessed for other attributes, including height, species, and condition: challenges that will probably require enhanced spectral information and 3D data collected using light detection and ranging (LiDAR) or photogrammetry.
The trend towards quantified vegetation surveys at this level of detail is a promising development, both for forest management and in the larger context of restoration. Perhaps the most useful avenue for the future development of UAVs in forest management is repeatability: the future utility of UAVs may depend less on how well UAV-derived metrics correlate with field observations than on simple data and mensuration consistency. Wallace et al. [18] have been pioneers in this regard, publishing a study focused on the repeatability of measurements taken using UAVs. We encourage other researchers to assist in the development and reporting of forest-mensuration workflows based on remote sensing, to establish a body of literature from which the consistency and repeatability of these novel techniques can be assessed.

Author Contributions

C.F. conceived of the study, performed the field work, operated the UAV, completed the data analysis, and wrote the first draft of the manuscript. C.F., G.J.M., and G.C. designed the experiments and interpreted the results. G.C. and G.J.M. edited the manuscript and prepared the final draft for publication.

Acknowledgments

This research is part of the Boreal Ecosystem Recovery and Assessment (BERA) project (www.bera-project.org), and was supported by a Natural Sciences and Engineering Research Council of Canada Collaborative Research and Development Grant (CRDPJ 469943-14), in conjunction with Alberta-Pacific Forest Industries, Cenovus Energy, ConocoPhillips Canada, and Devon Canada Corporation. The BERA funding facilitated both the research activities and subsequent open-access publishing of this work. Jamie Miller (Weyerhaeuser Grand Prairie) and Victor Fobert (Weyerhaeuser Pembina) provided access to the forest-harvest blocks used for this study. We also thank Jennifer Thomas for her editorial review. The comments of three anonymous reviewers helped us improve the manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Alberta Environment and Sustainable Resource Development (AESRD). Regeneration Standards of Alberta; Department of Environment and Sustainable Resource Development: Edmonton, AB, Canada, 2015. [Google Scholar]
  2. Puliti, S.; Talbot, B.; Astrup, R. Tree-stump detection, segmentation, classification, and measurement using unmanned aerial vehicle (UAV) imagery. Forests 2018, 9, 120. [Google Scholar] [CrossRef]
  3. Peña, J.M.; Torres-Sanchez, J.; Serrano-Perez, A.; de Castro, A.I.; Lopez-Granados, F. Quantifying efficacy and limits of unmanned aerial vehicle (UAV) technology for weed seedling detection as affected by sensor resolution. Sensors 2015, 15, 5609–5626. [Google Scholar] [CrossRef] [PubMed]
  4. Kirby, C.L. A Camera and Interpretation System for Assessment of Forest Regeneration; Environment Canada, Canadian Forestry Service, Northern Forest Research Centre: Edmonton, AB, Canada, 1980. [Google Scholar]
  5. Hall, R.J.; Aldred, A.H. Forest regeneration appraisal with large-scale aerial photographs. For. Chron. 1992, 68, 142–150. [Google Scholar] [CrossRef] [Green Version]
  6. Goodbody, T.R.; Coops, N.C.; Hermosilla, T.; Tompalski, P.; Crawford, P. Assessing the status of forest regeneration using digital aerial photogrammetry and unmanned aerial systems. Int. J. Remote Sens. 2017, 38, 1–19. [Google Scholar] [CrossRef]
  7. Agarwal, S.; Vailshery, L.S.; Jaganmohan, M.; Nagendra, H. Mapping urban tree species using very high resolution satellite imagery: Comparing pixel-based and object-based approaches. ISPRS Int. J. Geo-Inf. 2013, 2, 220–236. [Google Scholar] [CrossRef]
  8. Gougeon, F.A. A crown-following approach to the automatic delineation of individual tree crowns in high spatial resolution aerial images. Can. J. Remote Sens. 1995, 21, 274–284. [Google Scholar] [CrossRef]
  9. Wulder, M.; Niemann, K.O.; Goodenough, D.G. Local maximum filtering for the extraction of tree locations and basal area from high spatial resolution imagery. Remote Sens. Environ. 2000, 73, 103–114. [Google Scholar] [CrossRef]
  10. Franklin, S.E.; Hall, R.J.; Smith, L.; Gerylo, G.R. Discrimination of conifer height, age and crown closure classes using Landsat-5 TM imagery in the Canadian Northwest Territories. Int. J. Remote Sens. 2003, 24, 1823–1834. [Google Scholar] [CrossRef]
  11. Pouliot, D.A.; King, D.J.; Pitt, D.G. Development and evaluation of an automated tree detection-delineation algorithm for monitoring regenerating coniferous forests. Can. J. For. Res. 2005, 35, 2332–2345. [Google Scholar] [CrossRef]
  12. Wolf, B.M.; Heipke, C. Automatic extraction and delineation of single trees from remote sensing data. Mach. Vision Appl. 2007, 18, 317–330. [Google Scholar] [CrossRef]
  13. Ke, Y.H.; Quackenbush, L.J. A comparison of three methods for automatic tree crown detection and delineation from high spatial resolution imagery. Int. J. Remote Sens. 2011, 32, 3625–3647. [Google Scholar] [CrossRef]
  14. Leckie, D.G.; Gougeon, F.; McQueen, R.; Oddleifson, K.; Hughes, N.; Walsworth, N.; Gray, S. Production of a large-area individual tree species map for forest inventory in a complex forest setting and lessons learned. Can. J. Remote Sens. 2017, 43, 140–167. [Google Scholar] [CrossRef]
  15. Panagiotidis, D.; Abdollahnejad, A.; Surovy, P.; Chiteculo, V. Determining tree height and crown diameter from high-resolution UAV imagery. Int. J. Remote Sens. 2017, 38, 2392–2410. [Google Scholar] [CrossRef]
  16. Fritz, A.; Kattenborn, T.; Koch, B. UAV-based photogrammetric point clouds—Tree stem mapping in open stands in comparison to terrestrial laser scanner point clouds. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, UAV-g2013, Rostock, Germany, 4–6 September 2013; pp. 141–146. [Google Scholar]
  17. Wallace, L.; Lucieer, A.; Watson, C.S. Evaluating tree detection and segmentation routines on very high resolution UAV LiDAR. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7619–7628. [Google Scholar] [CrossRef]
  18. Wallace, L.; Musk, R.; Lucieer, A. An assessment of the repeatability of automatic forest inventory metrics derived from UAV-borne laser scanning data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7160–7169. [Google Scholar] [CrossRef]
  19. Jaskierniak, D.; Kuczera, G.; Benyon, R.; Wallace, L. Using tree detection algorithms to predict stand sapwood area, basal area and stocking density in Eucalyptus regnans forest. Remote Sens. 2015, 7, 7298–7323. [Google Scholar] [CrossRef]
  20. Puliti, S.; Orka, H.O.; Gobakken, T.; Naesset, E. Inventory of small forest areas using an unmanned aerial system. Remote Sens. 2015, 7, 9632–9654. [Google Scholar] [CrossRef] [Green Version]
  21. Torres-Sanchez, J.; Lopez-Granados, F.; Serrano, N.; Arquero, O.; Pena, J.M. High-throughput 3-D monitoring of agricultural-tree plantations with unmanned aerial vehicle (UAV) technology. PLoS ONE 2015, 10, e0130479. [Google Scholar] [CrossRef] [PubMed]
  22. Kang, J.; Wang, L.; Chen, F.; Niu, Z. Identifying tree crown areas in undulating eucalyptus plantations using JSEG multi-scale segmentation and unmanned aerial vehicle near-infrared imagery. Int. J. Remote Sens. 2017, 38, 2296–2312. [Google Scholar] [CrossRef]
  23. Nevalainen, O.; Honkavaara, E.; Tuominen, S.; Viljanen, N.; Hakala, T.; Yu, X.W.; Hyyppa, J.; Saari, H.; Polonen, I.; Imai, N.N.; et al. Individual tree detection and classification with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sens. 2017, 9, 185. [Google Scholar] [CrossRef]
  24. Hirschmugl, M.; Ofner, M.; Raggam, J.; Schardt, M. Single tree detection in very high resolution remote sensing data. Remote Sens. Environ. 2007, 110, 533–544. [Google Scholar] [CrossRef]
  25. Chen, S.J.; McDermid, G.J.; Castilla, G.; Linke, J. Measuring vegetation height in linear disturbances in the boreal forest with UAV photogrammetry. Remote Sens. 2017, 9, 1257. [Google Scholar] [CrossRef]
  26. Hird, J.N.; Montaghi, A.; McDermid, G.J.; Kariyeva, J.; Moorman, B.J.; Nielsen, S.E.; McIntosh, A.C.S. Use of unmanned aerial vehicles for monitoring recovery of forest vegetation on petroleum well sites. Remote Sens. 2017, 9, 413. [Google Scholar] [CrossRef]
  27. Trimble eCognition Developer 8.8 Reference Book; Trimble Germany GmbH: Munich, Germany, 2015.
  28. Woebbecke, D.M.; Meyer, G.E.; Vonbargen, K.; Mortensen, D.A. Color indexes for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  29. Loris, V.; Damiano, G. Mapping the green herbage ratio of grasslands using both aerial and satellite-derived spectral reflectance. Agric. Ecosyst. Environ. 2006, 115, 141–149. [Google Scholar] [CrossRef]
  30. Jannoura, R.; Brinkmann, K.; Uteau, D.; Bruns, C.; Joergensen, R.G. Monitoring of crop biomass using true colour aerial photographs taken from a remote controlled hexacopter. Biosyst. Eng. 2015, 129, 341–351. [Google Scholar] [CrossRef]
  31. Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective; Prentice Hall: Upper Saddle River, NJ, USA, 2007. [Google Scholar]
  32. Sperlich, M.; Kattenborn, T.; Koch, B. Potential of unmanned aerial vehicle based photogrammetric point clouds for automatic single tree detection. Gemeinsame Tagung 2015, 1–6. [Google Scholar]
  33. Chisholm, R.A.; Cui, J.; Lum, S.K.Y.; Chen, B.M. UAV LiDAR for below-canopy forest surveys. J. Unmanned Veh. Syst. 2013, 1, 61–68. [Google Scholar] [CrossRef]
  34. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  35. Rosnell, T.; Honkavaara, E. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera. Sensors 2012, 12, 453–480. [Google Scholar] [CrossRef] [PubMed]
  36. Lin, C.H.; Chen, B.H.; Lin, B.Y.; Chou, H.S. Blending zone determination for aerial orthoimage mosaicking. ISPRS J. Photogramm. Remote Sens. 2016, 119, 426–436. [Google Scholar] [CrossRef]
  37. Borgogno-Mondino, E.; Lessio, A.; Tarricone, L.; Novello, V.; de Palma, L. A comparison between multispectral aerial and satellite imagery in precision viticulture. Prec. Agric. 2018, 19, 195–217. [Google Scholar] [CrossRef]
  38. Klingbeil, L.; Eling, C.; Heinz, E.; Wieland, M.; Kuhlmann, H. Direct georeferencing for portable mapping systems: In the air and on the ground. J. Surv. Eng. 2017, 143, 04017010. [Google Scholar] [CrossRef]
  39. Ribeiro-Gomes, K.; Hernandez-Lopez, D.; Ballesteros, R.; Moreno, M.A. Approximate georeferencing and automatic blurred image detection to reduce the costs of uav use in environmental and agricultural applications. Biosyst. Eng. 2016, 151, 308–327. [Google Scholar] [CrossRef]
Figure 1. Location of two replanted forest-harvest areas surveyed for this research in southwestern Alberta, Canada. Sample plots within the harvest areas are depicted with the red dots.
Figure 2. A sample plot, outlined with paint, measured 3.99 m in radius. The coordinates of both the ground control point/center point and the seedlings were measured using a survey-grade global navigation satellite system (GNSS) unit. Note that the image has been clipped to just the plot extent for the purpose of display.
Figure 3. (a) The 3DR X8+ unmanned aerial vehicle (UAV) can collect density samples at a rate of 45 s/ha. (b) The sample-based flight plan used for this study enables a rapid assessment of large forestry blocks. The UAV moves to a sample plot waypoint, descends to a 15 m altitude above ground level (AGL), captures an image, ascends, and continues to the next waypoint.
Figure 4. Detail of the UAV image from a sample plot. Left to right: (a) red, green, and blue (RGB) (3 mm spatial resolution), (b) red-edge (RE) (5 mm spatial resolution), and (c) red ratio/blue ratio (from the RGB image).
Figure 5. GNSS-surveyed reference seedlings (a) were surveyed in the field by trained personnel. UAV imagery was segmented into image objects (b) and subjected to a decision-tree classification based on spectral indices from RGB imagery. The final output (c) delineates individual seedlings, whose accuracy was checked against the reference data.
Figure 6. Association between classified seedling counts and field reference observations in the test-case study.
Figure 7. Despite using images with ultra-high spatial resolution, segmentation routines had difficulty detecting very small seedlings (a) or delineating groups of seedlings with contiguous crowns (b). Normally, individuals within clusters could only be delineated when small gaps occurred between crowns (c).
Figure 8. The flight plan for a standard wall-to-wall survey (a) is almost six times longer in duration and three times longer in distance than the sample-based approach (b) used in this research.
Table 1. Unmanned aerial vehicle (UAV) and camera specifications. RGB—red green blue; NIR—near-infrared; CMOS—complementary metal-oxide-semiconductor; NDVI—normalized difference vegetation index.

UAV Specifications: 3DR X8+
  Description: Octocopter
  Vehicle Dimensions: 35 cm × 50 cm × 21 cm
  Battery: 4S 14.8 V 10,000 mAh 10C
  Vehicle Weight with Battery: 2.56 kg
  Platform Estimated Flight Time: 15 min
  Maximum Speed: 96 km/h
  Ranging Device: LidarLITE laser range finder

Payload Specification                      RGB                   NIR
  Camera Model                             Nikon Coolpix A       Canon PowerShot S110
  Weight                                   299 g                 198 g
  Image Size (megapixels)                  16 MP                 12.1 MP
  Ground Sampling Distance at Nadir        3 mm                  5 mm
  Image Dimensions (pixels)                4928 × 3264           4000 × 3000
  Effective Field of View for Sample Plot  30°                   30°
  Focal Length                             18.5 mm               5.2 mm
  Aspect Ratio                             3:2                   4:3
  Filter                                   Stock                 Event 38 NDVI
  Bit Depth                                24                    24
  Trigger Mode                             Shutter Priority      Shutter Priority
  ISO                                      500                   500
  Shutter Speed                            1/1250 s              1/1250 s
  Maximum Aperture                         f/4                   f/3.5
  Focus Mode                               Center weighted       Center weighted
  Sensor Type                              CMOS                  CMOS
Table 2. Spectral, spatial, and textural attributes of image objects used for classification and regression tree (CART) modelling.

Spectral
  Brightness: sum of all mean layer values within the image object divided by the number of layers [27]
  Border Contrast: relative difference between the brightness and the mean intensity of the image layers [27]
  BGVI = Mean Blue DN − Mean Green DN [27]
  EGI = (2 × Mean Green DN) − Mean Red DN − Mean Blue DN [28]
  GRDI = Mean Green DN − Mean Red DN [29]
  NBGVI = (Mean Blue DN − Mean Green DN) / (Mean Blue DN + Mean Green DN) [30]
  NDVI = (Mean NIRR DN − Mean Red DN) / (Mean NIRR DN + Mean Red DN) [30]
  NEGI = [(2 × Mean Green DN) − Mean Red DN − Mean Blue DN] / [(2 × Mean Green DN) + Mean Red DN + Mean Blue DN] [31]
  NGRDI = (Mean Green DN − Mean Red DN) / (Mean Green DN + Mean Red DN) [28]
  NGBDI = (Mean Green DN − Mean Blue DN) / (Mean Green DN + Mean Blue DN) [28]
  Band means (R_mean, G_mean, B_mean, NIRR_mean, NIRG_mean, NIRB_mean): e.g., R_mean = Σ(Red DN) / n [27]
  Band ratios (R_ratio, G_ratio, B_ratio, NIRR_ratio, NIRG_ratio, NIRB_ratio): e.g., R_ratio = Mean Red DN / (Mean Red DN + Mean Green DN + Mean Blue DN) [27]
  Band standard deviations (R_std, G_std, B_std, NIRR_std, NIRG_std, NIRB_std): standard deviation of digital number (DN) values within the image object [27]

Spatial
  Border Index: ratio between the border length of an image object and that of the smallest enclosing rectangle [27]
  Asymmetry: length/width ratio between the image object and an approximated ellipse [27]
  Compactness (pixel): the product of the length and width divided by the number of pixels [27]
  Compactness (polygon): ratio of the area of the image object to the area of a circle with the same perimeter [27]
  Perimeter: pixel sum of the length of all edges in an image object [27]
  Pixel Area: number of pixels contained in an image object [27]
  Roundness: difference between the radius of the smallest enclosing ellipse and that of the largest enclosed ellipse [27]
  Volume (voxel): the number of volume elements (voxels) contained in an image object [27]

Textural
  GLCM Contrast: grey-level co-occurrence matrix contrast [27]
  GLCM Homogeneity: grey-level co-occurrence matrix homogeneity [27]
Table 3. Confusion matrix for the test data (independent validation) set. Interior matrix cells indicate both correctly classified objects (true positives [TP] and true negatives [TN]) and errors (false positives [FP] and false negatives [FN]). Square brackets in the interior cells break down the seedling reference data by species [pine/spruce].

                              Reference
                              Seedling             Non-Seedling
CART    Seedling              113 (TP) [107/6]     16 (FP)
        Non-Seedling          36 (FN) [17/19]      8219 (TN)
                              Sensitivity = 75.8%  Specificity = 99.7%

Seedling commission error (false-positive rate) = 12.4%; seedling omission error (false-negative rate) = 24.2%; overall accuracy = 99.4%; Kappa = 0.810.
