Article

Individual Urban Tree Species Classification Using Very High Spatial Resolution Airborne Multi-Spectral Imagery Using Longitudinal Profiles

Department of Earth and Space Science and Engineering, York University, 4700 Keele St., Toronto, ON M3J 1P3, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2012, 4(6), 1741-1757; https://doi.org/10.3390/rs4061741
Submission received: 10 April 2012 / Revised: 5 June 2012 / Accepted: 5 June 2012 / Published: 11 June 2012

Abstract

Individual tree species identification is important for urban forest inventory and ecology management. Recent advances in remote sensing technologies facilitate more detailed estimation of individual urban tree characteristics. This study presents an approach to improving the classification of individual tree species via longitudinal profiles derived from very high spatial resolution airborne imagery. A longitudinal profile represents the side-view shape of a tree, which plays a very important role in on-site identification of individual tree species. Decision tree classification was employed to produce the final classification result. Using this profile approach, six major tree species (Maple, Ash, Birch, Oak, Spruce, Pine) on the York University (Ontario, Canada) campus were successfully identified. Two decision trees were constructed, one knowledge-based and one derived from gain ratio criteria. The classification accuracies achieved were 84% and 86%, respectively.

1. Introduction

In recent years, researchers have become more aware of the importance of detailed land characteristics in urban systems [1]. Trees comprise a critical component of urban ecosystems and directly impact human habitations. Individual trees each exert significant influence over their environment: a tree’s roots may affect underground utility systems, its branches may affect surrounding power lines, and certain species of flowering trees may cause serious allergies in local inhabitants when in bloom. In addition, depending on species, a tree may serve as home for a variety of animals and insects, which in turn exert significant ecological influence and may represent a source of potential hazard for residents. Accurate tree species classification is therefore an essential component of urban studies, city forest inventories, ecology management, and other urban planning applications.
Traditionally, identification of tree species is conducted through photogrammetric assessment of aerial photographs by an interpreter. This technique relies heavily on the breadth of the interpreter’s experience in applying spatial discrimination criteria [2,3] and is therefore more likely to be qualitative in nature. Classification is then used to identify tree species. Classification relies on features of the objects, sometimes also known as criteria [4] or descriptors [5]. The ideal features should present the highest separability of the targeted objects, meaning the highest within-class similarity and the minimum inter-class overlap. The commonly used features are spectral signatures, texture, vegetation indices (VI), shape information, etc. Classification methods can generally be divided into two approaches: pixel-based and object-based. The former uses pixel-based features to determine similarity, while the latter requires segmentation first and uses the grouped pixels to generate features.
The spectral signature is the most commonly used feature. A compilation of articles on commonly used procedures, issues and applications of spectral signature comparison can be found in [6]. Leckie [7] extended the spectral signature algorithm to eight- and ten-band CASI imagery to facilitate classification of old-growth conifer sites along the west coast of Canada. Erikson [8] used empirically developed radiometric and morphologic parameters to classify the four common species in Sweden. Larsen [9] obtained an individual three-dimensional tree map using image geometry and contrast to identify each tree through comparison to a set of known species. However, the accuracy of spectral signature classification remains relatively low, normally less than 50%, due to high within-class variation and high inter-class similarity. A VI can be considered an extension of the spectral signature; the difference is that a VI is normally dedicated to specific parameters through its formula. A VI usually serves as a filter beforehand or as additional information in classification. In [10], VI and spectral features were combined for inversion, and NDVI was reported to contribute around 50% to the model.
Texture is considered a key visual criterion when extracting information from imagery for vegetation and forestry applications [4]. In [11], four main texture approaches were identified: statistical, geometrical, model-based and signal processing. According to [12], the most popular is the statistical approach, which considers local spatial or spectral variability. However, different criteria have been used across a variety of studies. In [13,14], the grey-level occurrence matrix (GLOM) and grey-level co-occurrence matrix (GLCM) were introduced to recognize patterns. In [15], the standard deviation, entropy, run lengths and “fractal” roughness were investigated for their separability on urban derelict lands. In [4], the first- and second-order variance and homogeneity were found to be effective in distinguishing forest age classes in IKONOS imagery. In [16], the texture metrics mean Euclidean distance, variance and mean were used to capture disturbance severity across a windstorm-damaged vegetation structure in IKONOS imagery. In [17], GLCM homogeneity, dissimilarity and entropy were combined within an object with other object-based information to identify crops in ASTER imagery. In [11], a total of 12 features (the mean, variance, entropy, correlation, contrast and second moment of GLOM and GLCM) were used to extract the structural attributes of a Eucalyptus plantation forest from IKONOS imagery. Texture features are rarely used by themselves; they are more commonly combined with the spectral signature as additional information. Texture normally provides an extra dimension of measurement and improves classification by 5%–15% compared to spectral-only cases.
Although the shape of a tree has long been recognised as significant in identifying tree species, shape information is rarely used in multi-spectral imagery classification [2]. This is because most botanists’ shape recognition is based on the side view of a tree. In most remote sensing forest studies, only a portion of the tree tops is visible in the imagery, which makes reproducing the tree shape difficult. The shape information that can be used in imagery classification is the 2D top-view shape, and the classification mainly depends on the experience of the image interpreters. On the other hand, when shape information is relatively easy to obtain, such as in LiDAR data, shape has been recognized as the primary classification parameter. Brandtberg [5] used convex and concave contours to describe shady-side curvatures. Building on developments in planar shape recognition [17], the “shape space”, in which a planar shape is represented by its angle function, was reported in [18] as a potential source of classification improvement. Shape information has proved useful for improving classification results on top of conventional spectral and texture information.
Segmentation of individual trees has been an ongoing research field for years [8]. A variety of approaches exist with the objective of automatically locating trees and individual crown boundaries, commonly from the analysis of high spatial resolution imagery [19]. Popular methods include, but are not limited to, template matching [20–22], valley following [23,24], local maxima [25], texture grouping [26], morphological operators [27], and the joining of convex edges [28]. Since segmentation is not the focus of this study, automatic processing was tested using the watershed approach proposed by [29] for its potential in large-size image processing. Even though the overall segmentation accuracy was over 80%, the main errors occurred where a few trees grouped together. For these grouped trees, manual detailed segmentation of collected ground validation data is necessary to enhance accuracy. This type of manual delineation accounted for less than 10% of the trees validated.
This study proposes a new classification feature derived from directional profiles to surmount the challenges of applying tree species classification schemes in urban settings. It was inspired by the unique growing conditions of urban trees:
  • A main error resulting from using forest classification schemes as a proxy for urban individual tree classification is miscounting the number of trees covered by the shadows of neighbouring buildings. In a forest setting, man-made structures are rarely present and shadows of neighbouring trees normally do not cover the entirety of surrounding trees. In this study, in order to accurately identify urban tree species on an individual basis, a scheme is applied prior to segmentation to recover undetected trees masked by building shadows;
  • Urban trees are much more isolated than forest trees, making it easier to define their boundaries. Urban trees typically experience less competition for resources than their forest counterparts. Consequently, urban trees are not as constrained and are more likely to grow to the capacity of their genetically determined shape and size. In contrast to forest scenarios, high spatial resolution imagery within the urban setting is capable of capturing entire single trees instead of just tree tops or canopies. Shape information for urban trees is therefore very specific and useful in the identification of individual tree species. In this study, new vegetation parameters based on longitudinal profiles of tree crowns, derived from the shape information of trees, were developed and used to construct a new decision tree to uniquely distinguish species.
Trees growing on the Keele campus of York University (Toronto, ON) can be considered typical temperate-climate urban trees and are used as test cases for this study.

2. High Spatial Resolution Imagery and Ground Data Collection

2.1. Airborne High Resolution Spatial Imagery

The main image used for this analysis is 6 cm by 6 cm high spatial resolution multi-spectral airborne imagery acquired by Air Sensing Inc. on 1 August 2007 in sunny, clear conditions. The data were supplied by the York University Map Library in raw format. The image comprises four spectral bands: Blue (460 nm), Green (570 nm), Red (670 nm) and NIR (800 nm). The image was captured at a flight height of 1282 feet. It covers the York University (Ontario, Canada) Keele campus, shown in Figure 1.

2.2. Ground Data Collections

A total of 213 trees on campus were documented using a camera and measuring tape. Physical parameters, such as tree heights and crown sizes, were measured or interpreted from images. Trees were identified using tree guide books [30–32] or by consulting field experts. A wide variety of tree species is present on the Keele campus. The six popular and important species selected are Maple, Ash, Birch, Oak, Spruce and Pine, shown in Figure 2. The documented trees comprised 32 Maples (Acer rubrum, Acer platanoides), 30 Ash (Fraxinus pennsylvanica), 14 Birch (Betula lenta), 26 Oak (Quercus rubra), 15 Spruce (Picea abies) and 25 Pine (Pinus banksiana, Pinus resinosa and Pinus sylvestris). The remaining trees belonged to other species and were not classified.

3. Methodology

3.1. Shadow Tree Recovery

The main cause of missing trees covered by shadows cast from nearby buildings is the conventional usage of the Normalized Difference Vegetation Index (NDVI), shown in Equation (1). Most segmentation and classification approaches use NDVI as a pre-processing mask to separate vegetation and non-vegetation surfaces. For forest trees with dense canopy, this gives positive results: forest trees are normally not fully covered by other trees’ shadows, and in fact only a portion of each tree top is captured by the imagery. However, as discussed in Section 1, for the isolated trees in an urban environment this approach can fail completely because of the shadows of nearby buildings. Since a shadow blocks direct illumination, the light source for the shadowed area is the surrounding scattered light. The NIR band is most significantly affected, which leads to a much lower NDVI value. Any segmentation or classification method using NDVI to separate vegetation and non-vegetation surfaces would therefore give a false result.
NDVI = (NIR − R) / (NIR + R)
where NIR and R represent the near-infrared and red reflectance, respectively.
The threshold value for NDVI was determined experimentally. In this study, NDVI < 0.2 is used to classify the non-vegetation surface. The value of 0.2 was selected as a trade-off between separating trees from the non-vegetation background and preserving as many shaded trees as possible. However, all trees within a building shadow suffered some level of crown loss in this step. In urban cases, whole or partial trees are missed due to this inappropriate classification, as shown in Figure 3(b). Figure 3 is a subset of the campus image; the ellipse-circled area contains over 15 ash trees. The NDVI mask missed half of the ash trees due to the shadow of the nearby Ross Building. Keep in mind that in forest applications, shadows from nearby objects other than trees rarely exist. To solve this issue, a ratio version of the green radiance index, the Greenness Index (GI) [33], is used, which can be expressed as:
GI = G / (G + B + R)
where B, G and R represent the blue, green and red reflectance, respectively.
Figure 3(c) shows the performance of GI for the same area. It is clear that GI recovered the trees that NDVI missed.
Using ratios to create a vegetation index is a simple but effective approach, which Gitelson and Merzlyak used to successfully retrieve leaf chlorophyll content [34]. GI is the green percentage of a pixel, representing the greenness of the object: if the object is green, its GI is greater than 0.33. Since shadow-covered trees still appear visually greenish, they can be detected by GI. As a result, GI gives a chance to recover the trees that NDVI missed due to shadow. Also, because living vegetation has relatively low reflectance in the visible range, its GI is lower and less pronounced than that of arbitrary green objects; the GI threshold should therefore be kept as low as possible.
However, GI also has limitations. First, it may detect artificial green objects, such as green roofs, pipes, etc. Second, it misses trees with non-green leaves, such as Red Maple. Figure 3(c) also illustrates these weaknesses well. In the green-squared area, the green pipes on top of the roof were picked up, whereas the NDVI result did not have this error. In the red-squared area, GI missed the red maples due to leaf colour, while the NDVI result easily picked them up.
Therefore, a few filtering and smoothing steps were used to separate the green artificial objects from the tree pixels, and the result was merged with the NDVI output. Figure 3(d) is the final processed segmentation image. The process can be summarized in three steps: (1) filter out segments smaller than 20 pixels (0.72 m2) in both the NDVI and GI images; this removes detected objects too small to be a tree and significantly filters out background grass and other small non-tree vegetation; (2) merge the NDVI and GI images; (3) smooth the merged image, mainly along the edges, and fill the gaps within the segments. The output segmentation image is used as a filter in the next step, shape signature collection; it is therefore very important to preserve the completeness of the tree crowns.
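The three-step merge above can be sketched as follows. This is an illustrative implementation only, assuming per-band reflectance arrays (`nir`, `red`, `green`, `blue`) and the thresholds quoted in the text (NDVI 0.2, GI 0.33, 20-pixel minimum); it is not the authors' actual code.

```python
import numpy as np
from scipy import ndimage

def tree_mask(nir, red, green, blue, min_pixels=20):
    """Sketch: merge NDVI and GI vegetation masks, drop tiny segments,
    then smooth edges and fill gaps (steps 1-3 in the text)."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    gi = green / (green + blue + red + 1e-9)
    ndvi_mask = ndvi >= 0.2          # NDVI vegetation threshold from the text
    gi_mask = gi > 0.33              # "green" pixels; recovers shadowed trees

    def drop_small(mask):
        # step 1: remove connected segments below the size threshold
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep = np.zeros(n + 1, dtype=bool)
        keep[1:] = sizes >= min_pixels
        return keep[labels]

    merged = drop_small(ndvi_mask) | drop_small(gi_mask)   # step 2: merge
    # step 3: close small gaps inside crowns and smooth the edges
    return ndimage.binary_closing(merged, structure=np.ones((3, 3)))
```

Filtering each mask before merging (rather than after) mirrors the text, where the small-segment filter is applied to both the NDVI and GI images.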

3.2. Classification

Since urban trees present a much more complete top view, whole-tree shape information is available and potentially important. In this study, longitudinal profiles (hereafter referred to as profiles) of the tree crown tops were also investigated. The profiles were obtained through the following procedure, demonstrated step by step in Figure 4 using a red maple. Step A: obtain the profile of the crown by recording all values along the direction of the sun azimuth angle across a segmented tree. Step B: cut off the edges (drop all zero-value pixels) and convert the end-to-end section to tree perimeter using the image scale and spatial resolution. The first few pixels may be impacted by nearby tree shadows; the first derivative of the raw curve was therefore calculated and the first high peak was eliminated. This step was later shown to significantly improve the robustness of the algorithm. The approach was inspired by pioneering work by Fourier et al. [2], which was mainly limited by the low spatial resolution of imagery at that time. The longitudinal profiles of the different tree species in the NIR band are summarized in Figure 5.
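Steps A and B can be sketched as below. The function name, arguments and the simple leading-peak heuristic are illustrative assumptions; the paper does not publish its implementation, and its derivative-based peak removal may differ in detail.

```python
import numpy as np

def longitudinal_profile(band, mask, row0, col0, azimuth_deg, px_size=0.06):
    """Sketch of Steps A and B: sample a segmented crown along the sun
    azimuth, trim zero-valued edges, drop the leading shadow peak, and
    convert sample index to distance in metres (6 cm pixels assumed)."""
    dr = -np.cos(np.radians(azimuth_deg))   # image rows grow downward
    dc = np.sin(np.radians(azimuth_deg))
    vals = []
    r, c = float(row0), float(col0)
    while 0 <= int(r) < band.shape[0] and 0 <= int(c) < band.shape[1]:
        # Step A: record values along the azimuth; zero outside the segment
        vals.append(band[int(r), int(c)] if mask[int(r), int(c)] else 0.0)
        r += dr
        c += dc
    vals = np.asarray(vals)
    nz = np.nonzero(vals)[0]
    if nz.size == 0:
        return np.array([]), np.array([])
    vals = vals[nz[0]:nz[-1] + 1]           # Step B: cut off zero edges
    # drop the first high peak of the derivative (shadow contamination);
    # searching only the leading quarter is an assumption of this sketch
    dv = np.diff(vals)
    if dv.size and dv.max() > 0:
        first_peak = int(np.argmax(dv[:max(1, dv.size // 4)]))
        vals = vals[first_peak + 1:]
    dist = np.arange(vals.size) * px_size   # metres along the crown
    return dist, vals
```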
The outlines of these plots are determined by the geometry of the illumination and viewing conditions, and by the tree shape. It is therefore essential to determine the effect of the illumination geometry, especially the Solar Zenith Angle (SZA) [35]. In this study, the SZA was determined by two different approaches: direct retrieval from the dataset and calculation from field measurements, such as a street-lamp height and its shadow length. It was 50° ± 3°. In Figure 5, one feature is common to all species: there is always a turning point, some more obvious than others, separating the profile into two portions. The first portion is generally higher in value than the second; the two portions represent the sunlit and shadowed crown surfaces of the same tree. From simple geometry, it is easy to see that the SZA directly controls the location of this turning point, which is, in theory, an approximation of the tangent point of the incoming ray on the tree crown surface. It can be approximated by Equation (3):
da = dm × (1 + cos(SZA)) / 2
where da is the distance of the turning point from the beginning of the profile, and dm is the diameter of the tree crown.
This estimation agrees well with the observations, which suggests the possibility of correcting images obtained at different SZA to a normalized value range. However, this is not the focus of this study. Within one image, the SZA is assumed to be constant across the entire scene, and any variations caused by the SZA are ignored.
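The geometry behind Equation (3) can be reconstructed by modelling the crown cross-section as a circle; the following derivation is our sketch of the "tangent point of the incoming ray" argument, not reproduced from the paper.

```latex
% Crown cross-section: circle of diameter d_m; the profile is measured
% from the sun-facing edge along the sun azimuth; \theta_s denotes the SZA.
\begin{align*}
\text{ray direction} &= (\sin\theta_s,\,-\cos\theta_s),\\
\text{surface normal at arc angle } \beta \text{ from the top} &= (\sin\beta,\,\cos\beta),\\
\text{sunlit where } \cos(\beta+\theta_s) > 0
  &\;\Longleftrightarrow\; \beta < 90^\circ-\theta_s,\\
d_a = \frac{d_m}{2}\bigl(1+\sin(90^\circ-\theta_s)\bigr)
  &= d_m\,\frac{1+\cos(\mathrm{SZA})}{2}.
\end{align*}
```

At SZA ≈ 50° this places the turning point at roughly 0.82 dm from the sunlit edge, consistent with the sunlit (first) portion of the profile being the longer, higher-valued one.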
Returning to Figure 5, these profiles represent a few key features of the trees at a fixed SZA, such as the general outline of the tree shape and the smoothness of the leaf surface. To quantify the profile geometry, a linear best fit, a second-order polynomial fit and a modified free-form curve approximation were calculated from the profiles in this study. Characterizing free-form curves, such as Bezier curves and B-spline curves, is a classic problem in geometric design, and numerous methods have been proposed over the past decades [36–39]. One popular approach is to sample the given curve as a sequence of points and approximate the distance between the points; an important goal is to reduce the number of control points used. In this study, since the curves are much simpler, with known and predictable regularities, a modified approach adapting the triangle-forming concept from vectorization [40] was used. Instead of assigning multiple nodes, only one inter-curve node was used, chosen to minimize the linear fit error on both sides of the triangle, as illustrated in Figure 6. Two parameters were calculated: the normal distance (d) from the node to the baseline, which is the segment joining the two end points, and the position (p), given as the ratio of the two segments a and b:
p = a / b
If the curve is flat, close to a linear fit, the d value is fairly low and the p value is negative. If the curve is irregular, the d value increases dramatically and p is close to 1. If the curve is left skewed, the p value is less than 1, while a right-skewed curve results in a value greater than 1.
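The single-node triangle fit can be sketched as follows. This is an assumption-laden illustration: the exhaustive node search, the perpendicular-distance formula and the unsigned p = a/b are our reading of the description (the paper's signed-p convention for flat curves may differ).

```python
import numpy as np

def triangle_fit(profile):
    """Sketch of the one-node triangle approximation: pick the interior
    node minimising the linear-fit error of the two sides, then return
    the normal distance d to the baseline (segment joining the two end
    points) and the position ratio p = a / b."""
    n = len(profile)
    x = np.arange(n, dtype=float)
    x0, y0, x1, y1 = x[0], profile[0], x[-1], profile[-1]
    slope = (y1 - y0) / (x1 - x0)       # baseline through the end points

    def fit_err(i, j):
        # squared residual of a straight-line fit through points i..j
        if j - i < 2:
            return 0.0
        seg_x, seg_y = x[i:j + 1], np.asarray(profile[i:j + 1])
        coef = np.polyfit(seg_x, seg_y, 1)
        return float(np.sum((np.polyval(coef, seg_x) - seg_y) ** 2))

    best = min(range(1, n - 1), key=lambda i: fit_err(0, i) + fit_err(i, n - 1))
    # perpendicular distance from the chosen node to the baseline
    d = abs(profile[best] - (y0 + slope * (x[best] - x0))) / np.sqrt(1 + slope**2)
    p = (x[best] - x0) / (x1 - x[best])     # a / b
    return d, p
```

A symmetric peak yields p = 1, while a peak near the start of the profile ("left skewed") yields p < 1, matching the behaviour described in the text.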
In this study, a knowledge-based decision tree was constructed. Decision tree classification is considered a fairly robust and reliable approach [41]. All available parameters, including spectral information, texture information, tree size and height, and the geometry of the profiles, were taken into consideration; however, only the most effective ones were used in the end. The selection of the tree nodes was based on the following criteria: (1) pick at least one feature from each category; (2) use only the most effective features, those with clearly distinguishing values; (3) minimize the number of decision tree nodes used.
All spectral bands were included as candidates. Due to atmospheric absorption, the visible bands suffered significant signal loss, so only the NIR band was selected. Figure 7 shows the mean, upper and lower values for each species. NIR values of 160 and 150 were used to separate the conifers and deciduous trees, with Oak as an exception; any value between 150 and 160 is subject to further evaluation. Among the vegetation indices, the GRI (Equation (5)) [33] proved the most effective at separating conifer and deciduous trees, mainly competing with NDVI. The NIR and GRI combination can identify conifer and deciduous trees with strong features. The spectral and VI information was used as the first level of screening:
GRI = (G − R) / (G + R)
It is not too difficult to separate the deciduous trees from the conifers. However, it can be quite challenging to distinguish Pine from Spruce. A combination of the conditions “left skewed” (0 < p < 1/3) and “high d value” (d > 100) was used in this study: an interesting left-skewed peak was noticed in the Spruce profiles but not in the Pine.
To separate the four deciduous tree species, the “left skewed” criterion was applied first (0 < p < 1/2). This mainly separates the Maple/Birch and Ash/Oak groups. In this study, the Ash are mainly young trees and as such share a highly complex crown structure with the Oaks. The next criterion examined was whether the curve is flat (d < 25). The Maples have low d values due to their big leaves and maturity; the Ash have low d due to their young age and relatively small size. This separation is also confirmed by the linear regression R2: the broad Maple leaves generate a much smoother surface than Ash, so Maple R2 is over 0.8 while Ash would hardly exceed 0.7. All four species can be separated at this stage; however, the result can be further secured by examining the second-order polynomial fit regression coefficient. The final “knowledge-based” decision tree is shown in Figure 8. An external terminator (an iteration counter) settles the rare cases in which the same node is visited twice. This can only happen in “highly spectrally mixed” cases, most likely caused by the Oak inter-class variety; these special cases were therefore assigned to Oak so that no cases remain “undecided”.
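The thresholds above can be collected into a sketch of the knowledge-based tree. The node order, the GRI sign convention for the ambiguous 150–160 NIR zone, and the exact combination of d and R2 tests are simplifying assumptions of this sketch; the paper's Figure 8 (and its iteration-counter terminator) may differ.

```python
def classify_tree(nir, gri, p, d, r2):
    """Sketch of the knowledge-based decision tree using the thresholds
    quoted in the text. Assumption: negative GRI flags conifers in the
    ambiguous 150-160 NIR band (the paper's arbitration is not given)."""
    conifer = nir < 150 or (nir <= 160 and gri < 0)
    if conifer:
        # Spruce shows a left-skewed, high-d peak; Pine does not
        return "Spruce" if (0 < p < 1/3 and d > 100) else "Pine"
    if 0 < p < 1/2:                       # left skewed: Maple/Birch group
        # Maple: flat profile (low d) and smooth broad-leaf surface (R2 > 0.8)
        return "Maple" if (d < 25 and r2 > 0.8) else "Birch"
    return "Ash" if d < 25 else "Oak"     # Ash: young/small; Oak: complex crown
```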
The decision tree can also be constructed based on statistical analysis of the dataset. One such approach is based on the “gain criterion”, which originated from Hunt’s information theory [42]. This mathematical decision tree construction was also conducted in this study for comparison. The results were obtained using the commercial software C5.0 [43,44] under the license of the York University Earth Observation Laboratory.
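For reference, the gain-ratio criterion behind C4.5/C5.0 split selection can be sketched as follows; this is a minimal illustration of the criterion, not the C5.0 implementation itself.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a class-label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels, threshold):
    """Gain ratio of the binary split 'value <= threshold': information
    gain normalised by the split information, as in the C4.5 family."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    remainder = sum(len(s) / n * entropy(s) for s in (left, right) if s)
    gain = entropy(labels) - remainder
    split_info = entropy(["L"] * len(left) + ["R"] * len(right))
    return gain / split_info if split_info else 0.0
```

A split that separates the classes perfectly scores 1.0, while an uninformative split scores 0.0; C5.0 greedily picks the attribute and threshold with the highest ratio at each node.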

4. Results and Discussions

Out of the 200 trees observed on the ground overall, 142 belonged to the 6 selected species. A “knowledge-based” decision tree was first generated by randomly picking 50% of the data (70 trees in total, due to rounding within the different species). This random picking was repeated a few times, and no significant variation was found. All 142 trees were then analysed by this “knowledge-based” decision tree to generate the classification result shown in Table 1. The overall accuracy (OA) was 84.5% and the Cohen’s kappa coefficient was 80.6%.
For comparison, the commercial decision tree software C5.0 was used to generate its own trees and classification results. Since C5.0 requires training data, the 142 trees used in the “knowledge-based” tree classification were divided approximately equally into two groups: one group (70 trees) was used for training while the other (72 trees) was used for validation. To better demonstrate the contributions of different types of information, different combinations of information were used and different results were generated.
First, only the 4-band spectral signature approach was used for classification. Results are shown in Table 2. The features used were the Blue band (100%), NIR band (100%), Green band (50%) and Red band (32%). The percentage indicates at which level and how often a feature was used; 100% means it was used at the first level, with all candidates passing through that node. The overall accuracy (OA) was 48.6% and the Cohen’s kappa coefficient was 35.7%.
Next, the vegetation indices and texture information were added to the band signatures. Results are shown in Table 3. The features used were NDVI (100%), Blue band texture (60%), size (57%) and GRI (37%). The overall accuracy (OA) was 75.0% and the Cohen’s kappa coefficient was 68.7%.
Last, the profile-derived indices were added, making the input a combination of profile, vegetation index, texture and band signature information. The results are shown in Table 4. The features used were d (100%), p (100%), NDVI (88%), GRI (55%) and linear R2 (27%). The overall accuracy (OA) was 86.1% and the Cohen’s kappa coefficient was 82.6%.
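The OA and kappa statistics reported throughout this section follow the standard confusion-matrix definitions, which can be sketched as:

```python
import numpy as np

def oa_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference species, columns = classified species)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)              # OA, kappa
```

Kappa discounts the agreement expected by chance from the class proportions, which is why it sits a few points below OA in Tables 1–4.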
The normal distance (d), position ratio (p) and linear R2 are derived mainly from the profile information. NDVI and GRI are also important and were selected in Table 4 as expected. Surprisingly, texture was completely ignored in the C5.0 decision tree. The benefits of introducing profile-based information are obvious: the classification result improved for every species compared to Table 3, which includes all information except profiles. Table 3 delivers a decent 75% total accuracy and shows a feature-selection trend similar to the “knowledge-based” tree, which tries to avoid directly using spectral bands; its features were vegetation indices (NDVI and GRI), a physical parameter (size) and spatial information (texture). This strongly agrees with the initial motivation of this study: for uncalibrated imagery with nearby building shadow issues, spectral information is the least reliable, and information independent of the spectral bands minimizes the negative impact and can be expected to improve classification.

5. Summary and Conclusions

As the results in Section 4 show, longitudinal profiles proved to be valuable additional information for improving individual tree species classification with very high spatial resolution airborne imagery. For both the designed and the C5.0-generated decision trees, the overall classification accuracies involving profiles were 84.5% and 86.1%, much better than the trees generated without profiles (75% or less). The longitudinal profile approach is particularly suitable for high spatial resolution imagery in the urban environment. It is clear that tree shape is a strong signature for species recognition. In an urban environment, trees tend to grow into their natural shape due to low competition; they are more isolated and have clear boundaries, so end-to-end longitudinal profiles are not as difficult to obtain as in forestry cases. The profile information directly represents tree shape from the more favourable side view. It is not affected by nearby building shadows, which adds robustness against the spectral signature issues, and it can quickly estimate the level of tree crown variation, which is highly correlated with texture, leaf size and leaf shape, without the spectral variation problems. These features are very important for high spatial resolution imagery without calibration.
In this study, one important improvement is the separation of Pine and Spruce, which results directly from including profile information in the classification. Pine and Spruce have similar spectra and both have needle leaves, which is a challenge for spectral classification, but the high spatial resolution profile can capture the needle orientation and branch differences to improve the classification. Among the other species, Maple was the most stable species, estimated at a reasonable accuracy (62.5% or higher), followed by Ash. In this study, most of the Maple and Ash trees on the ground were at a similar growth stage. Most Maples were mature and healthy, so a reasonable retrieval accuracy was expected. The Ash, on the other hand, were young and healthy but suffered significantly from the shadow issue. Once GI was implemented to reduce the segmentation error, the profile information could strongly compensate for the spectral differences caused by shadow.
In forestry cases, trees are subject to strong competition with neighbouring trees. Airborne imagery can normally see only the tops of the crowns, which is the focus of current shape recognition studies. The competitive growing conditions dramatically change the outlines of the trees, increasing the difficulty of using shape information effectively in classification.
Due to the volume of the ground data, only limited validation was conducted. The profile approach cannot escape the biggest challenge in species classification using remote sensing data: the variability between trees of the same species in properties derived from remote sensing data (within-class variability), and the inter-class similarity. It should be applied together with the other information to maximize classification accuracy. Coefficient reassessment and “knowledge-based” tree modification may be needed when applying this method to a new study area. The results here are more a demonstration of the potential of using longitudinal profiles in classification, and more validation work is needed in future studies. The end-to-end profiles were specifically designed for the urban tree cases in this study; however, it is still potentially possible to apply them in forestry studies using the top portion only. Whether a partial profile can reconstruct the side-view tree shape is a valuable question for future investigation.

Acknowledgments

The authors are grateful for the financial support through research grants provided by the Natural Sciences and Engineering Research Council (NSERC) of Canada. The authors wish to thank Linhai Jin for his kind help with MATLAB programming, and Jerusha Lederman and Justin Robinson for overall proofreading.

References

  1. Tooke, T.R.; Coops, N.C.; Goodwin, N.R.; Voogt, J.A. Extracting urban vegetation characteristics using spectral mixture analysis and decision tree classifications. Remote Sens. Environ 2009, 113, 398–407. [Google Scholar]
  2. Fournier, R.A.; Edwards, G.; Eldridge, N.R. A catalogue of potential spatial discriminators for high spatial resolution digital images of individual crowns. Can. J. Remote Sens 1995, 21, 285–298. [Google Scholar]
  3. Waser, L.T.; Ginzler, C.; Kuechler, M.; Baltsavias, E.; Hurni, L. Semi-automatic classification of tree species in different forest ecosystems by spectral and geometric variables derived from Airborne Digital Sensor (ADS40) and RS30 data. Remote Sens. Environ 2011, 115, 76–85. [Google Scholar]
  4. Franklin, S.E.; Wulder, M.A.; Gerylo, G.R. Texture analysis of IKONOS panchromatic data for Douglas-fir forest age class separability in British Columbia. Int. J. Remote Sens 2001, 22, 2627–2632. [Google Scholar]
  5. Brandtberg, T. Individual tree-based species classification in high spatial resolution aerial images of forests using fuzzy sets. Fuzzy Sets Syst 2002, 132, 371–387. [Google Scholar]
  6. Hill, D.A.; Leckie, D.G. (Eds.) International Forum: Automated Interpretation of High Spatial Resolution Digital Imagery for Forestry; Pacific Forestry Centre: Victoria, BC, Canada, 10–12 February 1998.
  7. Leckie, D.G.; Gougeon, F.A.; Walsworth, N.; Paradine, D. Stand delineation and composition estimation using automated individual tree crown analysis. Remote Sens. Environ 2003, 88, 355–369. [Google Scholar]
  8. Erikson, M. Species classification of individually segmented tree crowns in high resolution aerial images using radiometric and morphologic image measures. Remote Sens. Environ 2004, 91, 469–477. [Google Scholar]
  9. Larsen, M. Single tree species classification with a hypothetical multi-spectral satellite. Remote Sens. Environ 2007, 110, 523–532. [Google Scholar]
  10. Pena-Barragan, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ 2011, 115, 1301–1306. [Google Scholar]
  11. Coburn, C.A.; Roberts, A.C.B. A multiscale texture analysis procedure for improved forest stand classification. Int. J. Remote Sens 2004, 25, 4287–4308. [Google Scholar]
  12. Gebreslasie, M.T.; Ahmed, F.B.; van Aardt, J.A.N. Extracting structural attributes from IKONOS imagery for Eucalyptus plantation forests in KwaZulu-Natal, South Africa, using image texture analysis and artificial neural networks. Int. J. Remote Sens 2011, 32, 7677–7701. [Google Scholar]
  13. Haralick, R.M. Statistical and structural approaches to texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar]
  14. St-Louis, V.; Pidgeon, A.M.; Radeloff, V.C.; Hawbaker, T.J.; Tuceryan, M.; Jain, A.K. Handbook of Pattern Recognition and Computer Vision; World Scientific: Singapore, 1998. [Google Scholar]
  15. Dawson, B.R.P.; Parsons, A.J. Texture measures for the identification and monitoring of urban derelict land. Int. J. Remote Sens 1994, 15, 1259–1271. [Google Scholar]
  16. Rich, R.L.; Frelich, L.; Reich, P.B.; Bauer, M.E. Detecting wind disturbance severity and canopy heterogeneity in boreal forest by coupling high spatial resolution satellite imagery and field data. Remote Sens. Environ 2010, 114, 299–308. [Google Scholar]
  17. Klassen, E.; Srivastava, A.; Mio, W.; Joshi, S.H. Analysis of planar shapes using geodesic paths on shape spaces. IEEE Trans. Pattern Anal. Mach. Intell 2004, 26, 372–383. [Google Scholar]
  18. Kulikova, M.S. Shape Recognition for Image Scene Analysis. Ph.D. Thesis, Université de Nice Sophia-Antipolis, Nice, France, 2009. [Google Scholar]
  19. Leckie, D.G.; Gougeon, F.A.; Tinis, S.; Nelson, T.; Burnett, C.N.; Paradine, D. Automated tree recognition in old growth conifer stands with high resolution digital imagery. Remote Sens. Environ 2005, 94, 311–326. [Google Scholar]
  20. Larsen, M. Crown Modeling to Find Tree Top Positions in Aerial Photographs. Proceedings of the Third International Airborne Remote Sensing Conference and Exhibition, Copenhagen, Denmark, 7–10 July 1997.
  21. Larsen, M.; Rudemo, M. Optimizing templates for finding trees in aerial photographs. Pattern Recognit. Lett 1998, 19, 1153–1162. [Google Scholar]
  22. Larsen, M.; Rudemo, M. Approximate Bayesian estimation of a 3D point pattern from multiple views. Pattern Recognit. Lett 2004, 25, 1359–1368. [Google Scholar]
  23. Gougeon, F.A. A crown-following approach to the automatic delineation of individual tree crowns in high spatial resolution aerial images. Can. J. Remote Sens 1995, 21, 274–284. [Google Scholar]
  24. Gougeon, F.A.; Leckie, D.G. Forest Information Extraction From High Resolution Images Using an Individual Tree Crown Approach; Information Report BC-X-396; Natural Resources Canada, Canadian Forest Service: Victoria, BC, Canada, 2003. [Google Scholar]
  25. Wulder, M.A.; Hall, R.J.; Coops, N.C.; Franklin, S.E. High spatial resolution remotely sensed data for ecosystem characterization. BioScience 2004, 54, 511–521. [Google Scholar]
  26. Warner, T.A.; Lee, J.Y.; McGraw, J.B. Delineation and Identification of Individual Trees in the Eastern Deciduous Forest. Proceedings of the International Forum: Automated Interpretation of High Spatial Resolution Digital Imagery for Forestry, Victoria, BC, Canada, 10–12 February 1998.
  27. Barbezat, V.; Jacot, J. The CLAPA Project: Automated Classification of Forest with Aerial Photographs. Proceedings of the International Forum: Automated Interpretation of High Spatial Resolution Digital Imagery for Forestry, Victoria, BC, Canada, 10–12 February 1998.
  28. Brandtberg, T.; Walter, F. Automated delineation of individual tree crowns in high spatial resolution aerial images by multiple scale analysis. Mach. Vision Appl 1998, 11, 64–73. [Google Scholar]
  29. Jin, L.; Hu, B.; Noland, T.; Li, J. An individual tree crown delineation method based on multi-scale segmentation of imagery. ISPRS J. Photogramm 2011, 70, 88–98. [Google Scholar]
  30. Farrar, J.L. Trees in Canada; Fitzhenry and Whiteside Limited and the Canadian Forest Service Natural Resource Canada: Markham, ON, Canada, 1995. [Google Scholar]
  31. Kershaw, L. Trees of Ontario; Lone Pine Publishing: Edmonton, AB, Canada, 2001. [Google Scholar]
  32. Sibley, D.A. The Sibley Guide to Trees; Alfred A. Knopf: New York, NY, USA, 2009. [Google Scholar]
  33. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ 1979, 8, 127–150. [Google Scholar]
  34. Gitelson, A.; Merzlyak, M.N. Signature analysis of leaf reflectance spectra: Algorithm development for remote sensing of chlorophyll. J. Plant Physiol 1996, 148, 494–500. [Google Scholar]
  35. Puttonen, E.; Litkey, P.; Hyyppä, J. Individual tree species classification by illuminated–shaded area separation. Remote Sens 2010, 2, 19–35. [Google Scholar]
  36. Chen, X.; Ma, W.; Paul, J. Cubic B-spline curve approximation by curve unclamping. Comput.-Aided Des 2010, 42, 523–534. [Google Scholar]
  37. Chuang, S.-H.F.; Kao, C.Z. One-sided arc approximation of B-spline curves for interference-free offsetting. Comput.-Aided Des 1999, 31, 111–118. [Google Scholar]
  38. Shih, J.-L.; Chuang, S.-H.F. One-sided offset approximation of freeform curves for interference-free NURBS machining. Comput.-Aided Des 2008, 40, 931–937. [Google Scholar]
  39. Yuksel, C.; Schaefer, S.; Keyser, J. Parameterization and applications of Catmull-Rom curves. Comput.-Aided Des 2011, 43, 747–755. [Google Scholar]
  40. Zhang, S.; Li, L.; Seah, H. Vectorization of digital images using algebraic curves. Comput. Graph 1998, 22, 91–101. [Google Scholar]
  41. Heumann, B.W. An object-based classification of mangroves using a hybrid decision tree–support vector machine approach. Remote Sens 2011, 3, 2440–2460. [Google Scholar]
  42. Hunt, E.B. Artificial Intelligence; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  43. Quinlan, J.R. Decision Trees and Multi-Valued Attributes. In Machine Intelligence; Hayes, J.E., Michie, D., Richard, J., Eds.; Oxford University Press: Oxford, UK, 1988; Volume 11, pp. 305–318. [Google Scholar]
  44. Quinlan, J.R. C4.5 Programs for Machine Learning; Morgan Kaufmann Publishers: Los Altos, CA, USA, 1993. [Google Scholar]
Figure 1. True colour high spatial resolution image of York University, Toronto, Ontario, Canada.
Figure 2. Top left to right: Maple, Ash, and Birch; bottom left to right: Oak, Spruce, and Pine.
Figure 3. From left to right: (a) the true-colour original image with location indicated; (b) the NDVI-filtered image; (c) the GI-filtered image; (d) the merged NDVI+GI image after smoothing and shape filtering.
Figure 4. The procedure for obtaining the longitudinal profile of a tree crown.
Figure 5. The longitudinal profiles of the six different species.
Figure 6. (Top): the linear and second-order polynomial fits; (Bottom): the single-node triangles in the vectorization of the profiles.
Figure 7. The NIR (Left) and GRI (Right) mean and maximum/minimum boundaries for six tree species.
Figure 8. The knowledge-based decision tree constructed.
Table 1. Classification result using knowledge-based tree.
Species   Maple   Ash   Spruce   Pine   Oak   Birch
Maple        29     1        0      0     0       2
Ash           3    25        0      0     1       1
Spruce        0     0       10      3     0       2
Pine          0     0        4     20     1       0
Oak           1     0        0      0    24       1
Birch         0     0        2      0     0      12
OA = 84.5%
Table 2. The C5.0 decision tree classification results with 4-band spectral information only.
Species   Maple   Ash   Spruce   Pine   Oak   Birch
Maple        10     3        0      0     0       3
Ash           1    11        0      0     3       0
Spruce        3     3        2      0     0       0
Pine          0     2        0     10     1       0
Oak           6     5        0      0     1       1
Birch         3     0        2      0     1       1
OA = 48.6%
Table 3. The C5.0 decision tree classification results with spectral, VI, and texture information.
Species   Maple   Ash   Spruce   Pine   Oak   Birch
Maple        13     2        0      0     0       1
Ash           0    12        0      0     3       0
Spruce        1     1        6      0     0       0
Pine          0     1        1     10     1       0
Oak           1     1        0      0     9       2
Birch         1     0        1      0     1       4
OA = 75.0%
Table 4. The C5.0 decision tree classification results with all information (spectral, texture, profiles, etc.) combined.
Species   Maple   Ash   Spruce   Pine   Oak   Birch
Maple        14     1        0      0     0       1
Ash           0    14        0      0     1       0
Spruce        0     1        7      0     0       0
Pine          0     1        0     12     0       0
Oak           0     1        0      0    11       1
Birch         1     0        2      0     0       4
OA = 86.1%
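The overall accuracies (OA) reported in Tables 1–4 are the trace of the confusion matrix divided by the total sample count. For example, reproducing the Table 4 figure:

```python
# Confusion matrix transcribed from Table 4 (rows and columns ordered
# Maple, Ash, Spruce, Pine, Oak, Birch).
cm = [
    [14, 1, 0, 0, 0, 1],
    [0, 14, 0, 0, 1, 0],
    [0, 1, 7, 0, 0, 0],
    [0, 1, 0, 12, 0, 0],
    [0, 1, 0, 0, 11, 1],
    [1, 0, 2, 0, 0, 4],
]
correct = sum(cm[i][i] for i in range(len(cm)))  # diagonal sum: 62
total = sum(sum(row) for row in cm)              # all samples: 72
oa = correct / total                             # 62/72 ~ 86.1%
```

The same two lines of arithmetic reproduce the 84.5%, 48.6%, and 75.0% figures of Tables 1–3 from their respective matrices.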

Zhang, K.; Hu, B. Individual Urban Tree Species Classification Using Very High Spatial Resolution Airborne Multi-Spectral Imagery Using Longitudinal Profiles. Remote Sens. 2012, 4, 1741-1757. https://doi.org/10.3390/rs4061741
