Article

Combining Spectral Data and a DSM from UAS-Images for Improved Classification of Non-Submerged Aquatic Vegetation

1 Department of Aquatic Sciences and Assessment, Swedish University of Agricultural Sciences, SE-75007 Uppsala, Sweden
2 Department of Forest Resource Management, Swedish University of Agricultural Sciences, SE-90183 Umeå, Sweden
3 Department of Wildlife, Fish, and Environmental Studies, Swedish University of Agricultural Sciences, SE-90183 Umeå, Sweden
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(3), 247; https://doi.org/10.3390/rs9030247
Submission received: 30 November 2016 / Revised: 17 February 2017 / Accepted: 1 March 2017 / Published: 7 March 2017
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Abstract

Monitoring of aquatic vegetation is an important component in the assessment of freshwater ecosystems. Remote sensing with unmanned aircraft systems (UASs) can provide sub-decimetre-resolution aerial images and is a useful tool for detailed vegetation mapping. In a previous study, non-submerged aquatic vegetation was successfully mapped using automated classification of spectral and textural features from a true-colour UAS-orthoimage with 5-cm pixels. In the present study, height data from a digital surface model (DSM) created from overlapping UAS-images has been incorporated together with the spectral and textural features from the UAS-orthoimage to test if classification accuracy can be improved further. We studied two levels of thematic detail: (a) Growth forms including the classes of water, nymphaeid, and helophyte; and (b) dominant taxa including seven vegetation classes. We hypothesized that the incorporation of height data together with spectral and textural features would increase classification accuracy as compared to using spectral and textural features alone, at both levels of thematic detail. We tested our hypothesis at five test sites (100 m × 100 m each) with varying vegetation complexity and image quality using automated object-based image analysis in combination with Random Forest classification. Overall accuracy at each of the five test sites ranged from 78% to 87% at the growth-form level and from 66% to 85% at the dominant-taxon level. In comparison to using spectral and textural features alone, the inclusion of height data increased the overall accuracy significantly by 4%–21% for growth-forms and 3%–30% for dominant taxa. The biggest improvement gained by adding height data was observed at the test site with the most complex vegetation. Height data derived from UAS-images has a large potential to efficiently increase the accuracy of automated classification of non-submerged aquatic vegetation, indicating good possibilities for operative mapping.


1. Introduction

Unmanned aircraft systems (UASs) offer a potential data source for detailed surveying of aquatic vegetation with images having centimetre-level spatial resolutions [1]. At this high level of spatial resolution, distinguishing structural details of individual plants is possible, for example, floating leaves on the water surface. In a previous study [2], we showed that a true-colour UAS-orthoimage with 5-cm pixels allowed for automated classification of growth-forms and six dominant taxa of non-submerged aquatic vegetation in a lake in northern Sweden. In the previous study, we used object-based image analysis and a Random Forest classification based on spectral and textural features. Using polygons as the spatial assessment unit, overall accuracies ranged from 56% to 83% for the growth-form level and from 52% to 69% for the dominant-taxon level. We found that classification accuracy decreased with increasing vegetation complexity. One main reason for misclassification was the confusion of taxa that appeared similar in the UAS-orthoimage, but belonged to different growth forms (emergent versus floating-leaved). While floating-leaved vegetation generally grows a couple of centimetres above the water surface, emergent plants can reach a considerable vegetation height, for example, 1–4 m for Phragmites australis, 1–3 m for Schoenoplectus lacustris, and 0.3–1.5 m for Equisetum fluviatile [3].
Several remote sensing methods can provide three-dimensional (3D) data on surface and vegetation height, for example, LiDAR [4]. Stereoscopy, a method with a long tradition, uses pairs of photographs of the same area taken from different positions to measure the height of objects on the Earth’s surface [5]. Thanks to recent developments in digital photogrammetry and computer vision, in particular the Structure-from-Motion approach, dense 3D point clouds can be built directly from overlapping images, for example, from images taken from a UAS [6,7]. Cunliffe et al. [8] showed that this approach can resolve the vegetation structure even for small plants such as grasses and shrubs. A variety of software programs for UAS-image processing are available that can produce digital surface models (here called UAS-DSMs) and orthorectified image mosaics (here called UAS-orthoimages; [7]).
Most research on combining height data with spectral data for vegetation classification has been carried out in forestry (e.g., [9,10,11]). However, there are considerable differences in height and spatial extent between trees and aquatic plants. Reese et al. [12,13] studied the use of spectral data, elevation data (elevation above sea level, slope, and a wetness index), and 3D data from image matching (10 m × 10 m grid cells), as well as laser scanning (average point density of 1.4 m−2), for low-growing mountain vegetation classes including shrubs and grasses under 2 m in height. The overall classification accuracy increased with the inclusion of 3D data from both image matching and laser scanning compared to using spectral and elevation data alone. Gillan et al. [14] estimated the height of low-growing rangeland vegetation from aerial stereo images with 3-cm pixels and found a good correlation between field measurements and the vegetation height estimated from the images, even though the latter tended to be underestimated. Rampi et al. [15] combined LiDAR data (digital elevation model, DSM, compound topographic index, and intensity) and aerial imagery to map wetlands versus other land cover types in three ecoregions with an object-based image analysis approach and achieved high classification accuracies (>90%).
Height data derived from UAS-images, in most cases with Structure-from-Motion software, have been used, for example, for landslide detection [16] and segmentation of buildings [17]. Lechner et al. [18] mapped the vegetation extent of upland swamps surrounded by eucalyptus woodland in Australia. The UAS was equipped with two cameras to record the visible and infrared spectrum with a resolution of 4 cm. A UAS-DSM derived from the infrared images was used for an initial classification, which was then refined using spectral information from true-colour UAS-images, among other parameters. Kuria et al. [19] analysed seasonal vegetation changes in a Tanzanian wetland based on 0.8 m spatial resolution true-colour image data acquired with a UAS, a DSM derived from UAS-images, and commercial radar data. Thirteen land cover classes were identified, including several classes with emergent aquatic vegetation. The land cover classes were then combined into five generalized classes, which showed an overall accuracy of ~90% for the two analysed seasons [19]. Tamminga et al. [20] used a true-colour UAS-orthoimage with 2.5-cm pixels to map geomorphic and aquatic habitat features for river management. In addition to the mapping, a digital elevation model derived from the UAS-images was used for hydrodynamic modelling of water depth and velocity. Boon et al. [21] delineated wetlands and assessed wetland vegetation health in South Africa based on a true-colour 1.8-cm spatial resolution UAS-orthoimage in combination with height data from 3D point clouds (3.8-cm resolution in the wetland area and 29-cm resolution in the surrounding area). They found that the inclusion of height data significantly enhanced wetland delineation and classification. These studies demonstrate that height data can contribute critical information in the classification process and have the potential to increase classification accuracy as compared to the use of spectral data from UAS-images alone.
In this study, we test the hypothesis that the inclusion of height data derived from overlapping UAS-images, in the form of a UAS-DSM, in addition to spectral and textural features from a UAS-orthoimage, increases the classification accuracy of non-submerged aquatic vegetation at two levels of thematic detail, (a) growth forms and (b) dominant taxa, in comparison to a classification that used only spectral and textural features from the UAS-orthoimage [2]. In particular, we expect that the confusion between taxa that appear spectrally similar in the UAS-orthoimage, but belong to different growth forms, will be reduced.

2. Materials and Methods

2.1. Study Area, Image Acquisition, Test Sites, and Aquatic Plant Taxa

Lake Ostträsket (64°55′N, 21°02′E) is a boreal lake in northern Sweden with a surface area of 1.8 km2 [2]. The littoral zone of the lake was surveyed in August 2011 with a miniature fixed-wing aircraft of the flying-wing type, the Personal Aerial Mapping System (PAMS) by SmartPlanes AB (Skellefteå, Sweden) [2]. The PAMS was equipped with an off-the-shelf Canon Ixus 70® digital compact camera (Canon Inc., Tokyo, Japan) with a seven-megapixel sensor and an RGB colour filter (380–750 nm). The camera was fitted into the body of the aircraft and collected nadir image data. The flying height of 150 m resulted in a ground sampling distance of 5.6 cm. The along- and across-track image overlap was set to 70%. A true orthoimage was produced by an external image provider with Inpho® software (INPHO GmbH, Stuttgart, Germany) based on a high-resolution surface model with a grid size of 30 cm derived from a dense point cloud (100–400 points per m2), and was georeferenced using ground control points identified in aerial photographs from the Swedish National Land Survey (spatial resolution 0.5 m). The internal planar precision of the orthoimage (i.e., the relative errors within individual photo blocks) was 4–5 cm and the internal height precision was 8–9 cm. The pixel size of the produced orthoimage was 5 cm. More details on PAMS, the camera, weather conditions, and orthoimage production can be found in Husson et al. [2].
From the resulting true-colour UAS-orthoimage with 5-cm pixels, five test sites (100 m × 100 m each) were selected [2]. Four sites represented the natural variability of the lake with varying vegetation complexity (proportion of mixed vegetation stands, vegetation cover and density, and taxa composition). Vegetation complexity increased from Site I to Site IV: at Site I there were mainly single-taxa stands surrounded by open water, while Site IV was almost entirely covered by mixed vegetation. At the fifth test site, the UAS-orthoimage had poor image quality caused by a combination of wave action, blur, specular reflection of clouds, and sunglint. This test site showed medium vegetation complexity and was included to test the classification robustness given poor image quality.
Non-submerged aquatic plant taxa present at Lake Ostträsket include helophytes (i.e., emergent taxa: Equisetum fluviatile, Schoenoplectus lacustris, and Phragmites australis) as well as nymphaeids (i.e., floating-leaved taxa: Potamogeton natans, Sparganium spp., and Nymphaea/Nuphar spp.; [2], Figure S1). Sparse and dense stands of E. fluviatile looked very different and were therefore divided into two classes: sparse stands of E. fluviatile with 10%–50% area coverage (from now on called “sparse E. fluviatile”) and dense stands of E. fluviatile with >50% area coverage (from now on called “dense E. fluviatile”; [2]). Not all taxa were present at all test sites (Table 1). More details on Ostträsket, the five test sites, and the vegetation can be found in Husson et al. [2].
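As a worked illustration of the relation between flying height and pixel size, the ground sampling distance follows GSD = H·p/f, where H is the flying height, p the sensor pixel pitch, and f the focal length. The sketch below is minimal; the pixel pitch and focal length are assumed nominal values for a small-sensor compact camera of this class, chosen to be consistent with the reported flying height and GSD, not specifications given in the paper:

```python
def ground_sampling_distance(flying_height_m, pixel_pitch_um, focal_length_mm):
    """Return the ground sampling distance (GSD) in centimetres.

    GSD = H * p / f, with flying height H in metres, sensor pixel
    pitch p in micrometres, and focal length f in millimetres.
    """
    return flying_height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

# Assumed nominal camera values (illustrative only, not from the paper):
print(ground_sampling_distance(150.0, 2.1, 5.6))  # 5.625 -> ~5.6 cm, as reported
```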

2.2. Height Data

Point cloud data, from which vegetation height was determined for the five test sites, was derived from the original overlapping UAS-images using the Structure-from-Motion software PhotoScan® (Professional edition, v. 1.2.4, Agisoft LLC, St. Petersburg, Russia), because the original UAS-DSM from the orthoimage production by the external image provider was not available for this study. For each test site, the images of the respective flight block and the camera positions recorded during image acquisition were loaded into PhotoScan®. The user-specified accuracy of the camera coordinates was set to 10 m. Blurred images were excluded from the dataset and the images were aligned (accuracy: high; pair preselection: reference; key point limit: 40,000; tie point limit: 4000). Points with a low image count (≤2), a high reprojection error (≥0.45), or a low projection accuracy (≥40) were deleted. Camera alignment optimisation was then run in PhotoScan® and the surface was reconstructed (building of mesh (face count: low) and texture).
Ground control points with coordinates measured in the field were not available for our study. To minimize the displacement of the DSM relative to the orthoimage, which was georeferenced from the beginning, 12–25 reference points per test site were selected in the orthoimage and manually placed as markers for georeferencing in PhotoScan®. The reference points were evenly distributed throughout the whole test site, with additional reference points at the corners and in the centre of the image. The user-specified accuracy of the reference points was set to 0.005 m in PhotoScan®. We manually added reference points until the total XY error (average mean square error) of the reference points reached a minimum, indicating that the displacement between the DSM and the orthoimage was at a minimum. The displacement (i.e., the total XY error) at the five test sites was ≤5.6 cm. We did not have any field-based height measurements for these points. Therefore, we selected reference points at the water surface, mainly floating leaves, and assigned them a height of zero. Since we used the water surface as the zero height level, we also subtracted 15 m from the z-values of the recorded camera positions because Lake Ostträsket is located 15 m above sea level.
When the markers were placed, a dense point cloud was produced (quality setting: high; depth filtering: mild) that was used to build a DSM (Figure 1). Because of wave movement, and because the UAS-images were not taken simultaneously, image matching was difficult for open water surfaces and, at Site V, for some sparse vegetation stands in regions with poor image quality. We therefore disabled interpolation for the DSM production: initial tests showed that DSMs produced with interpolation displayed unreasonably high and/or low height values in areas with open water, and disabling interpolation reduced the areas with unreasonable height values. The produced DSMs for the five test sites had a ground sampling distance of ~9 cm and a point density of ≥106 points per m2. The UAS-DSMs were exported as GeoTiff-files with a pixel size of 5 cm in order to match the pixel size of the UAS-orthoimage.
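The tie-point quality filtering described above can be expressed as a simple mask over per-point statistics. The sketch below is a minimal illustration using the thresholds from this section; the arrays are hypothetical stand-ins for values an SfM package would export, not an actual PhotoScan® API call:

```python
import numpy as np

def filter_tie_points(image_count, reprojection_error, projection_accuracy):
    """Boolean mask keeping only reliable tie points.

    A point is deleted if it is seen in <= 2 images, has a reprojection
    error >= 0.45, or has a projection accuracy value >= 40 (the
    thresholds used in Section 2.2).
    """
    return (image_count > 2) & (reprojection_error < 0.45) & (projection_accuracy < 40)

# Made-up statistics for five tie points (illustrative only):
image_count = np.array([2, 5, 7, 3, 9])
reprojection_error = np.array([0.10, 0.50, 0.20, 0.30, 0.44])
projection_accuracy = np.array([12.0, 15.0, 45.0, 20.0, 30.0])
print(filter_tie_points(image_count, reprojection_error, projection_accuracy))
# [False False False  True  True]
```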

2.3. Object-Based Image Analysis and Accuracy Assessment

Object-based image analysis in combination with Random Forest classification was applied to the five test sites using the software eCognition Developer® (v. 9.1, Trimble Germany GmbH, Munich, Germany). In the current study, we performed the Random Forest classification at the growth-form and the dominant-taxon level based on spectral and textural features from the UAS-orthoimage as described in Husson et al. [2], with the only difference being that the UAS-DSM was loaded as an additional image layer in eCognition® and a new feature for classification was added: “Mean DSM” (i.e., the mean height value of each segment). The growth-form level included three classes: water, nymphaeid, and helophyte. For the classification at the dominant-taxon level, the water area was masked out [2]. We defined the dominant taxon as the taxon that covered the largest area in the respective vegetation stand. Seven classes were included according to the dominant taxa given in Table 2. To ensure full comparability between the current study and Husson et al. [2], we used the segments from Husson et al. [2], obtained by “Multiresolution Segmentation” based on the UAS-orthoimage alone.
For each test site, 40 training sample-segments per class were randomly selected from the site’s “Reference Maps”, which were manually produced by visual interpretation. More details on the production of the Reference Maps are provided in Husson et al. [2], including the classification scheme for visual interpretation with image examples for all taxa in the Supplementary Materials. At Site I, we selected only 30 training samples per dominant-taxon class because there were two classes with a low total number of segments (59 and 60 segments; [2]). The training sample-segments used in both studies were identical. For accuracy assessment, we also used exactly the same validation sample-segments as in Husson et al. ([2]; i.e., 350 randomly selected sample-segments per site). In cases where a class was represented by only a few segments, we randomly selected more samples so that there were at least ten segments for that class [2].
To evaluate the classification, a segment-based error matrix was produced. We calculated overall, Producer’s, and User’s accuracy, Cohen’s Kappa coefficient [22], as well as overall quantity disagreement and overall allocation disagreement [23]. Producer’s accuracy is the probability that a polygon belonging to a given category on the ground has also been labelled that category in the map, while User’s accuracy is the probability that a polygon classified into a given category actually represents that category on the ground. The Kappa statistic takes into account the fact that even assigning labels at random results in a certain degree of accuracy. The Kappa coefficient has, however, been criticised as being misleading and redundant in remote sensing applications [23,24]; for this reason we have also included quantity and allocation disagreement as an alternative to Kappa [23]. Overall quantity disagreement is defined as the difference between two data sets due to an imperfect match in the proportions of the mapped categories. Overall allocation disagreement is defined as the difference between two data sets due to an imperfect match between the spatial allocations of the mapped categories.
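As a minimal sketch of the Random Forest classification step described in this section, the snippet below appends a per-segment “Mean DSM” column to a spectral/textural feature table and trains a Random Forest. It uses scikit-learn and synthetic data purely for illustration; the study itself used the Random Forest implementation in eCognition®, and the feature names, array shapes, and number of trees here are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_segments = 120                           # hypothetical training sample-segments
X_spec_tex = rng.random((n_segments, 10))  # spectral/textural features per segment
mean_dsm = 2.0 * rng.random(n_segments)    # "Mean DSM": mean height per segment (m)
y = rng.integers(0, 3, n_segments)         # 0 = water, 1 = nymphaeid, 2 = helophyte

# Including height data amounts to appending one extra feature column.
X_with_height = np.column_stack([X_spec_tex, mean_dsm])

clf = RandomForestClassifier(n_estimators=500, random_state=0)  # tree count is an assumption
clf.fit(X_with_height, y)
print(clf.predict(X_with_height[:5]))
```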
Since the validation sample-segments varied in size, we also produced an area-based error matrix (related to the number of pixels inside the selected validation segments) and calculated the overall, Producer’s, and User’s accuracies to evaluate the map’s usability by assessing the correctly classified area as proposed by Radoux et al. [25].
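All of the evaluation metrics above can be derived from a single error matrix. Below is a small sketch of those computations, following Pontius and Millones [23] for the disagreement components; the matrix convention (rows = reference, columns = map) is our assumption. Passing segment counts gives the segment-based assessment, while passing the number of pixels inside the validation segments gives the area-based assessment:

```python
import numpy as np

def accuracy_metrics(cm):
    """Accuracy metrics from an error matrix cm (rows = reference, columns = map).

    Returns overall accuracy, per-class Producer's and User's accuracies,
    Cohen's Kappa, and the overall quantity and allocation disagreement of
    Pontius and Millones (2011). All values are proportions (0-1).
    """
    p = cm / cm.sum()                           # joint proportions
    ref, mapped = p.sum(axis=1), p.sum(axis=0)  # reference and map marginals
    diag = np.diag(p)

    overall = diag.sum()
    producers = diag / ref                      # correct / reference total per class
    users = diag / mapped                       # correct / mapped total per class

    p_chance = (ref * mapped).sum()             # expected chance agreement
    kappa = (overall - p_chance) / (1.0 - p_chance)

    quantity = np.abs(ref - mapped).sum() / 2.0  # mismatch in class proportions
    allocation = (1.0 - overall) - quantity      # remaining (spatial) disagreement
    return overall, producers, users, kappa, quantity, allocation

# Example with a hypothetical 3-class segment-count error matrix:
cm = np.array([[80, 5, 3], [6, 60, 10], [2, 8, 76]], dtype=float)
print(accuracy_metrics(cm))
```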

2.4. Statistical Analysis

We used non-parametric, one-sided Wilcoxon matched-pairs signed-rank tests [26] at both levels of thematic detail to test for a potential increase in classification accuracy due to the incorporation of height data. In other words, we compared the results from our previous study, which were based on spectral and textural features from the UAS-orthoimage only [2], to our new results, which were based on spectral and textural features from the UAS-orthoimage and height data from the UAS-DSM. For overall accuracy (both segment- and area-based), Kappa, overall quantity disagreement, and overall allocation disagreement, the sample size N was five and we used a significance level of α < 0.05. We also compared the Producer’s and User’s accuracies (both segment- and area-based) with and without height data at each level of thematic detail for all classes and sites combined. Here the sample size was larger (N = 15 for growth forms and N = 27 for dominant taxa) and we used three significance levels, namely * for α < 0.025, ** for α < 0.005, and *** for α < 0.0005. For the statistical analysis we used Statistica® software (v. 13, Dell Inc., Tulsa, OK, USA).
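A paired test of this kind is straightforward to reproduce; the sketch below applies SciPy's Wilcoxon signed-rank test to the segment-based growth-form overall accuracies from Table 2. Note that SciPy reports the rank sum of the positive differences, whereas the paper reports the smaller rank sum (T = 0), and the exact p-value may differ slightly from the one computed by Statistica:

```python
from scipy.stats import wilcoxon

# Overall accuracy (%) per test site, growth-form level, segment-based (Table 2):
with_height = [84.6, 87.1, 78.9, 77.7, 78.0]
without_height = [80.3, 83.4, 74.3, 56.3, 63.4]

# One-sided matched-pairs test: does adding height data increase accuracy?
stat, p = wilcoxon(with_height, without_height, alternative="greater")
print(stat, p)  # all five differences are positive, so p is small (< 0.05)
```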

3. Results

The use of height data combined with spectral and textural features resulted in better classification results at all test sites, as compared to using spectral and textural features alone. The increase in overall accuracy (segment- and area-based) was statistically significant at both levels of thematic detail (T5 = 0, p = 0.043 for all four tests).
At the growth-form level, the use of height combined with spectral and textural features gave an overall accuracy of 78%–87% (segment-based). For the five test sites, this represented an average increase of 10% (range 4%–21%, Table 2). The increase in accuracy was largest at Sites IV and V (Table 2). The correctly classified area after the inclusion of height data was 74%–95% (Table 2, area-based overall accuracy). For the five test sites, this represented an average increase of 7% (range 1%–20%, Table 2).
At the dominant-taxon level, the use of height data combined with spectral and textural features gave an overall accuracy of 66%–85% (segment-based). For the five test sites, this represented an average increase of 16% (range 3%–30%, Table 2). The correctly classified area after the inclusion of height data was 63%–86% (Table 2, area-based overall accuracy) which represented an average increase of 14% for the five test sites (range 2%–30%, Table 2). The largest increase (30%) in overall accuracy (segment- and area-based) was observed at Site IV (Table 2, Figure 2e,f), which was the site with the most complex vegetation and that showed the lowest overall classification accuracy in the study without height data [2]. In comparison to the other test sites, the increase at Site III was small (3% in the segment-based and 2% in the area-based assessment; Table 2). The orthoimages and produced vegetation maps for dominant taxa at Sites I, II, IV, and V are included in Figure 2.
Kappa increased at all test sites when the height data were added (Table 2). At both levels of thematic detail, the increase in Kappa was statistically significant (T5 = 0, p = 0.043 for both tests). Overall quantity disagreement increased slightly at Site V (Table 2), but decreased at all other sites (Table 2); however, the difference was not statistically significant (growth-form level: T5 = 1.5, p = 0.110; dominant-taxon level: T5 = 2.0, p = 0.140). Overall allocation disagreement decreased significantly at all test sites and levels of thematic detail (Table 2; T5 = 0, p = 0.043 for both levels of thematic detail). At both levels of thematic detail, the overall quantity disagreement was larger than the overall allocation disagreement at the majority of sites. Complete error matrices for both levels of thematic detail at all test sites, and differences between error matrices from the classifications with and without height data, are included in the Supplementary Material (Table S1).
Producer’s and User’s accuracies at the growth-form level increased by including height data for the majority of classes and sites (Table 3), and the overall increases of segment- and area-based Producer’s and User’s accuracies were statistically significant (T and p values given in Table 3). The largest increases were observed for the classes of nymphaeid and helophyte at Sites IV and V (Table 3). For water, Producer’s and User’s accuracies were only marginally affected by the inclusion of height data (Table 3).
Producer’s and User’s accuracies at the dominant-taxon level varied between classes and sites (Table 4). For the majority of classes and sites, Producer’s and User’s accuracies increased when height data were included (Table 4), and the overall increases of segment- and area-based Producer’s and User’s accuracies were statistically significant (T and p values given in Table 5). The class of P. australis (emergent, present at Sites IV and V) showed the largest increase in Producer’s accuracy (Table 4). At Site IV, the reduced confusion of all taxa with P. australis resulted in increased User’s accuracies (≥12%) for all classes except dense E. fluviatile, which increased by only 1% (Table 4 and Table S1). Also at Site IV, the confusion between floating-leaved P. natans and emergent S. lacustris was reduced (Table S1). At Site V, it was mainly the confusion between floating-leaved Nymphaea/Nuphar spp. and emergent P. australis that was reduced (Table S1). At Sites I and II, the confusion of emergent S. lacustris with all other taxa was reduced (Table S1). In the study based on spectral and textural features alone, the two E. fluviatile classes, P. natans, and Sparganium spp. in all cases had a lower User’s accuracy than Producer’s accuracy, indicating a large number of false inclusions [2]. The inclusion of height data reduced most differences between the User’s and Producer’s accuracies, but User’s accuracies were still lower than Producer’s accuracies in all cases except for sparse E. fluviatile at Site III (Table 4). For the two classes that were most reliably classified without height data, Nymphaea/Nuphar spp. and S. lacustris, the inclusion of height data further increased both Producer’s and User’s accuracies at all sites (generally between 0.3% and 44%; Table 4).

4. Discussion

When height data from a UAS-DSM were included in the classification along with spectral and textural features from a UAS-orthoimage, the classification accuracy was higher than when spectral and textural features alone were used. The inclusion of height data successfully reduced the confusion between taxa that belonged to different growth forms (emergent and floating-leaved). In contrast to the study based on spectral and textural features alone [2], where emergent P. australis was confused with all other taxa, it now belonged to the most reliably classified taxa. P. australis was the tallest taxon in our classification [3]. It has a bright green colour (similar to Nymphaea/Nuphar spp., Sparganium spp., and sun-exposed S. lacustris) which is easy to detect against a darker background of, for example, dark green vegetation or water.
In the previous study [2], the classification of S. lacustris was also problematic due to the high variation in spectral appearance within this taxon. Typically, at the edge of S. lacustris stands, stems are bent and more exposed to the sun than those in the interior of the stands, making them appear brighter green than the majority of S. lacustris stems. In addition, we observed areas of low cover within S. lacustris stands, where stems were probably straighter than the majority of S. lacustris stems; these areas appeared darker. Therefore, in the classification based on spectral and textural features alone, S. lacustris was often confused with other taxa, which decreased the User’s accuracy, especially of taxa that covered smaller total areas than the relatively common S. lacustris. With the inclusion of height data, the classification of S. lacustris was more reliable.
Compared to the other two emergent taxa, E. fluviatile was the least affected by the inclusion of height data. E. fluviatile had the lowest vegetation height among the emergent taxa in this study [3]. The two E. fluviatile classes were mainly confused with each other and with S. lacustris (also emergent) but, especially at Site III, they were also confused with Nymphaea/Nuphar spp. (floating-leaved). The inclusion of height data could not substantially reduce the confusion between these classes even though they belong to different growth forms. E. fluviatile has a straight, needle-like shape, and its fine leaves, which are also needle-like, were almost invisible in the UAS-images. At Site III, E. fluviatile rarely formed stands that were dense enough to completely obscure the water surface, and it largely formed mixed vegetation stands with Nymphaea/Nuphar spp. This probably led to a larger variation in mean height values for E. fluviatile-covered segments at this site. In our previous study, we found that Site III had the highest proportion of misclassifications where a non-dominant taxon was wrongly classified as the dominant taxon. E. fluviatile was the only emergent taxon present at Site III, which explains the relatively small increase in classification accuracy at this test site.
The addition of height data also helped to reduce the confusion among emergent taxa, for example, between P. australis and S. lacustris (Sites IV and V), between P. australis and E. fluviatile (Site IV), and between S. lacustris and E. fluviatile (Sites I, II, and IV). This indicates that the UAS-DSM successfully resolved height differences even smaller than those between emergent and floating-leaved taxa.
The reduced confusion among growth forms and taxa resulted in very good classification accuracies (at least ~80%) at most test sites, both at the growth-form and the dominant-taxon level, including the site with the largest vegetation complexity. At Site V, the test site with low image quality, the overall classification accuracy also increased, indicating that the inclusion of height data further increased the robustness of the classification process. These results show good potential for operative mapping and monitoring of aquatic vegetation at a high level of thematic detail and in an automated way, which is important for covering larger areas time-efficiently.
Conventional orthoimage production is often based on less dense point clouds from which objects above ground level, such as vegetation or buildings, are removed. Our suggested method requires the production of true orthoimages with unfiltered dense point clouds (ca. >100 points per m2). This implies increased operator time and computational costs compared to conventional orthoimages. Thanks to recent developments in digital photogrammetry and computer vision, an increasing number of user-friendly software programs for UAS-image processing are available. The calculation of 3D point clouds from overlapping UAS-images is usually integrated in the workflow of these programs and is typically part of the production of a true UAS-orthoimage with high spatial accuracy. The height information from image-based dense point clouds generated with Structure-from-Motion software based on algorithms such as Semi-Global Matching [27] can be comparable to that from airborne LiDAR [28], and even low-cost systems such as a UAS equipped with a consumer-grade camera allow for surface reconstructions of high quality when the image overlap is high [29]. Once a dense point cloud is constructed, the production of a UAS-DSM is only a minor effort. Given the large increase in accuracy achieved in our study, UAS-DSMs have a large potential to efficiently improve classification results when combined with spectral data.
For our study, the UAS-DSMs for the test sites were produced after the original UAS-orthoimage production and independently of the orthoimage. This was done because the original UAS-orthoimage was produced by an external image processing company and the point cloud was not initially made available for our study. The independent production of the UAS-orthoimage and UAS-DSMs resulted in a spatial displacement between the two, which is a potential source of error in our classification. The spatial displacement varied from site to site but had an average of 4.5 cm and a maximum of 5.6 cm, which is about the size of one pixel (5 cm). For future studies, we recommend producing the UAS-DSM together with the UAS-orthoimage to avoid such co-registration errors.
Ground control points with height measurements were missing in our study. This was solved by using the water surface as the zero level. In our study, the internal relative differences in vegetation height were most important since our focus was on the classification of different growth-forms with relatively large differences in vegetation height. The maximum vegetation heights in the UAS-DSMs of the five test sites are plausible. However, when the UAS-DSM is to be used for the extraction of absolute height values, for example, for biomass estimation, height measurements of reference points in the field are recommended for calibration.

5. Conclusions

At five test sites with varying vegetation complexity and image quality at a lake in northern Sweden, the inclusion of height data in addition to spectral and textural features from a true-colour UAS-orthoimage with 5-cm pixels significantly increased the classification accuracy of non-submerged aquatic vegetation as compared to using spectral and textural features alone. The inclusion of height data reduced the confusion between taxa that appeared similar in the image but belonged to different growth forms (emergent and floating-leaved), as well as among emergent taxa. The overall classification accuracy increased to at least ~80% at all five test sites for growth forms and at four out of five test sites for dominant taxa. The combination of true-colour UAS-orthoimages with height data derived from dense 3D point clouds from image matching has a large potential to efficiently increase the accuracy of automated classification. This approach improves the possibilities for operative mapping and monitoring of non-submerged aquatic vegetation at a high level of thematic detail covering larger areas such as entire lakes.

Supplementary Materials

The following are available online at www.mdpi.com/2072-4292/9/3/247/s1, Figure S1: Field images of taxa present at Ostträsket, Table S1: Error matrices for all test sites at all levels of thematic detail and differences between error matrices from the classifications with and without height data.

Acknowledgments

We thank Jonas Bohlin and Olle Hagner for technical support and advice regarding DSM production. This work was funded by the Swedish Research Council Formas (reference number 2014-1425), and was conducted as part of the European project ImpactMin (project number 244166 in FP7-ENV-2009-1) as well as the research programme WATERS, funded by the Swedish Environmental Protection Agency (Dnr 10/179) and Swedish Agency for Marine and Water Management.

Author Contributions

All authors conceived and designed the study; E.H. derived the digital surface models and analysed the image data; E.H. wrote the paper with the contributions of H.R. and F.E.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this article:
DSM    Digital surface model
LiDAR  Light detection and ranging
OBIA   Object-based image analysis
PAMS   Personal Aerial Mapping System
RGB    Red green blue
3D     Three-dimensional
UAS    Unmanned aircraft system

References

  1. Anderson, K.; Gaston, K.J. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front. Ecol. Environ. 2013, 11, 138–146.
  2. Husson, E.; Ecke, F.; Reese, H. Comparison of manual mapping and automated object-based image analysis of non-submerged aquatic vegetation from very-high-resolution UAS images. Remote Sens. 2016, 8, 724.
  3. Mossberg, B.; Stenberg, L. Den Nya Nordiska Floran; Wahlström & Widstrand: Aurskog, Norway, 2006.
  4. Tempfli, K.; Kerle, N.; Huurneman, G.C.; Janssen, L.L.E. Principles of Remote Sensing; ITC: Enschede, The Netherlands, 2009.
  5. Colwell, R. Manual of Photographic Interpretation; American Society of Photogrammetry: Washington, DC, USA, 1960.
  6. Madden, M.; Jordan, T.; Bernardes, S.; Cotten, D.L.; O’Hare, N.; Pasqua, A. Unmanned aerial systems and structure from motion revolutionize wetlands mapping. In Wetlands: Applications and Advances; Tiner, R.W., Lang, M.W., Klemas, V.V., Eds.; CRC Press: Boca Raton, FL, USA, 2015; pp. 195–219.
  7. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
  8. Cunliffe, A.M.; Brazier, R.E.; Anderson, K. Ultra-fine grain landscape-scale quantification of dryland vegetation structure with drone-acquired structure-from-motion photogrammetry. Remote Sens. Environ. 2016, 183, 129–143.
  9. Ke, Y.H.; Quackenbush, L.J.; Im, J. Synergistic use of Quickbird multispectral imagery and Lidar data for object-based forest species classification. Remote Sens. Environ. 2010, 114, 1141–1154.
  10. Nordkvist, K.; Granholm, A.-H.; Holmgren, J.; Olsson, H.; Nilsson, M. Combining optical satellite data and airborne laser scanner data for vegetation classification. Remote Sens. Lett. 2012, 3, 393–401.
  11. Granholm, A.H.; Olsson, H.; Nilsson, M.; Allard, A.; Holmgren, J. The potential of digital surface models based on aerial images for automated vegetation mapping. Int. J. Remote Sens. 2015, 36, 1855–1870.
  12. Reese, H.; Nordkvist, K.; Nyström, M.; Bohlin, J.; Olsson, H. Combining point clouds from image matching with Spot 5 multispectral data for mountain vegetation classification. Int. J. Remote Sens. 2015, 36, 403–416.
  13. Reese, H.; Nyström, M.; Nordkvist, K.; Olsson, H. Combining airborne laser scanning data and optical satellite data for classification of alpine vegetation. Int. J. Appl. Earth Obs. Geoinf. 2014, 27, 81–90.
  14. Gillan, J.K.; Karl, J.W.; Duniway, M.; Elaksher, A. Modeling vegetation heights from high resolution stereo aerial photography: An application for broad-scale rangeland monitoring. J. Environ. Manag. 2014, 144, 226–235.
  15. Rampi, L.P.; Knight, J.F.; Pelletier, K.C. Wetland mapping in the upper midwest United States: An object-based approach integrating Lidar and imagery data. Photogramm. Eng. Remote Sens. 2014, 80, 439–448.
  16. Al-Rawabdeh, A.; He, F.N.; Moussa, A.; El-Sheimy, N.; Habib, A. Using an unmanned aerial vehicle-based digital imaging system to derive a 3D point cloud for landslide scarp recognition. Remote Sens. 2016, 8, 95.
  17. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Segmentation of UAV-based images incorporating 3D point cloud information. In Proceedings of the Joint ISPRS Conference on Photogrammetric Image Analysis (PIA) and High Resolution Earth Imaging for Geospatial Information (HRIGI), Munich, Germany, 25–27 March 2015.
  18. Lechner, A.M.; Fletcher, A.; Johansen, K.; Erskine, P. Characterising upland swamps using object-based classification methods and hyper-spatial resolution imagery derived from an unmanned aerial vehicle. In Proceedings of the XXII ISPRS Congress, Melbourne, Australia, 25 August–1 September 2012.
  19. Kuria, D.N.; Menz, G.; Misana, S.; Mwita, E.; Thamm, H.; Alvarez, M.; Mogha, N.; Becker, M.; Oyieke, H. Seasonal vegetation changes in the Malinda wetland using bi-temporal, multi-sensor, very high resolution remote sensing data sets. Adv. Remote Sens. 2014, 3, 33–48.
  20. Tamminga, A.; Hugenholtz, C.; Eaton, B.; Lapointe, M. Hyperspatial remote sensing of channel reach morphology and hydraulic fish habitat using an unmanned aerial vehicle (UAV): A first assessment in the context of river research and management. River Res. Appl. 2015, 31, 379–391.
  21. Boon, M.A.; Greenfield, R.; Tesfamichael, S. Wetland assessment using unmanned aerial vehicle (UAV) photogrammetry. In Proceedings of the XXIII ISPRS Congress, Prague, Czech Republic, 12–19 July 2016.
  22. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46.
  23. Pontius, R.G.; Millones, M. Death to kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429.
  24. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57.
  25. Radoux, J.; Bogaert, P.; Fasbender, D.; Defourny, P. Thematic accuracy assessment of geographic object-based image classification. Int. J. Geogr. Inf. Sci. 2011, 25, 895–911.
  26. Zar, J.H. Biostatistical Analysis; Prentice-Hall Inc.: Upper Saddle River, NJ, USA, 1999.
  27. Hirschmüller, H. Stereo processing by semi-global matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 328–341.
  28. Leberl, F.; Irschara, A.; Pock, T.; Meixner, P.; Gruber, M.; Scholz, S.; Wiechert, A. Point clouds: Lidar versus 3D vision. Photogramm. Eng. Remote Sens. 2010, 76, 1123–1134.
  29. Haala, N.; Cramer, M.; Rothermel, M. Quality of 3D point clouds from highly overlapping UAV imagery. In Proceedings of the Conference on Unmanned Aerial Vehicles in Geomatics (UAV-g), Rostock, Germany, 4–6 September 2013.
Figure 1. Examples of produced digital surface models for test site I (a) and test site IV (b).
Figure 2. True-colour orthoimage and dominant-taxon classification based on height data combined with spectral and textural features of test sites I (a and b), II (c and d), IV (e and f), and V (g and h).
Table 1. Taxa present at each test site given in order of decreasing area in which the respective taxon was dominant (i.e., it was the taxon with the highest % cover).

Site  Taxa
I     Schoenoplectus lacustris, Nymphaea/Nuphar spp., Equisetum fluviatile (dense), E. fluviatile (sparse), Potamogeton natans, Sparganium spp.
II    S. lacustris, Nymphaea/Nuphar spp., P. natans, Sparganium spp., E. fluviatile (sparse), E. fluviatile (dense)
III   Nymphaea/Nuphar spp., E. fluviatile (sparse), E. fluviatile (dense), P. natans, Sparganium spp.
IV    Phragmites australis, S. lacustris, P. natans, Nymphaea/Nuphar spp., E. fluviatile (dense), E. fluviatile (sparse)
V     S. lacustris, P. natans, P. australis, Nymphaea/Nuphar spp.
Table 2. Overall classification accuracy (%, segment-, and area-based), Cohen’s Kappa coefficient, overall quantity disagreement (%), and overall allocation disagreement (%) for the classification based on height data combined with spectral and textural features for test sites I–V presented by the level of thematic detail (growth forms and dominant taxa). For comparison, the results from our previous study [2] based on spectral and textural features alone were reproduced in this table. Bold numbers indicate a statistically significant difference with the inclusion of height data (N = 5, α < 0.05).

                                            Growth Forms                    Dominant Taxa
Site                                        I     II    III   IV    V       I     II    III   IV    V
Overall accuracy (segment-based)
  Height, spectral, and textural features   84.6  87.1  78.9  77.7  78.0    79.1  81.1  66.4  81.9  85.4
  Spectral and textural features            80.3  83.4  74.3  56.3  63.4    68.6  64.7  63.0  52.2  65.4
  Difference                                4.3   3.7   4.6   21.4  14.6    10.5  16.4  3.4   29.7  20.0
Overall accuracy (area-based)
  Height, spectral, and textural features   94.7  92.9  82.3  76.6  74.3    82.7  85.3  63.2  82.1  86.2
  Spectral and textural features            93.6  91.6  77.8  56.4  64.8    72.7  70.3  61.0  51.8  74.6
  Difference                                1.0   1.2   4.5   20.2  9.5     9.9   15.0  2.3   30.2  11.7
Cohen’s Kappa coefficient
  Height, spectral, and textural features   0.75  0.76  0.58  0.54  0.64    0.68  0.67  0.38  0.74  0.81
  Spectral and textural features            0.69  0.69  0.51  0.25  0.40    0.54  0.46  0.34  0.39  0.54
  Difference                                0.07  0.07  0.07  0.29  0.24    0.14  0.21  0.04  0.35  0.27
Overall quantity disagreement
  Height, spectral, and textural features   6.9   6.9   11.1  18.6  11.4    10.7  10.8  20.7  13.5  6.0
  Spectral and textural features            8.0   7.4   14.6  36.3  10.9    14.6  23.1  22.7  30.8  2.9
  Difference                                −1.1  −0.6  −3.4  −17.7 0.6     −3.9  −12.2 −2.0  −17.3 3.1
Overall allocation disagreement
  Height, spectral, and textural features   8.6   6.0   10.0  3.7   10.6    10.2  8.1   12.9  4.7   8.6
  Spectral and textural features            11.7  9.1   11.1  7.3   25.7    16.8  12.2  14.3  17.0  31.7
  Difference                                −3.1  −3.1  −1.1  −3.7  −15.1   −6.6  −4.2  −1.4  −12.4 −23.1
Table 3. Producer’s and User’s accuracy (%, segment-, and area-based), number of validation segments, and total area of validation segments (m2) for the classification of growth-forms based on height data combined with spectral and textural features presented by test site and class: water (W), nymphaeid (N), and helophyte (H). For comparison, the results from our previous study [2] based on spectral and textural features alone were reproduced in this table. Bold numbers indicate a statistically significant increase when height data were included and stars indicate the level of significance (* for α < 0.025, ** for α < 0.005, and *** for α < 0.0005). Valid N is the total number of differences with either a positive or negative sign.

                                            Site I         Site II        Site III       Site IV        Site V         Wilcoxon Signed-Rank Test
                                            W    N    H    W    N    H    W    N    H    W    N    H    W    N    H    Valid N  T   p
Producer’s accuracy (segment-based)
  Height, spectral, and textural features   83   88   84   89   68   95   65   83   75   90   90   74   93   88   65   12       0   0.002 **
  Spectral and textural features            81   82   79   89   59   92   63   78   69   90   82   49   93   68   53
User’s accuracy (segment-based)
  Height, spectral, and textural features   98   65   88   86   94   86   83   93   43   25   58   98   38   88   87   14       11  0.009 *
  Spectral and textural features            97   58   84   89   84   82   85   92   35   25   33   92   39   74   63
No. of validation segments                  106  68   176  57   87   206  52   246  52   10   71   274  28   163  159
Producer’s accuracy (area-based)
  Height, spectral, and textural features   97   87   89   96   63   96   90   74   80   92   89   73   96   86   62   12       0   0.002 **
  Spectral and textural features            97   83   87   96   54   96   89   70   68   92   77   49   96   69   56
User’s accuracy (area-based)
  Height, spectral, and textural features   99   64   91   96   94   90   90   93   58   35   57   98   32   93   88   15       8   0.003 **
  Spectral and textural features            99   59   89   97   89   88   87   92   49   35   30   92   34   80   73
Total area of validation segments (m2)      458  37   177  162  39   176  145  121  66   18   42   201  30   120  168
Table 4. Producer’s and User’s accuracy (%, segment-, and area-based), number of validation segments, and total area of validation segments (m2) for the classification of dominant taxa based on height data combined with spectral and textural features for Sites I–V presented by vegetation class. For comparison, the results from our previous study [2] based on spectral and textural features alone were reproduced in this table. Vegetation classes have been abbreviated as follows: Equisetum fluviatile (E.f.), Nymphaea/Nuphar spp. (N./N. spp.), Phragmites australis (P.a.), Potamogeton natans (P.n.), Schoenoplectus lacustris (S.l.), and Sparganium spp. (S. spp.). Bold and italic numbers indicate an increase of ≥5% when height data were included. A dash (–) marks classes not present at a site.

Site I                                      Sparse E.f.  Dense E.f.  N./N. spp.  P.a.  P.n.  S.l.  S. spp.
Producer’s accuracy (segment-based)
  Height, spectral, and textural features   60           56          83          –     80    83    50
  Spectral and textural features            60           48          79          –     80    68    40
User’s accuracy (segment-based)
  Height, spectral, and textural features   45           44          93          –     53    97    16
  Spectral and textural features            41           33          78          –     47    92    11
No. of validation segments                  15           27          95          –     10    206   10
Producer’s accuracy (area-based)
  Height, spectral, and textural features   76           43          86          –     91    87    48
  Spectral and textural features            76           41          83          –     91    74    31
User’s accuracy (area-based)
  Height, spectral, and textural features   38           49          91          –     72    98    13
  Spectral and textural features            33           33          71          –     66    95    8
Total area of validation segments (m2)      12           24          50          –     10    214   5

Site II                                     Sparse E.f.  Dense E.f.  N./N. spp.  P.a.  P.n.  S.l.  S. spp.
Producer’s accuracy (segment-based)
  Height, spectral, and textural features   50           80          71          –     20    90    70
  Spectral and textural features            50           70          70          –     20    65    70
User’s accuracy (segment-based)
  Height, spectral, and textural features   21           47          93          –     11    97    35
  Spectral and textural features            12           29          84          –     6     92    27
No. of validation segments                  10           10          90          –     10    230   10
Producer’s accuracy (area-based)
  Height, spectral, and textural features   63           61          73          –     13    91    72
  Spectral and textural features            63           59          71          –     13    72    72
User’s accuracy (area-based)
  Height, spectral, and textural features   26           44          89          –     5     99    33
  Spectral and textural features            15           18          81          –     3     96    23
Total area of validation segments (m2)      7            4           40          –     4     205   5

Site III                                    Sparse E.f.  Dense E.f.  N./N. spp.  P.a.  P.n.  S.l.  S. spp.
Producer’s accuracy (segment-based)
  Height, spectral, and textural features   50           59          70          –     50    –     70
  Spectral and textural features            48           59          66          –     50    –     70
User’s accuracy (segment-based)
  Height, spectral, and textural features   54           28          95          –     13    –     30
  Spectral and textural features            44           24          94          –     13    –     32
No. of validation segments                  40           22          273         –     12    –     10
Producer’s accuracy (area-based)
  Height, spectral, and textural features   64           71          61          –     72    –     67
  Spectral and textural features            62           71          57          –     72    –     67
User’s accuracy (area-based)
  Height, spectral, and textural features   80           37          95          –     18    –     36
  Spectral and textural features            75           32          95          –     20    –     38
Total area of validation segments (m2)      61           22          131         –     11    –     7

Site IV                                     Sparse E.f.  Dense E.f.  N./N. spp.  P.a.  P.n.  S.l.  S. spp.
Producer’s accuracy (segment-based)
  Height, spectral, and textural features   90           20          66          100   94    59    –
  Spectral and textural features            80           40          61          48    88    43    –
User’s accuracy (segment-based)
  Height, spectral, and textural features   30           10          87          93    79    97    –
  Spectral and textural features            17           9           68          81    38    69    –
No. of validation segments                  10           10          41          170   33    100   –
Producer’s accuracy (area-based)
  Height, spectral, and textural features   98           13          59          100   96    74    –
  Spectral and textural features            92           23          56          43    89    51    –
User’s accuracy (area-based)
  Height, spectral, and textural features   32           12          82          95    82    96    –
  Spectral and textural features            19           11          70          83    34    77    –
Total area of validation segments (m2)      9            12          22          95    22    97    –

Site V                                      Sparse E.f.  Dense E.f.  N./N. spp.  P.a.  P.n.  S.l.  S. spp.
Producer’s accuracy (segment-based)
  Height, spectral, and textural features   –            –           95          100   76    74    –
  Spectral and textural features            –            –           55          53    78    73    –
User’s accuracy (segment-based)
  Height, spectral, and textural features   –            –           83          89    85    84    –
  Spectral and textural features            –            –           49          53    79    80    –
No. of validation segments                  –            –           78          85    96    91    –
Producer’s accuracy (area-based)
  Height, spectral, and textural features   –            –           93          100   77    85    –
  Spectral and textural features            –            –           49          62    79    84    –
User’s accuracy (area-based)
  Height, spectral, and textural features   –            –           82          83    87    89    –
  Spectral and textural features            –            –           53          57    77    87    –
Total area of validation segments (m2)      –            –           41          51    89    134   –
Table 5. Valid N-, T-, and p-values from one-sided Wilcoxon matched pairs signed-rank tests for Producer’s and User’s accuracy (segment- and area-based) at the dominant-taxon level for all classes and sites combined. Bold numbers indicate a statistically significant increase when height data were included and stars indicate the level of significance (* for α < 0.025, ** for α < 0.005, and *** for α < 0.0005). Valid N is the total number of differences with either a positive or negative sign.

                                      Valid N   T    p
Producer’s accuracy (segment-based)   19        18   0.00194 **
User’s accuracy (segment-based)       27        5    0.00001 ***
Producer’s accuracy (area-based)      20        20   0.00151 **
User’s accuracy (area-based)          27        8    0.00001 ***
