Article

Structure from Motion (SfM) Photogrammetry with Drone Data: A Low Cost Method for Monitoring Greenhouse Gas Emissions from Forests in Developing Countries

1 Department of Geoinformatics and Surveying, University of Zimbabwe, P.O. Box MP 167 Mount Pleasant, Harare, Zimbabwe
2 School of GeoSciences, University of Edinburgh, Drummond Street, Edinburgh EH8 9XP, UK
3 Centre for Ecology and Hydrology, Maclean Building, Benson Lane, Crowmarsh Gifford, Wallingford, Oxfordshire OX10 8BB, UK
4 Environment and Sustainability Institute, University of Exeter, Penryn, Cornwall TR10 9FE, UK
* Author to whom correspondence should be addressed.
Forests 2017, 8(3), 68; https://doi.org/10.3390/f8030068
Submission received: 21 December 2016 / Revised: 17 February 2017 / Accepted: 22 February 2017 / Published: 3 March 2017

Abstract: Structure from Motion (SfM) photogrammetry applied to photographs captured from Unmanned Aerial Vehicle (UAV) platforms is increasingly being utilised for a wide range of applications, including structural characterisation of forests. The aim of this study was to undertake a first evaluation of whether SfM from UAVs has potential as a low cost method for forest monitoring within developing countries in the context of Reducing Emissions from Deforestation and forest Degradation (REDD+). The project evaluated SfM horizontal and vertical accuracy for measuring the height of individual trees. Aerial image data were collected for two test sites, Meshaw (Devon, UK) and Dryden (Scotland, UK), using a Quest QPOD fixed wing UAV and a DJI Phantom 2 quadcopter UAV, respectively. Comparisons were made between SfM and airborne LiDAR point clouds and surface models at the Meshaw site, while at Dryden, SfM tree heights were compared to ground measured tree heights. Results showed a strong correlation between SfM and LiDAR digital surface models (R2 = 0.89) and canopy height models (R2 = 0.75). However, at Dryden, a poor correlation was observed between SfM tree heights and ground measured heights (R2 = 0.19). The poor results at Dryden were explained by the fact that the forest plot had a closed canopy structure, such that SfM failed to generate enough below-canopy ground points. Finally, an evaluation of UAV surveying methods was also undertaken to determine their usefulness and cost-effectiveness for plot-level forest monitoring. The study concluded that although SfM from UAVs performs poorly in closed canopies, it can still provide a low cost solution in those developing countries where forests have sparse canopy cover (<50%), with individual tree crowns and ground surfaces well-captured by SfM photogrammetry. Since more than half of the forest-covered areas of the world have canopy cover <50%, we conclude that SfM has enormous potential for forest mapping in developing countries.

1. Introduction

Although a significant decrease in the global deforestation rate has been noted in the last decade, in many developing nations the deforestation rate remains very high [1]. Reducing greenhouse gas (GHG) emissions from deforestation has long been identified as having the greatest potential for global climate change mitigation [2]. This is particularly the case in developing tropical countries, where the largest source of GHG emissions is land use change from forest loss [3]. In 2005, the United Nations Framework Convention on Climate Change (UNFCCC) initiated a process to investigate how the concept of Reducing Emissions from Deforestation and forest Degradation (REDD) could help combat the challenge of climate change due to GHG emissions in forest-rich developing countries [4]. Through REDD, countries should gain economic incentives for demonstrating quantifiable carbon emission reductions from protecting their forests [5]. Fundamental to the success of REDD, however, is the availability of robust and consistent methodologies for monitoring, reporting and verification (MRV) so that the incentives paid out can be evidence-based and linked directly to the amount of carbon emission reduction [3]. As REDD initiatives in tropical countries continue to develop, the need for a forest monitoring system that is both low cost and accurate is imperative, especially for the many developing countries where funding for such forest monitoring activities may not always be readily available.
Existing low-cost alternatives to field-based methods for monitoring forest cover and change rely on satellite observations. Landsat multispectral imagery provides a zero-cost data source that offers the capacity to straightforwardly map forest cover and forest cover change (e.g., [6]). However, this passive imagery is not able to determine carbon stock with sufficient accuracy [7]. Radar observations can estimate carbon stock and change, but are challenging to process effectively. For areas of semi-arid sparse woodland, both of these methods are limited due to the spatial resolution of freely available data—to achieve higher spatial mapping accuracies would require data that are more expensive and less readily available.
Structure from Motion (SfM) photogrammetry using digital cameras on small, low-cost Unmanned Aerial Vehicles (UAVs) is therefore a potentially cost-effective alternative for areas of woodland where the woody cover is sparse or patchy (such as many of the semi-arid savanna landscapes of sub-Saharan Africa). SfM has emerged recently as an inexpensive method for extracting the 3D structure of a scene from multiple overlapping photographs using bundle adjustment procedures [8]. The ability of SfM to generate high quality 3D point clouds similar to those generated from Aerial Laser Scanning (ALS) is now widely understood and has been demonstrated in a number of studies [9,10,11]. Its potential to characterise forest structure has long been recognised, but has been hampered by the difficulty SfM algorithms have in accurately matching images of densely vegetated areas [12,13]. Until recent developments in 3D vision (namely the Graphics Processing Unit (GPU) and parallel computing), the complex image matching algorithms used in SfM were deemed impractical [14]. The upsurge in the use of UAVs within the environmental sciences has also made it practical to acquire highly redundant, fine spatial resolution (>5 megapixels) aerial photographs with large overlap (>80%) at low cost. A range of studies has demonstrated how SfM photogrammetry can be used to generate accurate Digital Surface Models (DSMs) over canopies using high resolution images from consumer-grade cameras mounted on these low altitude platforms [15,16,17]. The utility of SfM was successfully demonstrated in [18] with high resolution images obtained from a kite platform for estimating Above Ground Biomass (AGB) at the plot level. In [19], UAV imagery was used to generate dense point clouds over forested areas to demonstrate a low cost alternative to LiDAR, and in [20], the authors created ‘hybrid’ Canopy Height Models (CHMs) from SfM DSMs and LiDAR Digital Elevation Models (DEMs) which were comparable to LiDAR CHMs. However, most of these studies reported poor performance in closed canopies.
This paper focuses on evaluating the capability of SfM photogrammetry applied to aerial photography from small (<20 kg mass) Unmanned Aerial Vehicles (Figure 1) as a potential low cost solution for REDD monitoring within developing countries. The success of SfM is governed by image resolution (which in turn depends on the quality of the camera and lens used), the degree of image overlap, and the relative motion of the camera with respect to the scene [21]. This makes small UAVs an ideal platform for SfM because they operate only a few tens of metres above the ground, providing data with sub-decimetre spatial resolution, orders of magnitude finer than space-borne sensors, with the capability to resolve individual trees and plants for biomass estimation [22]. Recently, there has been an increase in the number of studies developing low cost UAVs for forestry applications; for example, the authors of [23] developed a conservation drone for <US$2000. While UAVs do not offer global or national-level coverage, as satellites or large aircraft do, they are generally considered cheaper to use than airborne platforms when focused over comparatively small areas. Their portability and ease of use also allow the user to carry out surveys according to local requirements, thereby offering better temporal resolution than most other platforms.
Remote Sensing (RS) has previously been identified as a possible solution for REDD monitoring because of its potential to be low cost and to provide global coverage, making it cost-effective at the national scale for many forest countries [27]. Typically, RS techniques for quantifying AGB use one of two main methods: (i) deriving a statistical relationship between AGB and a remotely sensed variable, or (ii) assigning typical AGB values to different land cover classifications [28]. In optical RS, vegetation indices such as the Normalised Difference Vegetation Index (NDVI) have been successfully used to infer carbon stocks on a global scale using regression-based models (e.g., [29]). Although the data used in most of these studies are free (e.g., Landsat) or of high temporal resolution (e.g., MODIS), they are also of low spatial resolution (usually ≥30 m), which makes them unsuitable for estimating AGB at finer scales, e.g., at plot level [30]. Higher resolution data (e.g., IKONOS) are very expensive, not cost-effective for small projects, and not always readily available for all areas [31]. Data acquisition by space-borne optical sensors is also subject to cloud cover and illumination constraints [32]. Synthetic Aperture Radar (SAR) systems have successfully used radar backscatter to infer biomass and to extract forest metrics and species types [33,34]. SAR sensors do not rely on solar illumination and are not affected by cloud cover [35]. However, the data are technically challenging to process, and SAR generally does not perform well in dense canopies due to early saturation [36]. Temporal resolution is also an issue with SAR, as with most space-borne techniques, since the end-user has no control over the timing of data acquisition. Airborne LiDAR has become the method of choice for forestry applications because of its ability to generate 3D point clouds with centimetre accuracy [37]. The point clouds allow for the extraction of forest metrics (e.g., canopy height), which have been used extensively to infer forest biomass at both plot level and individual tree level (e.g., [38]). For biomass studies, LiDAR is now considered superior to optical sensors because it can penetrate the woody canopy to better establish the terrain surface, and it is not affected by solar illumination, cloud cover (when clouds are high enough to fly below them) or cloud shadow [39]. However, LiDAR data are very expensive and not cost-effective for small applications, as they require mobilisation of an airborne platform that is not always geographically close to the forested areas. Repeating a LiDAR survey regularly to achieve a suitable temporal resolution is thus not an option for many users of LiDAR data [40]. While LiDAR systems are becoming smaller and more compact, they remain orders of magnitude more expensive than a small UAV with a digital camera.
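As a concrete illustration of method (i) above, the short sketch below fits a simple regression of AGB against NDVI, in the spirit of the regression-based models cited. It is a minimal example: the NDVI formula is standard, but the plot values and fitted coefficients are purely hypothetical, not data from this study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards against divide-by-zero

# Hypothetical field plots: mean NDVI per plot and measured AGB in t/ha
plot_ndvi = np.array([0.21, 0.35, 0.48, 0.55, 0.62, 0.71])
plot_agb = np.array([12.0, 34.0, 55.0, 71.0, 88.0, 104.0])

# Ordinary least squares fit: AGB = a * NDVI + b
a, b = np.polyfit(plot_ndvi, plot_agb, 1)
pred = a * plot_ndvi + b
r2 = 1 - np.sum((plot_agb - pred) ** 2) / np.sum((plot_agb - plot_agb.mean()) ** 2)
print(f"AGB = {a:.1f} * NDVI + {b:.1f} (R2 = {r2:.2f})")
```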
Thus, the main challenges for most RS solutions that hamper REDD monitoring for developing countries are associated with cost, temporal resolution and spatial resolution. UAVs and SfM have a strong potential for offering a local solution which addresses most, if not all, of the identified challenges.
This study therefore had two major aims in evaluating the output of SfM photogrammetry applied to UAV data for estimating tree height: (i) to compare SfM derivatives (i.e., point clouds, Digital Elevation Models (DEMs) and Canopy Height Models (CHMs)) with corresponding LiDAR derivatives under open canopy conditions, and (ii) to compare SfM-derived CHMs with ground-measured tree heights under closed canopy conditions. This research addresses the literature gap concerning the application of SfM to UAV aerial photography as a potential low cost solution for REDD monitoring in developing countries.

2. Materials and Methods

2.1. Test Site and Data Collection

Two test sites were chosen for this study: Meshaw, Devon, UK (50°57′2″ N 3°46′09″ W) and the University of Edinburgh’s Dryden Farm, Scotland, UK (55°51′40″ N 3°09′00″ W) (Figure 2). The site at Meshaw, Devon is located in a relatively topographically flat agricultural area, with fields surrounded by trees that exhibit a sparse canopy structure. This site was used to evaluate the SfM photogrammetry approach in open canopy situations. The Dryden site, located near Roslin in Midlothian, is a small forest plot (2.3 ha) comprising dominant Sycamore (Acer pseudoplatanus) and Scots pine (Pinus sylvestris) trees. Compared to the Meshaw site, the Dryden site had a dense canopy structure making it suitable for evaluating the performance of SfM photogrammetry in closed canopies.
Individual tree heights were measured in situ at Dryden only, for comparison with SfM photogrammetry. Small circular plates were placed around the site to serve as Ground Control Points (GCPs) for later geo-referencing of the SfM models. Seventeen GCP coordinates were measured using a Trimble GeoXR Differential GPS and later post-processed in RTKPost [41]. Heights and locations of 62 randomly selected trees were measured using a Vertex II Forester Hypsometer and GPS, respectively, between the 2nd and 3rd of July 2015. Tree locations were later matched manually using the site ortho-photograph. In addition to tree heights, complementary measurements of diameter at breast height (DBH, at 1.3 m from the ground) were made for each tree.

2.2. LiDAR Data

The LiDAR data used in this study were provided by the Tellus South West Project [42], a collaborative research project involving the Natural Environment Research Council, British Geological Survey, British Antarctic Survey, Centre for Ecology and Hydrology and the University of Exeter. The data were acquired during the ‘leaf-on’ season between July and August 2013 using an Optech ALTM 3100 EA laser scanner mounted on a BAS Twin Otter aircraft. The survey comprised 26 flight lines over an area of 9424 km2 covering Cornwall and Devon, with a planned overlap of 300 m between swaths. The point density was 1 hit per m2 and the vertical accuracy was 25 cm. The dataset was processed using Terrascan software (Terrasolid Ltd., Helsinki, Finland) to derive a 1 m resolution DTM [43] and DSM [44].

2.3. UAV Surveys

At Meshaw, UAV aerial photography was acquired using an ungimballed SONY NEX-7 24.3 megapixel camera (Sony Corporation, Tokyo, Japan) mounted on a fixed wing Quest QPod UAV (ungimballed fixings increase the variation in image orientations, which helps mitigate systematic error [45]), in a series of autonomous missions flown in May 2015. The data were acquired as part of a separate study: a subset of 111 images was captured as the fixed wing UAV flew a parallel strip flight pattern over the site at 100 m elevation. UAV flight logs for the mission were also provided, describing camera trigger points and flight attitude data during the survey. For the Meshaw site, no ground-measured GCPs were available, so we identified natural landmarks in the aerial mosaic and extracted their 3D coordinates from the LiDAR DSM [18]. These consisted of building corners and road intersections that were clearly visible in the ortho-photograph.
A GoPro HERO 3+ Black 12.0 megapixel camera mounted on a DJI Phantom 2 quadcopter was used at Dryden. The Phantom flew two fully autonomous missions at different altitudes (Table 1) in a parallel strip pattern in order to achieve adequate and consistent ground coverage [46]. The GoPro camera was configured to capture images at 0.5 s intervals in order to achieve an overlap of >80% (both end lap and side lap) [21]. Mission planning was done using the PC Ground Station mission planning software [47] and involved defining an area of interest (AOI) over a Google Maps image and specifying the flying altitude and speed.
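As a sanity check on such settings, the sketch below estimates the forward (end-lap) overlap from flying height, speed and capture interval, assuming a nadir-pointing camera over flat terrain. The 90° along-track field of view is an illustrative value, not a calibrated GoPro specification.

```python
import math

def forward_overlap(altitude_m, speed_ms, interval_s, fov_deg):
    """Fraction of overlap between consecutive frames (nadir camera, flat terrain)."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)  # ground footprint (m)
    baseline = speed_ms * interval_s                                  # distance between exposures (m)
    return 1 - baseline / footprint

# Dryden mission parameters from Table 1: 50 m altitude, 5 m/s, 0.5 s interval
print(f"forward overlap ~ {forward_overlap(50, 5, 0.5, 90):.0%}")  # ~98% with a 90 degree FOV
```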

2.4. Structure from Motion and Multi-View Stereo Reconstruction

Images acquired at Dryden were manually selected by visual assessment in order to remove bad images (i.e., blurred images or those captured outside the AOI). Further selection was then applied so that the retained images conformed to a 5 s interval, which reduced the number of images from 999 down to 123 while theoretically maintaining an 80% overlap; this was done to reduce the SfM/MVS processing time. To maintain the ambition of low-cost operation, open source SfM photogrammetry software (VisualSFM v0.5.25 (Changchang Wu, Seattle, WA, USA) [48] and CMP-MVS v0.5 (Michal Jancosek, Prague, Czech Republic) [49] for the SfM/MVS process) was used to process the data. In addition, prior to uploading into VisualSFM, the pixel resolution of photographs from both sites had to be reduced (to 1280 × 960 pixels and 1200 × 900 pixels for Meshaw and Dryden, respectively) in order to match the default maximum dimension threshold of VisualSFM [48], which also cuts down on processing time [8]. The SfM workflow in VisualSFM comprised four main processes: (i) detecting and matching distinct features in overlapping images; (ii) generating a sparse point cloud; (iii) clustering the sparse point cloud; and (iv) densifying the sparse point cloud [50]. These processes were executed using three different algorithms, namely SiftGPU [51,52], Clustering Views for Multi-view Stereo (CMVS) and Patch-based Multi-view Stereo (PMVS2) [53], all of which are packaged as part of VisualSFM. This sequence of processes produced dense point clouds, which were then exported to the Polygon (ply) file format for further processing in CMP-MVS. Multi-view Stereo scene reconstruction in CMP-MVS then generated an aerial mosaic and a mesh. A summary of the SfM/MVS processing steps is shown in Figure 3.
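The image thinning step is simple to automate. The sketch below keeps every tenth frame, which converts the 0.5 s capture interval into the nominal 5 s spacing used here; the directory name is hypothetical, and blurred or off-AOI frames are assumed to have been removed already.

```python
from pathlib import Path

CAPTURE_INTERVAL_S = 0.5  # GoPro time-lapse setting used at Dryden
TARGET_INTERVAL_S = 5.0   # spacing that still preserves ~80% overlap

step = int(TARGET_INTERVAL_S / CAPTURE_INTERVAL_S)     # keep every 10th frame
images = sorted(Path("dryden_mission").glob("*.JPG"))  # hypothetical image folder
selected = images[::step]
print(f"kept {len(selected)} of {len(images)} images")
```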

2.5. Geo-Referencing

All products of the SfM and MVS processes were in arbitrary coordinate systems and had to be registered to the same coordinate system as the LiDAR data, i.e., the Ordnance Survey Great Britain (OSGB) projection. Of the 17 GCPs measured at Dryden, only 14 were clearly visible in the mesh and point cloud. Seven GCPs were used to geo-reference the models and the remaining seven were used for accuracy assessment (check points) [9]. For the Meshaw site, 5 GCPs were matched to corresponding reference points in the mesh and point cloud, while the remaining 6 were used as check points. Geo-referencing was done in the CloudCompare v2.8 open-source software (Daniel Girardeau-Montaut, Paris, France), which uses an automatic registration procedure based on the iterative closest point (ICP) algorithm [54]. Horizontal accuracy of the geo-referenced point clouds was 1.77 m for Dryden and 2.53 m for Meshaw, while vertical accuracy was 2.01 m for Dryden and 3.05 m for Meshaw (Table 2). The transformed points were then exported to the .laz file format for post-processing using LAStools [55].
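The accuracy figures in Table 2 follow from residuals at the independent check points. A minimal sketch of that calculation is given below; the coordinate values are placeholders, not the surveyed GCPs from this study.

```python
import numpy as np

def check_point_rmse(measured_xyz, model_xyz):
    """RMSE at check points, split into horizontal (XY) and vertical (Z) components."""
    d = np.asarray(model_xyz, float) - np.asarray(measured_xyz, float)
    rmse_h = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))
    rmse_v = np.sqrt(np.mean(d[:, 2] ** 2))
    return rmse_h, rmse_v

gcp = [[1000.0, 2000.0, 50.0], [1100.0, 2050.0, 52.0]]  # surveyed coordinates (placeholders)
sfm = [[1001.2, 1998.9, 51.5], [1098.8, 2051.6, 53.9]]  # same points read from the point cloud
print("horizontal RMSE = %.2f m, vertical RMSE = %.2f m" % check_point_rmse(gcp, sfm))
```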

2.6. Point Cloud Post-Processing

The geo-corrected point clouds were post-processed using different LAStools [55] algorithms in sequential steps to generate DSMs, DEMs and CHMs (Figure 4). Because SfM point clouds have a relatively high density compared to LiDAR data [14], processing a point cloud as a single file can be memory-demanding for LAStools algorithms [56]. It was therefore necessary to tile the SfM point clouds first to speed up the subsequent processing. After tiling, the next step was to identify ground points in the point cloud in order to generate DEMs. Next, the height of each point in the cloud was determined by first generating a triangulated irregular network (TIN) surface from the ground points and then calculating point heights from this surface. The remaining non-ground points were then classified as ‘vegetation’, ‘buildings’ or ‘noise’ based on their heights above ground, ruggedness (for vegetation) and planarity (for buildings). DEMs for both sites were generated from ‘bare-earth’ points by filtering out all non-ground points and ‘thinning’ the remaining ground points. This was done by retaining the lowest point in each N × N grid (where ‘N’ is half the size of the intended DEM resolution), ensuring that any non-ground points incorrectly identified as ground points by the lasground algorithm would not be used in the DEM [57]. The same technique was applied for the DSMs, this time retaining the highest point instead. For Meshaw, we used 1 m as the SfM DEM and DSM resolution in order to match the corresponding LiDAR DEM, while a 50 cm resolution was used for Dryden.
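A sketch of this chain as sequential LAStools command-line calls (mirroring the batch script summarised in Figure 4) is given below. The paths, tile size and grid steps are illustrative, the exact flags may differ between LAStools versions, and the tools are assumed to be on the system PATH.

```python
import subprocess

def run(cmd):
    """Echo and execute one LAStools command, stopping on failure."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Tile the cloud so each piece fits in memory (buffers avoid edge artefacts)
run(["lastile", "-i", "sfm_cloud.laz", "-tile_size", "250", "-buffer", "25",
     "-odir", "tiles", "-olaz"])
# Classify ground points, compute per-point height above the ground TIN,
# then classify the remaining points (vegetation / buildings / noise)
run(["lasground", "-i", "tiles/*.laz", "-odir", "ground", "-olaz"])
run(["lasheight", "-i", "ground/*.laz", "-odir", "height", "-olaz"])
run(["lasclassify", "-i", "height/*.laz", "-odir", "classified", "-olaz"])
# DEM: keep the lowest ground point (class 2) per cell, then rasterise
run(["lasthin", "-i", "classified/*.laz", "-keep_class", "2", "-step", "0.5",
     "-lowest", "-odir", "thin_ground", "-olaz"])
run(["las2dem", "-i", "thin_ground/*.laz", "-merged", "-step", "1", "-o", "dem.tif"])
# DSM: keep the highest point per cell instead
run(["lasthin", "-i", "classified/*.laz", "-step", "0.5", "-highest",
     "-odir", "thin_all", "-olaz"])
run(["las2dem", "-i", "thin_all/*.laz", "-merged", "-step", "1", "-o", "dsm.tif"])
```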

3. Results and Discussion

3.1. Point Clouds: Sparse Canopy SfM/LiDAR Comparison

Numerous gaps were observed in the SfM point clouds (Figure 5a) where the VisualSFM software could not match enough features between overlapping images (particularly on canopy-covered tree tops and in areas of occlusion and dark shadow) to reconstruct a complete scene [48]. The SfM dataset had a superior point density (3.32 hits/m2) compared to the LiDAR points (1 hit/m2) for the Meshaw site. For SfM this can be attributed to the PMVS2 algorithm, which densifies the initial sparse point cloud generated by the bundle adjustment procedure [53]. In canopy-covered areas, the Meshaw SfM model had at least 2 points/m2 because vegetated areas yield many feature matches between overlapping images, producing a denser point cloud [58]. Non-ground points were filtered out from both the LiDAR and SfM point clouds to allow a cloud-to-cloud (C2C) distance comparison of only the ground points [59]. The largest differences were observed around the site edges for both point clouds. This can be an effect of radial distortion in the camera lens, which is directly proportional to the distance from the image centre [45] (most notably in the GoPro images), of the lack of GCPs near the site edges (Figure 2b), or of fewer overlapping images in these areas. The C2C maximum absolute difference for the ground-classified points was 12.99 m, which is higher than that observed in the DEMs (10.8 m). A second C2C comparison was made for points hitting canopy-covered regions. To remove terrain effects, both SfM and LiDAR point clouds were first ‘normalised’ by assigning a z-coordinate of 0 m to all ground points, such that the z-coordinate of any non-ground point (i.e., canopy point) equals its height above the ground [60]. This C2C comparison of canopy points gave a maximum absolute difference of 10.77 m.
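The C2C comparison reduces to a nearest-neighbour search between two clouds. The sketch below, using SciPy's k-d tree on synthetic arrays that stand in for the real ground-classified clouds, shows the distance computation together with the height normalisation described above.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Stand-ins for the ground-classified clouds; 'normalised' means ground z = 0
lidar_ground = rng.uniform(0, 100, size=(1000, 3))
lidar_ground[:, 2] = 0.0
sfm_ground = rng.uniform(0, 100, size=(3000, 3))
sfm_ground[:, 2] = rng.normal(0, 0.3, size=3000)

# For every SfM point, distance to the nearest LiDAR point (the C2C measure)
dist, _ = cKDTree(lidar_ground).query(sfm_ground)
print(f"mean C2C = {dist.mean():.2f} m, max C2C = {dist.max():.2f} m")
```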
After delineating boundaries of the canopy-covered areas from the ortho-photo map, we extracted the ground points found below canopies for both the SfM and LiDAR point clouds. For Meshaw, the below-canopy point density dropped to 0.27 points/m2 for LiDAR and 1.56 points/m2 for SfM. However, a visual inspection (Figure 6) of the SfM ground points within the canopy boundaries shows that the SfM points, despite their superior average density, tended to be clustered and left many gaps. The LiDAR points, although sparser, had a better spatial distribution, which resulted in a better interpolation of the ground surface beneath the canopy. For Dryden, only a few ground points were generated below the canopy-covered area, and these were mainly located near the edges of the canopy boundary. The overall SfM point density for the Dryden site was 7.90 points/m2.

3.2. Digital Elevation Models

In the Meshaw DEMs (Figure 7), the low elevation belt stretching in the SW-NE direction (region A) appears significantly lower for LiDAR than for SfM. This may be due to better canopy penetration by LiDAR, resulting in more ground points in canopy-covered areas than with SfM, as observed in the raw point clouds [18]. As seen in Figure 6b, SfM point clouds, although denser, are not as spatially ‘complete’ as LiDAR point clouds; this is clearly evident from the triangulation artefacts that remain from TINing in the DEMs. The ground surface under the canopy in these gaps is actually interpolated from nearby ground points outside the canopy areas, making the surface significantly higher than it should be. The same can be seen in the Dryden DEM (Figure 8a), where the ground surface in the canopy areas is higher than it should be. The high elevation regions (e.g., box C) appear higher for LiDAR than for SfM; this can be explained by the fact that this region was ploughed after the LiDAR survey but just before the UAV survey. Statistics from a pixel-to-pixel comparison of the DEMs showed that the SfM DEM has a very strong correlation with the LiDAR DEM (R2 = 0.89), with an RMSE of 2.31 m, a higher value than that observed in the raw point clouds, possibly due to interpolation effects.
Quantitative analysis of the DEMs was performed by subtracting the SfM DEM from the LiDAR DEM to obtain a DEM Difference map (Figure 9a). Maximum absolute differences were mainly located near the DEM edges as in the raw point clouds, which can be explained by the fact that triangulation networks are incomplete near the raster edges, resulting in edge errors [57]. However, some of the large errors were also found in canopy areas. This is as expected since there is no actual form of canopy penetration with SfM as there is with LiDAR, and as shown in Figure 6b the SfM ground points have a very poor spatial distribution within canopies.
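The pixel-wise comparison behind Figure 9a is a straightforward raster operation. The sketch below computes a difference map, RMSE and the coefficient of determination on synthetic co-registered grids standing in for the 1 m LiDAR and SfM DEMs.

```python
import numpy as np

rng = np.random.default_rng(1)
lidar_dem = rng.uniform(80, 120, size=(500, 500))          # stand-in LiDAR DEM
sfm_dem = lidar_dem + rng.normal(0, 1.5, size=(500, 500))  # SfM DEM as a noisy copy

diff = lidar_dem - sfm_dem                                 # DEM difference map
valid = np.isfinite(diff)                                  # mask any NoData cells
r = np.corrcoef(lidar_dem[valid], sfm_dem[valid])[0, 1]
print(f"mean diff = {diff[valid].mean():.2f} m, "
      f"RMSE = {np.sqrt(np.mean(diff[valid] ** 2)):.2f} m, R2 = {r ** 2:.2f}")
```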

3.3. Canopy Height Models

At Meshaw, canopy height values ranged from −7.75 m to 23.11 m for SfM and from −0.36 m to 25.28 m for LiDAR; in the GIS analysis, however, only regions within the common 0 to 23 m range of both models were compared directly. For Dryden, the CHM range was −8.03 m to 32.67 m. Many of the TIN artefacts can still be seen in Meshaw’s SfM CHM, especially in the yellow boxes (Figure 10). This is again a result of the gaps in the SfM point cloud. Box D (Figure 10) also shows that SfM missed many small trees, a consequence of too few feature matches being identified by SfM where trees have smaller crowns [58]. Overall, most of the crowns appear wider for SfM than for LiDAR.
Statistical results from the pixel-wise comparison of the Meshaw CHMs reveal a strong correlation (R2 = 0.75) between the two models, implying that only 25% of the variation in the LiDAR CHM could not be explained by SfM. The CHM difference map (Figure 9b) shows that most of the canopy heights from the two models are very similar, with an average difference of −0.03 m and a standard deviation of 2.38 m. LiDAR canopies appear higher than SfM canopies mostly in the regions where SfM could not find enough feature matches to create points (i.e., the canopy gaps identified in Figure 6b). Another CHM (hereafter referred to as the ‘Hybrid CHM’) was also generated by subtracting the LiDAR DEM from the SfM DSM [18]. At the Dryden site, a weak positive correlation (R2 = 0.19) was observed between ground-measured heights and SfM heights, and more than 50% of the SfM heights were lower than the field-measured heights. This was a result of the lack of canopy penetration by SfM, which produced a DEM with an artificially high ground surface that, when subtracted from the DSM, yielded very low canopy height values (minimum −8.03 m) in the CHM.
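Both the standard and the Hybrid CHM reduce to a raster subtraction of co-registered terrain and surface models. The sketch below shows the Hybrid variant with rasterio; the file names are hypothetical, and both rasters are assumed to share the same grid, extent and CRS.

```python
import rasterio

# Hybrid CHM: SfM surface minus LiDAR terrain (file names are hypothetical)
with rasterio.open("sfm_dsm.tif") as dsm, rasterio.open("lidar_dem.tif") as dem:
    chm_hybrid = dsm.read(1) - dem.read(1)  # canopy height = surface - terrain
    profile = dsm.profile                   # reuse grid, CRS and data type

with rasterio.open("hybrid_chm.tif", "w", **profile) as out:
    out.write(chm_hybrid, 1)
```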
A more detailed examination of the CHMs was made by extracting canopy profiles along different transects (Figure 9b). For transect 2 (Figure 11a,b) and transect 4 (Figure 11c,d), both SfM and Hybrid heights showed a very strong positive correlation with LiDAR heights, although in both transects the Hybrid CHM consistently overestimated the LiDAR heights. For transect 5 (Figure 12a,b) and transect 7 (Figure 12c,d), however, both SfM and Hybrid heights correlated weakly with LiDAR heights, and the Hybrid CHM brought very little improvement: in transect 5, for example, the already weak negative correlation between SfM and LiDAR heights (r = −0.24) became even weaker for the Hybrid CHM (R2 = 0.06).
In all transects, it can be seen that in terms of positional accuracy, SfM performs just as well as LiDAR. The positions of local maxima and minima [61] appear to correspond in both models, at least for transects 2, 4 and 7. This suggests that segmenting individual trees can be done with SfM clouds in the same way as with LiDAR clouds. However, many discontinuities visible in the LiDAR profiles are clearly absent from the SfM profiles (e.g., Figure 12a). These ‘spikes’, which correspond to ‘pit’ cells in the LiDAR CHM, occur where no LiDAR returns were recorded, or may simply be noise; the SfM CHM thus appears smoother than the LiDAR one. The spikes also help to explain most of the outliers observed on the regression graphs, where SfM heights are significantly higher than LiDAR heights.
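As an illustration of the tree segmentation idea, the sketch below finds local maxima in a CHM raster as candidate tree tops; the moving-window size and the 2 m height floor are illustrative tuning choices, not parameters from this study.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def tree_tops(chm, window=5, min_height=2.0):
    """(row, col) indices of cells that are the local maximum within a window."""
    is_peak = maximum_filter(chm, size=window) == chm
    return np.argwhere(is_peak & (chm > min_height))

chm = np.random.default_rng(2).uniform(0, 20, size=(200, 200))  # stand-in CHM
print(f"{len(tree_tops(chm))} candidate tree tops found")
```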

4. Conclusions

4.1. Challenges with SfM

4.1.1. Accuracy

When compared to LiDAR, SfM performed well in some areas (Figure 11) but poorly in others (Figure 12). The areas of poor performance can be attributed to VisualSFM not detecting enough feature matches in the images due to poor image coverage; the effect of coverage was particularly apparent at the Dryden site, where an image overlap of 80% generated a more spatially complete point cloud. At Dryden, the poor height accuracy is attributed to the closed canopy, which prevented the generation of below-canopy ground points. Different methods of generating CHMs might also have improved the results: the authors of [60] present a method of generating CHMs from circular points with radius equal to the LiDAR beam size instead of using dimensionless points, and the same approach could be applied to SfM CHMs.

4.1.2. Canopy Penetration

Problems of canopy penetration in SfM point clouds observed in other studies [18,20] were also observed in this study (Figure 8a). LiDAR has the same issue, but in this study it performed better than SfM. This renders SfM practicable only in areas where the crown cover is not more than 50%. Considering that more than half of the tree-covered areas in the world have less than 50% canopy cover [6], there are many places where SfM can be used successfully. While its accuracy is low in areas with >50% canopy cover, acquiring images during ‘leaf-off’ seasons can improve the canopy penetration of SfM [62]. Even with closed canopies, it can still be useful for other REDD-related activities in the same areas, e.g., monitoring drivers of forest change or forest mapping [63].
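One simple way to check whether a site falls within the <50% cover regime is to compute the fraction of CHM cells above a height threshold. The sketch below does this on a synthetic raster; the 2 m threshold is a common convention for separating canopy from ground vegetation, used here purely illustratively.

```python
import numpy as np

def canopy_cover(chm, height_threshold=2.0):
    """Fraction of valid CHM cells taller than the canopy threshold."""
    valid = np.isfinite(chm)
    return float(np.mean(chm[valid] > height_threshold))

chm = np.random.default_rng(3).gamma(1.0, 2.0, size=(300, 300))  # stand-in CHM
cover = canopy_cover(chm)
print(f"canopy cover ~ {cover:.0%} -> SfM {'well suited' if cover < 0.5 else 'likely to struggle'}")
```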

4.1.3. Data-Richness of Point Clouds

Although SfM point clouds resemble LiDAR point clouds, a number of differences restrict the type of operations and analyses that can be performed on them. In particular, LiDAR records multiple returns within the same column of data, which allows further detailed analyses (e.g., creating DSMs from first returns only, or investigating the distribution of returns) that are not possible with SfM. Even with this data richness, however, the main biomass-relevant metric obtainable from LiDAR is tree height, which, as demonstrated in this study, is also obtainable with SfM. What SfM point clouds possess, and what is currently lacking in LiDAR (although this may ultimately be addressed through multispectral LiDAR), are RGB values that allow them to be rendered in full colour. This makes feature interpretation in SfM point clouds easier than with discrete return LiDAR.

4.2. Small UAVs for Forestry Applications

4.2.1. Ease of Use of UAVs: Missions Flying and Post-Processing

Operating a small UAV for forestry activities involves, at the very least, mission planning, component setup, flying and downloading data. These steps are quite straightforward and do not require much expertise. Modern UAVs come with mission planning software allowing fully autonomous missions, which keeps active user engagement to a minimum and thereby minimises human error; manual operation may only be necessary during landing or when avoiding unforeseen collisions [64]. SfM offers an alternative approach to geo-referencing point clouds using camera locations (obtained from the flight logs) [48] instead of field-measured GCPs. Although in this study this alternative approach produced poor results (RMSE > 10 m), it has clear potential for improvement in the near future. The fast learning curve associated with drone use also has direct implications for their suitability for forest monitoring. The commercial drone market has made enormous strides in bridging the gap between experienced drone users and novices by continuing to invest in fully autonomous drone operation. Training in equipment setup, mission planning and drone operation typically takes 1–5 days for trainees with previous computer experience and about 14 days for those without [63]. This implies that small UAVs could be adopted in, for example, Community Based Forest Monitoring programs where experienced users may not be available. The SfM/MVS workflow can be easily automated, as demonstrated in this study, using software that is open source or freely available. The open source software used in this study allows the user to pre-define calculation parameters in batch scripts, which can then run in sequence. The only step that requires user interaction is the geo-referencing of models, but as pointed out earlier, this is entirely optional and can be automated when using flight logs. This study also demonstrated that post-processing of SfM data can be accomplished using the same tools used for LiDAR data, in this case LAStools and CloudCompare, which can likewise be automated by defining processing parameters in batch scripts. These analysis tools do, however, require a certain level of expertise.

4.2.2. The Cost of Using Small-UAVs

Each remote sensing method requires different approaches to data acquisition and processing, as well as to training and capacity development [2]. Data acquisition costs depend on the project requirements, while training costs depend on factors such as the degree of automation that can be achieved in data acquisition and processing. Due to power and payload limitations, UAVs are a practical solution only for small areas, e.g., project level in the context of REDD [63]. Since the scale and scope of the monitoring directly influence the data-related costs incurred, at project level UAVs are more cost-effective than satellite or airborne EO methods [2]. In adopting a UAV approach, one must be mindful that the user oversees the entire image processing chain from collection to processing to product, which carries some additional costs over traditional remote sensing approaches. However, forest monitoring using the project approach greatly simplifies monitoring and evaluation of forest carbon stocks because project boundaries are clearly defined and stratification of the project area can easily be done at this scale [65]. Generally, the total cost of purchasing, operating and maintaining a UAV is lower than that of commissioning piloted aircraft missions or acquiring high resolution satellite imagery on a regular basis. Many off-the-shelf UAVs suitable for forest monitoring already exist, with prices starting around US$4000 for professional drones and from as little as US$600 for hobbyist drones that can be readily reconfigured for aerial surveying, as has been achieved by the ‘conservation drone’ network [23]. This study also demonstrated how free open source SfM/MVS software can be used seamlessly in a workflow to generate point clouds. With increasing technical capabilities, the demand for UAVs is expected to grow while their price diminishes, and open source SfM software can be expected to improve in efficiency over the next few years.
This study has demonstrated the utility of SfM from UAVs in generating high density point clouds and its potential to be a low cost remote sensing method for REDD monitoring in developing countries. The key advantage of this approach is that data collection can be in the hands of local stakeholders. With low-cost hardware, open-source software and very low barriers to operation, this can become an operational tool for local agencies and organisations, placing the power of the data in local hands rather than with external airborne operators or space agencies.
Although there are still a number of challenges with this solution, there are also strengths which can be useful for developing nations (Table 3). With continued improvements in the software and sensors, SfM from UAVs can become a real contender to airborne LiDAR for forestry applications in the near future.

Acknowledgments

Leon DeBell flew the QPOD Quest UAV at Meshaw as part of flight testing for the QuestEarthWater project, which was funded by the UK Technology Strategy Board and NERC, and we are also grateful for the field assistance of Naomi Gatis and David Luscombe at this site. The NERC Tellus SouthWest project is acknowledged for providing the LiDAR data used at Meshaw. The authors would also like to thank Mark Buie, Bruce Gittings and Alasdair Mac Arthur for helping out with the UAV fieldwork at Dryden Farm.

Author Contributions

France Gerard provided LiDAR data, Karen Anderson provided UAV imagery for the Meshaw site, Reason Mlambo performed the analysis and Iain H Woodhouse supervised the analysis. All authors contributed to the write-up of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brack, D.; Bailey, R. Ending Global Deforestation: Policy Options for Consumer Countries; London Chatham House: London, UK, 2013. [Google Scholar]
  2. Böttcher, H.; Eisbrenner, K.; Fritz, S.; Kindermann, G.; Kraxner, F.; McCallum, I.; Obersteiner, M. An Assessment of Monitoring Requirements and Costs of “Reduced Emissions from Deforestation and Degradation”. Carbon Balance Manag. 2009, 4, 7. [Google Scholar] [CrossRef] [PubMed]
  3. Gibbs, H.K.; Brown, S.; Niles, J.O.; Foley, J.A. Monitoring and Estimating Tropical Forest Carbon Stocks: Making REDD a Reality. Environ. Res. Lett. 2007, 2, 45023. [Google Scholar] [CrossRef]
  4. Herold, M.; Johns, T. Linking Requirements with Capabilities for Deforestation Monitoring in the Context of the UNFCCC-REDD Process. Environ. Res. Lett. 2007, 2, 45025. [Google Scholar] [CrossRef]
  5. United Nations. UN-REDD Programme Strategy 2011–2015; United Nations: New York, NY, USA, 2011. [Google Scholar]
  6. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R.; et al. High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 2013, 342, 850–853. [Google Scholar] [CrossRef] [PubMed]
  7. Mitchard, E.T.A.; Saatchi, S.S.; Lewis, S.L.; Feldpausch, T.R.; Gerard, F.F.; Woodhouse, I.H.; Meir, P.; Woodley, E. Comment on ‘A First Map of Tropical Africa’s above-Ground Biomass Derived from Satellite Imagery’. Environ. Res. Lett. 2011, 6, 49001. [Google Scholar] [CrossRef]
  8. Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the World from Internet Photo Collections. Int. J. Comput. Vis. 2008, 80, 189–210. [Google Scholar] [CrossRef]
  9. Fonstad, M.A.; Dietrich, J.T.; Courville, B.C.; Jensen, J.L.; Carbonneau, P.E. Topographic Structure from Motion: A New Development in Photogrammetric Measurement. Earth Surf. Process. Landf. 2013, 38, 421–430. [Google Scholar] [CrossRef]
  10. Javernick, L.; Brasington, J.; Caruso, B. Modeling the Topography of Shallow Braided Rivers Using Structure-from-Motion Photogrammetry. Geomorphology 2014, 213, 166–182. [Google Scholar] [CrossRef]
  11. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using Unmanned Aerial Vehicles (UAV) for High-Resolution Reconstruction of Topography: The Structure from Motion Approach on Coastal Environments. Remote Sens. 2013, 5, 6880–6898. [Google Scholar] [CrossRef] [Green Version]
  12. Baltsavias, E.; Gruen, A.; Eisenbeiss, H.; Zhang, L.; Waser, L.T. High Quality Image Matching and Automated Generation of 3D Tree Models. Int. J. Remote Sens. 2008, 29, 1243–1259. [Google Scholar] [CrossRef]
  13. White, J.C.; Wulder, M.A.; Vastaranta, M.; Coops, N.C.; Pitt, D.; Woods, M. The Utility of Image-Based Point Clouds for Forest Inventory: A Comparison with Airborne Laser Scanning. Forests 2013, 4, 518–536. [Google Scholar] [CrossRef]
  14. Leberl, F.; Irschara, A.; Pock, T.; Meixner, P.; Gruber, M.; Scholz, S.; Wiechert, A. Point Clouds: Lidar versus 3D Vision. Photogramm. Eng. Remote Sens. 2010, 76, 1123–1134. [Google Scholar] [CrossRef]
  15. St-Onge, B.; Jumelet, J.; Cobello, M.; Véga, C. Measuring Individual Tree Height Using a Combination of Stereophotogrammetry and Lidar. Can. J. For. Res. 2004, 34, 2122–2130. [Google Scholar] [CrossRef]
  16. Cunliffe, A.M.; Brazier, R.E.; Anderson, K. Ultra-Fine Grain Landscape-Scale Quantification of Dryland Vegetation Structure with Drone-Acquired Structure-from-Motion Photogrammetry. Remote Sens. Environ. 2016, 183, 129–143. [Google Scholar] [CrossRef] [Green Version]
  17. Messinger, M.; Asner, G.; Silman, M. Rapid Assessments of Amazon Forest Structure and Biomass Using Small Unmanned Aerial Systems. Remote Sens. 2016, 8, 615. [Google Scholar] [CrossRef]
  18. Dandois, J.P.; Ellis, E.C. Remote Sensing of Vegetation Structure Using Computer Vision. Remote Sens. 2010, 2, 1157–1176. [Google Scholar] [CrossRef]
  19. Tao, W.; Lei, Y.; Mooney, P. Dense Point Cloud Extraction from UAV Captured Images in Forest Area. In Proceedings of the 2011 IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services (ICSDM 2011), Fuzhou, China, 29 June–1 July 2011; pp. 389–392.
  20. Lisein, J.; Pierrot-Deseilligny, M.; Bonnet, S.; Lejeune, P. A Photogrammetric Workflow for the Creation of a Forest Canopy Height Model from Small Unmanned Aerial System Imagery. Forests 2013, 4, 922–944. [Google Scholar] [CrossRef]
  21. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” Photogrammetry: A Low-Cost, Effective Tool for Geoscience Applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  22. Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned Aircraft Systems in Remote Sensing and Scientific Research: Classification and Considerations of Use. Remote Sens. 2012, 4, 1671–1692. [Google Scholar] [CrossRef]
  23. Koh, L.P.; Wich, S.A. Dawn of Drone Ecology: Low-Cost Autonomous Aerial Vehicles for Conservation. Trop. Conserv. Sci. 2012, 5, 121–132. [Google Scholar] [CrossRef] [Green Version]
  24. Civil Aviation Authority. CAP 772: Unmanned Aircraft System Operations in UK Airspace—Guidance; Civil Aviation Authority: Norwich, UK, 2012. [Google Scholar]
  25. DJI Phantom 2 Vision. Available online: http://quadcopterdump.com/wp-content/uploads/2015/06/phantom-2-vision.jpg (accessed on 21 December 2016).
  26. Helimetrex and QuestUAV Team up. Available online: https://www.suasnews.com/wp-content/uploads/2014/04/qpod-1024x682.jpg (accessed on 21 December 2016).
  27. Goetz, S.J.; Dubayah, R. Advances in Remote Sensing Technology and Implications for Measuring and Monitoring Forest Carbon Stocks and Change. Carbon Manag. 2011, 2, 231–244. [Google Scholar] [CrossRef]
  28. Lloyd, C.R.; Rebelo, L.-M.; Max Finlayson, C. Providing Low-Budget Estimations of Carbon Sequestration and Greenhouse Gas Emissions in Agricultural Wetlands. Environ. Res. Lett. 2013, 8, 15010–15013. [Google Scholar] [CrossRef]
  29. Cho, M.A.; Skidmore, A.; Corsi, F.; van Wieren, S.E.; Sobhan, I. Estimation of Green Grass/herb Biomass from Airborne Hyperspectral Imagery Using Spectral Indices and Partial Least Squares Regression. Int. J. Appl. Earth Obs. Geoinf. 2007, 9, 414–424. [Google Scholar] [CrossRef]
  30. Thenkabail, P.S.; Enclona, E.A.; Ashton, M.S.; Legg, C.; de Dieu, M.J. Hyperion, IKONOS, ALI, and ETM+ Sensors in the Study of African Rainforests. Remote Sens. Environ. 2004, 90, 23–43. [Google Scholar] [CrossRef]
  31. Koch, B. Status and Future of Laser Scanning, Synthetic Aperture Radar and Hyperspectral Remote Sensing Data for Forest Biomass Assessment. ISPRS J. Photogramm. Remote Sens. 2010, 65, 581–590. [Google Scholar] [CrossRef]
  32. Gerstl, S.A.W. Physics Concepts of Optical and Radar Reflectance Signatures A Summary Review. Int. J. Remote Sens. 1990, 11, 1109–1117. [Google Scholar] [CrossRef]
  33. Harrell, P.A.; Bourgeau-Chavez, L.L.; Kasischke, E.S.; French, N.H.F.; Christensen, N.L. Sensitivity of ERS-1 and JERS-1 Radar Data to Biomass and Stand Structure in Alaskan Boreal Forest. Remote Sens. Environ. 1995, 54, 247–260. [Google Scholar] [CrossRef]
  34. Craig Dobson, M.; Ulaby, F.T.; Pierce, L.E. Land-Cover Classification and Estimation of Terrain Attributes Using Synthetic Aperture Radar. Remote Sens. Environ. 1995, 51, 199–214. [Google Scholar] [CrossRef]
  35. Freeman, A.; Durden, S.L. A Three-Component Scattering Model for Polarimetric SAR Data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. [Google Scholar] [CrossRef]
  36. Lu, D. The Potential and Challenge of Remote Sensing-based Biomass Estimation. Int. J. Remote Sens. 2006, 27, 1297–1328. [Google Scholar] [CrossRef]
  37. Li, W.; Guo, Q.; Jakubowski, M.K.; Kelly, M. A New Method for Segmenting Individual Trees from the Lidar Point Cloud. Photogramm. Eng. Remote Sens. 2012, 78, 75–84. [Google Scholar] [CrossRef]
  38. Zhao, K.; Popescu, S.; Nelson, R. Lidar Remote Sensing of Forest Biomass: A Scale-Invariant Estimation Approach Using Airborne Lasers. Remote Sens. Environ. 2009, 113, 182–196. [Google Scholar] [CrossRef]
  39. Lefsky, M.A.; Cohen, W.B.; Parker, G.G.; Harding, D.J. Lidar Remote Sensing for Ecosystem Studies. Bioscience 2002, 52, 19–30. [Google Scholar] [CrossRef]
  40. Hummel, S.; Hudak, A.T.; Uebler, E.H.; Falkowski, M.J.; Megown, K.A. A Comparison of Accuracy and Cost of LiDAR versus Stand Exam Data for Landscape Management on the Malheur National Forest. J. For. 2011, 109, 267–273. [Google Scholar]
  41. Takasu, T. RTKLIB: An Open Source Program Package for GNSS Positioning. Available online: http://www.rtklib.com/ (accessed on 15 July 2015).
  42. The Tellus South West Project. Available online: http://www.tellusgb.ac.uk/ (accessed on 10 June 2015).
  43. Ferraccioli, F.; Gerard, F.; Robinson, C.; Jordan, T.; Biszczuk, M.; Ireland, L.; Beasley, M.; Vidamour, A.; Barker, A.; Arnold, R.; et al. LiDAR Based Digital Terrain Model (DTM) Data for South West England. Available online: https://doi.org/10.5285/e2a742df-3772-481a-97d6-0de5133f4812 (accessed on 10 June 2015).
  44. Ferraccioli, F.; Gerard, F.; Robinson, C.; Jordan, T.; Biszczuk, M.; Ireland, L.; Beasley, M.; Vidamour, A.; Barker, A.; Arnold, R.; et al. LiDAR based Digital Surface Model (DSM) data for South West England. Available online: https://doi.org/10.5285/b81071f2-85b3-4e31-8506-cabe899f989a (accessed on 10 June 2015).
  45. James, M.R.; Robson, S. Mitigating Systematic Error in Topographic Models Derived from UAV and Ground-Based Image Networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420. [Google Scholar] [CrossRef]
  46. Paull, L.; Thibault, C.; Nagaty, A.; Seto, M.; Li, H. Sensor-Driven Area Coverage for an Autonomous Fixed-Wing Unmanned Aerial Vehicle. IEEE Trans. Cybern. 2014, 44, 1605–1618. [Google Scholar] [CrossRef] [PubMed]
  47. DJI. Phantom 2 User Manual v1.2. Available online: http://download.dji-innovations.com/downloads/phantom_2/en/PHANTOM2_User_Manual_v1.2_en.pdf (accessed on 1 July 2015).
  48. Wu, C. Towards Linear-Time Incremental Structure from Motion. In Proceedings of the 2013 International Conference on 3D Vision, 3DV 2013, Seattle, WA, USA, 29 June–1 July 2013; pp. 127–134.
  49. Jancosek, M.; Pajdla, T. Multi-View Reconstruction Preserving Weakly-Supported Surfaces. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 3121–3128.
  50. Wu, C. VisualSFM: A Visual Structure from Motion System. Available online: http://ccwu.me/vsfm/ (accessed on 15 July 2015).
  51. Lowe, D.G. Object Recognition from Local Scale-Invariant Features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
  52. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  53. Furukawa, Y.; Ponce, J. Accurate, Dense, and Robust Multiview Stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376. [Google Scholar] [CrossRef] [PubMed]
  54. Besl, P.J.; McKay, N.D. Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  55. Rapidlasso. LAStools: Converting, Filtering, Viewing, Processing and Compressing LiDAR Data in LAS Format. Available online: http://www.cs.unc.edu/~isenburg/lastools/ (accessed on 4 August 2015).
  56. Isenburg, M.; Liu, Y.; Shewchuk, J.; Snoeyink, J. Streaming Computation of Delaunay Triangulations. ACM Siggraph 2006, 25, 1049–1056. [Google Scholar] [CrossRef]
  57. Isenburg, M.; Liu, Y.; Shewchuk, J.; Snoeyink, J.; Thirion, T. Generating Raster DEM from Mass Points via TIN Streaming. Geogr. Inf. Sci. 2006, 4197, 186–198. [Google Scholar]
  58. Lingua, A.; Marenchino, D.; Nex, F. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications. Sensors 2009, 9, 3745–3766. [Google Scholar] [CrossRef] [PubMed]
  59. Girardeau-Montaut, D.; Roux, M.; Marc, R.; Thibault, G. Change Detection on Points Cloud Data Acquired with a Ground Laser Scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, W19. [Google Scholar]
  60. Khosravipour, A.; Skidmore, A.K.; Isenburg, M.; Wang, T.; Hussin, Y.A. Generating Pit-Free Canopy Height Models from Airborne Lidar. Photogramm. Eng. Remote Sens. 2014, 80, 863–872. [Google Scholar] [CrossRef]
  61. Jakubowski, M.K.; Guo, Q.; Kelly, M. Tradeoffs between Lidar Pulse Density and Forest Measurement Accuracy. Remote Sens. Environ. 2013, 130, 245–253. [Google Scholar] [CrossRef]
  62. Puttock, A.K.; Cunliffe, A.M.; Anderson, K.; Brazier, R.E. Aerial Photography Collected with a Multirotor Drone Reveals Impact of Eurasian Beaver Reintroduction on Ecosystem Structure 1. J. Unmanned Veh. Syst. 2015, 3, 123–130. [Google Scholar] [CrossRef]
  63. Paneque-Gálvez, J.; McCall, M.K.; Napoletano, B.M.; Wich, S.A.; Koh, L.P. Small Drones for Community-Based Forest Monitoring: An Assessment of Their Feasibility and Potential in Tropical Areas. Forests 2014, 5, 1481–1507. [Google Scholar] [CrossRef]
  64. Eisenbeiß, H. UAV Photogrammetry; ETH Zurich: Zurich, Switzerland, 2009. [Google Scholar]
  65. Estrada Porrúa, M.; Corbera, E.; Brown, K. Reducing Greenhouse Gas Emissions from Deforestation in Developing Countries: Revisiting the Assumptions. Clim. Chang. 2007, 100, 355–388. [Google Scholar]
Figure 1. DJI Phantom 2 quadcopter UAV (left) and a Quest Q-Pod fixed-wing UAV (right). UAVs of this size with <20 kg mass and <4 m wingspan are classified as ‘small’ UAVs [24]. (Photographs adapted from [25,26]).
Figure 2. Test sites (a) Dryden and (b) Meshaw in the United Kingdom (left). For Meshaw calibration GCPs were obtained from a LiDAR DEM and at Dryden ground-based GPS was used.
Figure 3. SfM and MVS workflow in VisualSFM, CMP-MVS and CloudCompare. Four different algorithms are executed in VisualSFM to generate a dense point cloud.
Figure 4. Summary of point cloud post-processing using LAStools. The commands were executed in a single batch script.
Figure 5. Showing (a) Classified SfM point cloud and (b) Ortho-photograph at the Meshaw site. Ground points are shown in brown while canopy points are shown in green.
Figure 6. Showing (a) LiDAR ground points and (b) SfM ground points at the Meshaw site. Point density in canopy areas was 0.27 points/m2 for LiDAR and 1.56 points/m2 for SfM.
Figure 7. Showing (a) LiDAR DEM and (b) SfM DEM generated from the respective ground classified points at the Meshaw site. Boxes A and B show areas where the LiDAR had lower elevation than SfM, while in box C, LiDAR had higher elevation than SfM.
Figure 8. Showing (a) DEM and (b) CHM for the Dryden site generated by SfM. In (a) the ground elevation in canopy regions is significantly higher than that in bare regions due to lack of canopy penetration by SfM.
Figure 9. Showing for the Meshaw site; (a) DEM difference map and (b) CHM difference map generated by subtracting the SfM DEM from the LiDAR DEM and the SfM CHM from the LiDAR CHM, respectively. Canopy height profiles were extracted from the 7 transects shown in red.
Figure 10. Showing (a) LiDAR CHM and (b) SfM CHM for the Meshaw site. The red box D shows how SfM has missed a number of small trees which are present in the area, while the yellow boxes E, F and G show the presence of TINing artefacts.
Figure 11. Showing for the Meshaw site; CHM profiles for (a) and (b) transect 2 and (c) and (d) transect 4, and corresponding regression plots (e) to (h). Both SfM and Hybrid CHMs showed strong positive correlation to LiDAR for both transects. However, the Hybrid heights constantly over-estimated the tree heights in both cases.
Figure 12. Showing for the Meshaw site; CHM transect 5 profiles for (a) SfM vs. LiDAR (b) Hybrid vs. LiDAR, and transect 7 profiles for (c) SfM vs. LiDAR (d) Hybrid vs. LiDAR, and corresponding regression plots (e) to (h). Correlation between the LiDAR CHMs and both the SfM and Hybrid CHMs was very weak. The Hybrid CHMs showed slightly improved correlation with LiDAR.
Table 1. Summary of UAV missions at the 2 sites. Both missions were fully autonomous missions where the UAV followed a way-pointed flight route over the field site.
Site   | UAV         | Camera        | Flying Height (m) | Flying Speed (m/s) | Ground Sampling Distance (cm) | No. of Photos
Meshaw | Quest QPod  | Sony NEX-7    | 100               | 14 (average)       | 1.66                          | 111
Dryden | DJI Phantom | GoPro Hero 3+ | 100 and 50        | 5                  | 3.57                          | 999
Table 2. Positional errors (RMSE) in SfM point clouds. GCPs for Dryden were measured using GNSS while those for Meshaw were extracted from the LiDAR DSM.
Site   | Validation GCPs | Check Points | Horizontal RMSE (m) | Vertical RMSE (m)
Meshaw | 5               | 6            | 2.53                | 3.05
Dryden | 7               | 7            | 1.77                | 2.01
Table 3. A summary of strengths and weaknesses of SfM from UAV against defined criteria.
Criterion                  | Strength | Weakness
Accuracy                   | Performs well over bare ground. | Performs poorly with poor image coverage.
Cost                       | Cost-effective for small areas. Cheap hobbyist UAVs available (e.g., the one used in [16]). Open source SfM/MVS software available. | Open source might not be as accurate as commercial software. Cheap camera models (e.g., GoPro) introduce large distortions in SfM models.
Ease of use/Learning curve | Fully autonomous missions. Automated data processing. | Post-processing still requires experienced users.
Amount of data             | High density point clouds. Easy interpretation of point cloud because of true colour rendering. | Classification of points based only on point height (no return number).
