Article

DEM Generation from Fixed-Wing UAV Imaging and LiDAR-Derived Ground Control Points for Flood Estimations

by Jairo R. Escobar Villanueva 1,2,*, Luis Iglesias Martínez 2 and Jhonny I. Pérez Montiel 1

1 Grupo de Investigación GISA, Facultad de Ingeniería, Universidad de La Guajira, Km 5 Vía a Maicao, Riohacha 440007, Colombia
2 Departamento de Explotación de Recursos Minerales y Obras Subterráneas, Escuela Técnica Superior de Ingenieros de Minas y Energía, Universidad Politécnica de Madrid, C/ Ríos Rosas 21, 28003 Madrid, Spain
* Author to whom correspondence should be addressed.
Sensors 2019, 19(14), 3205; https://doi.org/10.3390/s19143205
Submission received: 15 June 2019 / Revised: 16 July 2019 / Accepted: 17 July 2019 / Published: 20 July 2019
(This article belongs to the Special Issue Urban Remote Sensing and Sustainable Development)

Abstract: Geospatial products, such as digital elevation models (DEMs), are important topographic tools for tackling local flood studies. This study investigates the contribution of LiDAR elevation data to DEM generation based on fixed-wing unmanned aerial vehicle (UAV) imaging for flood applications. More specifically, it assesses the accuracy of UAV-derived DEMs georeferenced with the proposed LiDAR-derived control point (LCP) method in a Structure-from-Motion photogrammetry workflow. The flood estimates (volume and area) of the UAV terrain products are also compared with a LiDAR-based reference. The applied LCP georeferencing method achieves an accuracy comparable with other studies, has the advantage of using semi-automatic terrain data classification, and is readily applicable in flood studies. Lastly, it demonstrates the complementarity between LiDAR and UAV photogrammetry at the local level.

1. Introduction

Projected increases in heavy rainfall, based on climate models, are expected to aggravate local floods [1]. Governments and societies therefore require effective spatial tools to act against increasing exposure to natural hazards [2]. Geospatial products such as digital elevation models (DEMs) are useful topographic representations of space with specific requirements for flood studies [3,4]. Here, the DEM concept follows Polat et al. [5], who refer to the DEM as a digital representation of the Z-dimension of the bare terrain; the digital surface model (DSM), in contrast, also includes natural and man-made objects. Highly detailed terrain models are usually produced from data acquired by active sensors such as airborne light detection and ranging (LiDAR) [6,7]. The bare-ground representation in the form of a DEM from these sources is the basis of urban [8,9,10] and peri-urban local flood studies [11]. The main advantage of LiDAR technology is the penetration of its laser energy to the ground, for instance through canopies [7]; however, the cost and complexity of data acquisition mean that such airborne data are not always easy to update, or are sometimes only partially available [5].
Photogrammetry based on Structure from Motion with Multi-View Stereo (SfM-MVS), applied to images acquired by low-cost cameras on unmanned aerial vehicle systems (micro UAV, ≤2 kg), has developed strongly over the last decade [12]. SfM-MVS processing in a single workflow allows the generation of both DSMs and DEMs [13,14], although one of its main technical drawbacks is the time required for image processing [15]. The (relative) flexibility of image acquisition and the growing offer of robust SfM-MVS processing software have made UAV photogrammetry a valid low-cost alternative to piloted airborne LiDAR technology [5,14]. Studies show that image-based UAV-derived DEMs are comparable to LiDAR for fluvial flood assessment applications [16,17,18,19], such as flood extent and volume estimation. Leitão et al. [20] showed that it is possible to obtain detailed DEMs in urban environments from image-based UAV platforms with quality comparable to LiDAR data (in terms of the difference between DEMs), found that a realistic representation (resolution < 1 m) plays a fundamental role in surface flow modelling, and concluded that micro UAVs are a useful solution for describing urban landscapes. The literature offers further examples of the use of UAV imagery in 2D urban hydrodynamic modelling [21,22], flood risk management and emergency response [23,24], and mapping of difficult-to-access areas [25]. It is widely accepted that the accuracy of UAV-derived DEMs from SfM-MVS (i.e., aerial or terrestrial photogrammetric processing) is influenced by flight design and planning factors such as ground sample distance (GSD), the inclusion (or not) of oblique images, sensor and camera lens, flight pattern, and georeferencing method [26]. As a rule of thumb in UAV photogrammetry, the vertical accuracy of a DEM should be between one and three times the GSD of the input imagery [27,28,29]. The impact of the georeferencing method on the accuracy of SfM-MVS products is critical and well established in the literature. Georeferencing is usually classified as (i) direct, by means of the UAV's navigation (GPS/IMU) instruments, sometimes corrected in real time by GPS-RTK [30,31]; or (ii) indirect, through established ground control points [32]. Classical indirect georeferencing is usually considered the most accurate method [33]. Depending on the size of the UAV SfM-MVS project, ground control point determination can become a challenging task owing to its time-intensive nature [34] and to constraints found on the terrain [14].
Using existing elevation data, for example from airborne LiDAR, can be an alternative for georeferencing a UAV photogrammetric project. The literature shows the complementarity between LiDAR, as an alternative source of ground control points, and photogrammetry of airborne imagery [35,36,37,38]. Liu et al. [35] and James et al. [37] suggested the use of non-physical, virtual control points called "LiDAR-derived control points". This complementarity has recently been exploited with high-resolution imagery from multi-rotor UAV platforms and terrestrial LiDAR data for 3D city modelling [39]. Persad et al. proposed the combined use of LiDAR data and SfM-MVS image processing for modelling applications and DEM generation in a deltaic area [40,41]. However, no reference shows the contribution of LiDAR data to DEM generation from fixed-wing UAV imagery, especially in flood assessment applications for the estimation of areas and volumes. To validate the present contribution, UAV photogrammetric products must be compared with independent external elevation data [42,43] and with standard reference surfaces (e.g., LiDAR) for flood analysis [16,20]. If this complementarity is confirmed, the use of existing airborne remote sensing data (e.g., LiDAR databases) will prove to be an alternative georeferencing method for UAV researchers and flood specialists seeking useful DEMs for local-level studies.
This work investigates whether LiDAR elevation data can be used in DEM generation from fixed-wing UAV imagery for flood applications. More specifically, it aims to (i) assess the accuracy of DEMs obtained from the SfM-MVS processing chain using LiDAR-derived control points (LCPs); (ii) test the performance of two software applications used for DEM processing; and (iii) compare flood estimations of volume and area between DEMs based on UAV data and on LiDAR data (the reference).
This paper is organized as follows: Section 2 describes the equipment employed and the methods followed: UAV surveys, LCP collection, image processing for DSM and DEM generation (SfM software comparison), flood applications, and assessment methods. Section 3 presents the accuracy of the method and then, once its validity is confirmed, the flood results. Section 4 is devoted to the discussion, and Section 5 presents the conclusions.

2. Equipment and Methods

2.1. Case Study Description

The study area lies in the northwest part of the coastal city of Riohacha (Colombian Caribbean), shown in Figure 1. This zone is bounded to the north by the Caribbean Sea, to the east by the Ranchería river delta, and to the south and west by the inner-city districts. The area under study is a peri-urban zone with the lowest elevations (in m above sea level, a.s.l.) within the city bounds, highly exposed to fluvial and pluvial urban floods. The terrain is characterized by a flat relief with occasional undulations; elevations range from 8 m in the south down to values near 0 m a.s.l. in the north (the coastline). The choice of the study area is based on two criteria: first, local interest due to its high exposure to severe storm flooding, as occurred during El Niño-La Niña (2010/2011) [44,45]; second, the availability of previous hydrological and LiDAR elevation data, outlined by the red and blue lines in Figure 1. Additionally, Riohacha was selected as an emblematic case of emerging cities with high population growth rates and vulnerability to natural hazards, thus raising scientific interest, as shown in Nardini and Miguez [44].

2.2. Methodology

The methodology consists of four stages (Figure 2): Stage 1 covers survey planning and LCP determination; Stage 2 covers image processing. Stages 3 and 4 are the main focus of this study.

2.2.1. Stage 1. Flight Planning, UAV-Based Surveys and LCPs (Proposed Method)

UAV flights were carried out in February 2016 within the framework of a collaborative humanitarian mapping project, “Mapatón por La Guajira” [46]. The main UAV flight objectives were: (1) to generate an orthophoto for object digitization under the OpenStreetMap standard; and (2) to survey an area larger than that previously covered by the available elevation and hydrological datasets for UAV flood research (Figure 1). The eBee™ SenseFly UAV platform was used in this study (Appendix A, Table A1). The system is equipped with consumer-grade on-board Global Positioning System (GPS) and inertial measurement unit (IMU) modules for autonomous navigation and subsequent image geotagging. The payload was a non-metric RGB camera with CMOS active-pixel sensor technology.
To balance the flight objectives against the available funding, three flights were agreed upon with the UAV pilot. The resulting GSD for each flight was ~10.3 cm/pixel (~325 m AGL). A careful local risk analysis was carried out based on the Colombian UAV regulation ([47], September 2015) to assure that the flight design was safe. The assessment determined that the risk was minimal if flights were executed in the early hours of the day, when there are no piloted operations and wind velocities are low. After inspection of the flight area and selection of a safe take-off and landing site, a platform inspection was carried out to rule out in-flight loss of control. The programmed flights were then executed by the pilot in accordance with the local UAV regulations. Flight plan execution, in-flight monitoring and camera triggering were managed by the eMotion flight mission software (Lausanne, Switzerland), which can handle multiple flights and geotag images during post-flight processing [48]. Images and flight data were stored for processing in Stage 2. Technical parameters of the UAV-based surveys are given in Table 1.
Figure 3 shows the general methodology of the proposed georeferencing procedure. Control point determination in Stage 1 was divided into two steps.
First, five ground control points (GCPs; Figure 1, green triangles) were determined with a millimetric-precision total station (Figure 3a) and used exclusively for the geometric correction of the orthomosaic in the SfM processing chain (Figure 2). The orthomosaic resolution was set to 1 × GSD (10.3 cm/pixel); a maximum horizontal error of 20.6 cm is assumed (2 × GSD) [26]. The georeferenced orthomosaic was used as input to extract the planimetric X,Y control point coordinates.
Second, to densify the set of well-defined control points, surface features were manually identified in the generated orthomosaic with the support of a shaded relief map [37] (Figure 3b). The X,Y coordinates of these surface features were determined in ArcMap (ArcGIS® software, Redlands, CA, USA). All of these control points were located on flat terrain, were easily recognizable, and were taken as stable over the period between the capture of the LiDAR (2008) and UAV (2016) datasets. The altimetric Z value of each final LCP was then extracted from the reference LiDAR DEM (2008) at the labeled control point (X,Y) coordinate, using the ArcGIS® “Extract values to points” tool with the “Interpolate values at the point locations” option selected. The total number of LCPs with known X,Y,Z coordinates distributed in the study area was 13, close to the range recommended in the literature [16,49]. Lastly, these 13 LCPs were used in Stage 2 to generate the final DEMs (and DSMs) from SfM-MVS UAV-based image processing.
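Outside ArcGIS, the same Z-extraction step can be reproduced with open-source tools. The sketch below is an illustrative analogue (not the authors' script): it samples a LiDAR DEM raster at control point X,Y locations with bilinear interpolation, mirroring the “Interpolate values at the point locations” behaviour; the file name and coordinates are placeholders.

```python
# Illustrative analogue of the ArcGIS "Extract values to points" step:
# sample a LiDAR DEM at control point X,Y locations with bilinear
# interpolation. File name and coordinates are placeholders.
import numpy as np
import rasterio
from scipy.ndimage import map_coordinates

def sample_dem(dem_path, xy):
    """Return bilinearly interpolated DEM values at (x, y) coordinates."""
    with rasterio.open(dem_path) as src:
        band = src.read(1).astype(float)
        if src.nodata is not None:
            band[band == src.nodata] = np.nan
        # Invert the affine transform: world (x, y) -> fractional (col, row)
        cols, rows = zip(*[~src.transform * (x, y) for x, y in xy])
    # order=1 interpolates linearly between the four neighbouring cells
    return map_coordinates(band, [np.array(rows), np.array(cols)], order=1)

# X,Y digitized from the orthomosaic; Z sampled from the LiDAR reference DEM
lcp_xy = [(1171234.5, 1768321.0), (1171502.3, 1768410.8)]  # placeholder coords
lcp_z = sample_dem("lidar_dem_2008.tif", lcp_xy)
```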
The LiDAR dataset was acquired in 2008 by DIMAR, the Colombian national maritime authority, for a coastline survey. The sensor was a Leica ALS540 mounted on a piloted Cessna 402B platform flying at approximately 900 m AGL [50]. The reported horizontal LiDAR nominal point spacing was between 1 and 1.3 m, with a final density of approximately 0.7 points/m². A “model key point” ground classification was performed with the LiDAR contractor’s proprietary software MARS® [51], yielding a density of around 0.25 points/m². Based on these metadata, we assume that the LiDAR dataset falls within the ASPRS 20 cm vertical accuracy class [42] and quality level QL3 (≤20 cm RMSE) according to USGS specifications [52]. The expected accuracy of modern LiDAR (e.g., vertical RMSE < 10 cm; see Zhang et al. [53]) falls within the high-accuracy classes of the USGS and ASPRS standards. The orthometric height correction of the 1 m TIN-based LiDAR DEM (and DSM) is described in Escobar et al. [54]. The LiDAR-based DEM (as well as the DSM) was also included in the assessment in order to verify its accuracy as the control surface for LCP extraction.

2.2.2. Stage 2. Photogrammetric Processing of UAV-Based Imaging for DEM Generation

Structure from Motion (SfM) algorithms with Multi-View Stereo (MVS) were used to process the UAV-based images using LCPs. SfM-MVS photogrammetry is an automated image-based procedure that simultaneously determines 3D coordinates (structure) and camera motion, and is widely used in UAV image-based photogrammetry [15,27,55]. In contrast to conventional photogrammetry, SfM techniques allow the determination of the internal camera parameters, as well as the camera pose, by means of direct georeferencing [56]. In the present study, the two most common SfM-MVS software suites were used: Agisoft PhotoScan® Professional v.1.1.2 and Pix4Dmapper® Pro v.4.1.23 [15,56]. Both are commercial packages that use similar algorithms [57,58]; the motivation here was to estimate the vertical precision and to explore the differences in processing and in the user’s interaction with the software, which was a fundamental reason for choosing these SfM-MVS tools. The suites use modified versions of known algorithms, similar to the widely used SIFT, for feature matching and key-point extraction [59]. Exterior orientation is typically obtained from the geotagged data through a bundle block adjustment, solved by iterative nonlinear least squares of the Gauss–Newton type (i.e., within a Gauss–Markov estimation model [60]). A georeferenced sparse point cloud is obtained first, then re-optimized with external control points (absolute georeferencing), and later densified by custom pixel-matching MVS autocorrelation algorithms [61]. Details of the hardware used for image processing are given in Appendix A (Table A1). The purpose of using both SfM-MVS suites is to generate accurate UAV-derived DEMs (and DSMs) from the same input dataset of geotagged imagery and control points (LCPs). To this end, two strategies were tested: (a) PhotoScan used as a semi-automated processing chain; (b) Pix4D used as a fully automated process. The general photogrammetric workflow is shown in Figure 2 (Stage 2). During sparse cloud generation, no manual editing of outliers or mislocated points was performed. The set of 13 LCPs (Figure 3b) was tagged manually on the imagery dataset for the re-optimization of the initial photogrammetric solution. Processing settings for both strategies are shown in Table 2.
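As an illustration of how such a chain can be scripted, the sketch below outlines the semi-automated strategy through PhotoScan's Python API. Method and constant names follow the PhotoScan 1.x Professional API, but exact signatures vary between versions, and marker placement was done manually in this study; treat this as an assumed outline, not the authors' actual script.

```python
# Hedged outline of a scripted PhotoScan 1.x chain (names per the 1.x
# Professional Python API; signatures vary by version). Paths are placeholders.
import PhotoScan

doc = PhotoScan.Document()
chunk = doc.addChunk()
chunk.addPhotos(["flight1/IMG_0001.JPG", "flight1/IMG_0002.JPG"])  # geotagged

# Sparse reconstruction: feature matching plus bundle block adjustment
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# In this study the 13 LCPs were tagged manually as markers on the imagery;
# the block is then re-optimized against them (absolute georeferencing).
chunk.optimizeCameras()

# Densification by MVS matching, prior to ground classification and export
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
doc.save("riohacha_project.psz")
```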
Details for each strategy for DSM and DEM generation are described as follows:
PhotoScan: The workflow followed Agisoft’s protocol for DEM generation [62]. Based on the estimated camera positions, PhotoScan calculates depth information for each camera and combines it into a single dense point cloud [57]. Dense cloud points corresponding to permanent water bodies were edited out. The ground filtering algorithm applied was the two-step approach based on the adaptive TIN algorithm described by Axelsson [63]. This algorithm divides the data into square grid cells, in each of which a temporary TIN model is first created and then densified with new points by evaluating distance and angle parameters. The process is iterated, guided by user criteria, until all terrain points are classified, by adjusting the parameters shown under “Classifying dense cloud points” in Table 2. DSMs and DEMs were rasterized from mesh data; the final raster DEM was created from the dense points classified as “ground”.
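For readers unfamiliar with the adaptive TIN approach, the following simplified reimplementation sketches its two steps under a single, assumed parameter set (the grid size and the distance and angle thresholds are placeholders); it is illustrative only and not PhotoScan's internal code.

```python
# Simplified sketch of the two-step adaptive TIN ground filter (Axelsson [63]):
# seed with the lowest point per grid cell, then iteratively densify the TIN
# with points that stay within distance/angle thresholds of their facet.
import numpy as np
from scipy.spatial import Delaunay

def atin_ground_filter(pts, cell=20.0, max_dist=0.5, max_angle_deg=6.0):
    """pts: (n, 3) array of x, y, z points. Returns a boolean ground mask."""
    xy, z = pts[:, :2], pts[:, 2]
    # Step 1: seed points = lowest point in each cell x cell grid square
    keys = np.floor(xy / cell).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    ground = np.zeros(len(pts), bool)
    for g in range(inv.max() + 1):
        idx = np.where(inv == g)[0]
        ground[idx[np.argmin(z[idx])]] = True

    # Step 2: iterate, adding points close (offset and angle) to the TIN
    tan_max = np.tan(np.radians(max_angle_deg))
    while True:
        g_idx = np.flatnonzero(ground)
        gxy, gz = xy[g_idx], z[g_idx]
        tin = Delaunay(gxy)
        cand = np.flatnonzero(~ground)
        simplex = tin.find_simplex(xy[cand])
        added = False
        for i, s in zip(cand, simplex):
            if s < 0:
                continue  # point lies outside the current TIN hull
            tri = tin.simplices[s]
            # Facet plane z = a*x + b*y + c through its three vertices
            a, b, c = np.linalg.solve(np.c_[gxy[tri], np.ones(3)], gz[tri])
            dz = z[i] - (a * xy[i, 0] + b * xy[i, 1] + c)
            d = np.hypot(*(gxy[tri] - xy[i]).T).min()  # nearest-vertex dist
            if abs(dz) < max_dist and abs(dz) < tan_max * max(d, 1e-6):
                ground[i] = True
                added = True
        if not added:
            return ground
```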
Pix4Dmapper: The software’s graphical user interface guides the user through a three-step workflow [58]. The initial step is a fully automatic, iterative proprietary algorithm for bundle block adjustment and sparse point cloud generation (Step 1). A custom MVS matching algorithm is applied for sparse cloud densification (Step 2), and inverse distance weighting (IDW) interpolation is then used for DSM generation (Step 3). The raster DEM was generated by selecting the “Point cloud classification” parameter (Table 2). Terrain extraction and DEM generation are based on fully automatic, custom machine learning algorithms that classify the dense point cloud into typical semantic classes, e.g., bare earth, buildings, vegetation and roads [64]. The user has no prior control over the training of the classification algorithms.

2.2.3. Stage 3. UAV-Derived DEM Accuracy Assessment

To assess how well the UAV-derived DEMs obtained in Stage 2 represent the ground truth, a set of high-precision observations (n = 104) was employed. This dataset (2018) is based on traditional surveying (millimetre-range precision) made available by the city’s urban planning office (data from a sewage pipe replacement project [65]). The checkpoints are distributed across the south and north zones of the study area and limited to areas with a low probability of change (Figure 4), mainly road or street intersections (yellow lines).
This elevation dataset is taken as independent ground truth in order to compute statistics of the difference (ΔZ) between the observed checkpoint elevation (Z) and the elevations from the UAV-derived DSMs and DEMs (PhotoScan, Pix4D). The checkpoint elevation, as well as the UAV- and LiDAR-based model elevations, were extracted at the same horizontal checkpoint locations using the ArcGIS “Extract multi values to points” tool. To make the elevation comparison between models possible, the MAGNA-SIRGAS (EPSG 3116) coordinate reference system was applied to all datasets.
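A minimal open-source analogue of this extraction step is sketched below: each surface model is sampled at the checkpoint locations and the ΔZ columns are tabulated. The file names and the checkpoint table are placeholder assumptions.

```python
# Illustrative analogue of "Extract multi values to points": sample every
# surface model at the n = 104 checkpoint locations and tabulate
# dZ = Z_check - Z_model. File names and columns are placeholders.
import pandas as pd
import rasterio

check = pd.read_csv("checkpoints_2018.csv")      # columns: x, y, z (surveyed)
models = {"lidar_dem": "lidar_dem_2008.tif",
          "photoscan_dem": "photoscan_dem.tif",
          "pix4d_dem": "pix4d_dem.tif"}

coords = list(zip(check["x"], check["y"]))
for name, path in models.items():
    with rasterio.open(path) as src:
        z_model = [v[0] for v in src.sample(coords)]  # nearest-cell values
    check[f"dz_{name}"] = check["z"] - z_model        # positive: model below
```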
Two methods were used to assess DSM and DEM accuracy. The first is the root mean square error (RMSE), commonly used under the assumption that the set {ΔZ} is normally distributed and located over open areas not prone to outlier influence [42]. The second is a measure of accuracy based on robust estimators, as suggested by Höhle et al. [43,66]: the normalized median absolute deviation (NMAD):
$$\mathrm{NMAD} = 1.4826 \cdot \mathrm{median}_i\left(\left|\Delta Z_i - m_{\Delta Z}\right|\right), \tag{1}$$
where m_ΔZ is the median of the errors and ΔZ_i are the individual errors. NMAD is thus proportional to the median of the absolute deviations of the errors from the median error. It is a measure of accuracy that does not require a priori knowledge of the error distribution [67] and is especially useful in non-open and built-up terrain [43].
Basic statistics of ΔZ, such as the mean, standard deviation and median, were also computed, together with histograms and box plots; normality of the error distributions (ΔZ residuals) was tested with the Shapiro–Wilk test.
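The sketch below gathers these estimators in one helper, assuming the ΔZ residuals of one model are available as an array; the NMAD line implements Equation (1).

```python
# Accuracy statistics on the dZ residuals of one surface model: standard
# estimators, the robust NMAD of Equation (1), and the Shapiro-Wilk
# normality test used to decide between them.
import numpy as np
from scipy import stats

def accuracy_stats(dz):
    dz = np.asarray(dz, float)
    m = np.median(dz)
    return {
        "mean": dz.mean(),
        "std": dz.std(ddof=1),
        "median": m,
        "rmse": np.sqrt(np.mean(dz ** 2)),
        "nmad": 1.4826 * np.median(np.abs(dz - m)),  # Equation (1)
        "shapiro_p": stats.shapiro(dz)[1],           # p < 0.05: not normal
    }
```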
To compare our DEM performance with similar studies, the ratio of the assessed vertical error to the GSD was determined. This ratio is a rule of thumb in the UAV photogrammetry literature based on RMSE, with values ranging from approximately 1 to 3 times the GSD for correctly reconstructed models [27,68]. Assuming that the accuracy of the LiDAR reference from which the LCPs were extracted is ≤20 cm (Section 2.2.1), the expected absolute accuracy (RMSE) of the obtained models is within the range 20.6–30.9 cm (2 to 3 times the GSD). Likewise, expected accuracy ranges based on NMAD are given by Bühler [29] and Zazo [67]; the expected range is 11.5–23.7 cm (i.e., relative accuracy of 1.1 to 2.3 times the GSD).

2.2.4. Stage 4. Flood Estimations from UAV-Derived DEMs

Finally, to test the performance of flood estimation from the UAV-derived DEMs with respect to the LiDAR reference [16,20], two criteria were considered: first, a comparison based on the flood volume (V) and area (A) calculated for a historical extreme event; second, a comparison based on the similarity of the flood accumulation.
First: the percentage differences (errors) in volume and area (V_DIF, A_DIF) of the UAV-based DEMs were calculated with respect to the LiDAR-based DEM for a local extreme flood event, according to Equations (2) and (3) [11]:
$$V_{DIF} = \frac{\left|V_{PhotoScan,\,Pix4D} - V_{LiDAR}\right|}{V_{LiDAR}} \cdot 100\%, \tag{2}$$

$$A_{DIF} = \frac{\left|A_{PhotoScan,\,Pix4D} - A_{LiDAR}\right|}{A_{LiDAR}} \cdot 100\%, \tag{3}$$
Second: the other criterion tests the performance of flood estimation based on the similarity of areas and volumes between the LiDAR reference and the UAV-derived DEMs for each sub-basin. A and V were estimated for 10 depth-filling intervals (timesteps), defined as blocks of discrete filling simulations at a fixed interval in which the flood elevation in each DEM was increased until it reached the maximum of the extreme flood event. The 10 timesteps of each DEM were compared with the LiDAR reference using the Bray–Curtis (Sørensen) index [69]. These normalized values range from 0 to 1, where 0 represents exact agreement between two flood estimation datasets.
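As a worked illustration of this criterion, the snippet below computes the Bray–Curtis dissimilarity between two 10-timestep volume series; the numbers are invented placeholders, not the study's results.

```python
# Bray-Curtis dissimilarity between the 10-timestep flood-volume series of a
# UAV DEM and the LiDAR reference (0 = exact agreement, as in Section 2.2.4).
# Values are invented placeholders purely to show the call.
from scipy.spatial.distance import braycurtis

v_lidar = [120, 340, 610, 980, 1400, 1900, 2450, 3060, 3700, 4400]  # m^3
v_uav = [130, 355, 640, 1010, 1450, 1980, 2520, 3150, 3820, 4520]   # m^3

bc = braycurtis(v_lidar, v_uav)  # sum|u - v| / sum|u + v|
```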
The spatial modelling tool used to estimate flood volume and area for a given DEM was the r.lake.xy hydrological module of GRASS GIS [70]. This module fills the DEM with a water elevation (H) from a seed point. The seed point elevation (Z_sp) for a given flood elevation H (= Z_sp + h) is located at approximately the lowest point of each DEM. The flood depth (h) was obtained from hydrodynamic simulations of a historical flood (which occurred on 18 September 2011) performed with a MODCEL© model [44,71]. Flood volumes and areas were estimated for each DEM and sub-basin (Figure 1). The corresponding H inputs are given in Table 3.
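A hedged sketch of this filling step is shown below, driving r.lake (the GRASS 7 module documented in the manual cited in [70]) from Python inside a GRASS session; seed coordinates, depth steps and map names are placeholder assumptions.

```python
# Hedged sketch of the DEM-filling step with GRASS GIS's r.lake module; must
# run inside a GRASS session with the DEM already imported. The output raster
# holds water depth per flooded cell. All names and numbers are placeholders.
import grass.script as gs

def fill_and_measure(dem, seed_xy, water_level):
    """Fill `dem` from seed_xy up to water_level; return (area m2, volume m3)."""
    gs.run_command("r.lake", elevation=dem, water_level=water_level,
                   lake="flood_depth", coordinates=seed_xy, overwrite=True)
    # r.univar on the depth raster: n = flooded cells, sum = total depth
    stats = gs.parse_command("r.univar", map="flood_depth", flags="g")
    reg = gs.region()
    cell = float(reg["nsres"]) * float(reg["ewres"])
    return float(stats["n"]) * cell, float(stats["sum"]) * cell

z_seed = 1.8                                 # placeholder Zsp (m a.s.l.)
for h in [0.2 * k for k in range(1, 11)]:    # 10 discrete filling steps (m)
    area, volume = fill_and_measure("uav_dem", (1171234.5, 1768321.0),
                                    z_seed + h)
    # V_DIF/A_DIF (Equations (2) and (3)) follow by comparing these values
    # with the same run on the LiDAR reference DEM.
```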

3. Results

3.1. UAV-Derived DEM Accuracy Assessment

As can be seen in Figure 5, the errors obtained for the considered models (LiDAR, PhotoScan and Pix4D) are greater for the DSMs than for the DEMs, with the highest errors obtained with PhotoScan. This difference in dispersion may be due to reprojection errors (1.97 in PhotoScan vs. 0.19 in Pix4D) and camera optimization (1.90% in PhotoScan vs. 0.21% in Pix4D).
The absolute value of the mean error for the considered models is below 10 cm, except for the DSM obtained from PhotoScan, which reaches 16.5 cm (Figure 5 and Figure 6); the sign of the mean value indicates whether a model lies above or below the employed checkpoints. In addition, normality tests indicate that the errors do not follow a normal distribution (except for the PhotoScan DEM), which implies that RMSE-based estimators are not suitable in this case and that the obtained mean error is likely underestimated; it is kept here for comparison only. Given the lack of normality of the errors, robust estimators were applied (Table 4).
The median error for the different models is between 10 and 20 cm and positive in all cases, which implies that the models lie below the ground truth checkpoints. As with the mean error, the highest median is obtained for the PhotoScan DSM. The NMAD values are similar for all models, although slightly higher for the DSMs. The values obtained by the robust estimators are adequate for built-up areas, and their use is preferred over the standard estimators.
Figure 7 compares the absolute and relative accuracy of each UAV model and of the LiDAR reference against the expected ranges. For the DSMs, the accuracy of PhotoScan is lower than that of Pix4D but close to the expected range. For the DEMs, the values are similar for all models, and the accuracy of PhotoScan improves remarkably. The average accuracy values lie within the expected range and are close to those observed for the LiDAR reference. The differences in DEM accuracy are mostly explained by differences in the operator’s interaction with the SfM-MVS software.
The LiDAR DEM vertical accuracy of 18 cm falls within the ASPRS/USGS standards and is similar to the empirical assessment by Hodgson [72]. Furthermore, based on the accuracy of our results, we conclude that the applied UAV photogrammetry georeferencing method is valid for DEM generation.

3.2. Flood Estimations from UAV-Derived DEMs

Table 5 shows the flood estimates, measured by volume and area, for each sub-basin and DEM considered (LiDAR reference, PhotoScan and Pix4D SfM-MVS), together with the corresponding volume and area errors (Equations (2) and (3)). On average, the flooded volumes and areas obtained for the PhotoScan DEM are clearly closer to those of the LiDAR reference. Table 6 summarizes the similarity estimates for the progression of the flood; the results confirm the above, since the flood progression for the PhotoScan DEM is closer to the reference than Pix4D’s.
Figure 8 shows the flood maps of the different methods for sub-basins 704 and 603, and Figure 9 shows the corresponding time evolution of the flood. The flood maps for sub-basin 704 show that LiDAR shares features with both Pix4D and PhotoScan: whereas for Pix4D the flooding occurs mainly along the streets, for PhotoScan it forms a broad water surface.
Figure 9 shows that, for the Pix4D model in sub-basin 704, the volumes obtained at each timestep are closer to those of the reference, although slightly higher; for PhotoScan, the flooded volume estimates always remain below the reference. In sub-basin 603, where the pond is located, the flood extent of the Pix4D model is reduced to the water body, whereas PhotoScan’s is much closer to the reference.
The main observed differences between the DEMs are explained by the method of classifying and editing the dense point cloud. Models generated semi-automatically (PhotoScan), where operator intervention is important, produce results much closer to the LiDAR reference than those generated fully automatically (Pix4D). For the Pix4D DEMs, part of the infrastructure (buildings, for example) remains in the final DEM.

4. Discussion

Our results show that airborne LiDAR-derived control points are useful for obtaining accurate DEMs from UAV-based RGB imaging, with a vertical accuracy of about two times the pixel size of the input imagery. PhotoScan offers better interactivity, especially in DEM generation; although its DSM accuracy turned out slightly inferior to Pix4D’s, this was compensated when the DEM was generated. The UAV-based DEMs are in fact as accurate as LiDAR DEMs, in agreement with the work of Polat and Uysal [5]. In general, the DEMs obtained by the SfM-MVS processing chain are within the expected ranges reported in the literature (Figure 7). This is also confirmed when comparing the relative accuracies with those in Table A2 in Appendix A. Therefore, the input of control points from airborne LiDAR into SfM-MVS processing of fixed-wing UAV imaging is justified [35,36,37]. Furthermore, our results broaden UAV photogrammetry applications where the determination of control points is a burden, for example in emergency situations [23,24]. The method also enables the automatic integration shown in the literature [38,41,73,74], and makes quick, efficient DEM generation and multitemporal analysis possible, which is one of the main advantages of UAV platforms [75]. Finally, based on the trends in the abovementioned literature and on our results, the increasing offer of geospatial products is promising, especially for achieving UN Sustainable Development Goal 11, “Sustainable Cities and Communities”, by 2030 [2].
Important limitations for the replication of the described method are the current international regulations for civil UAV operation, in particular the flight altitude. By reducing the altitude, however, accuracies (in relative terms) similar to or better than those reported in the literature can be expected, at the expense of a smaller area coverage per flight (Appendix A, Table A2) and, therefore, longer flight campaigns [76]. This requires that the vertical accuracy of the LiDAR reference and the pixel size of the UAV images maintain a ratio of at least 2:1; for example, for a 5 cm pixel (~150 m AGL with the same equipment), the LiDAR must have a vertical accuracy of 10 cm or better. The ever-increasing availability of terrestrial LiDAR elevation data can become an additional source of control points for SfM-MVS UAV photogrammetry, as recently shown in the literature [39,77].
The flood estimation results, compared against LiDAR, show the usefulness of DEMs generated from SfM-MVS dense point clouds when the user is actively involved in their classification. These findings agree with those of Leitão et al. [20], Coveney et al. [16] and Schumann et al. [17], who based their comparisons on previous LiDAR reference surfaces. The flood analysis outcomes showed the suitability of SfM-MVS DEMs as a tool to support local flood studies in urban catchments or peri-urban floodplains [75]. Specifically, they provide useful input elevation data for 2D hydrodynamic modelling in urban areas, as suggested by Yalcin [21]. On the other hand, the estimation of extreme flood events makes it possible to investigate the generated DEM beyond the streets where precise altimetric information was available, which warrants a more general UAV DEM assessment.
While discrepancies between the UAV models and the reference LiDAR are evident, they stem from the DEM generation strategy, which is highly sensitive to the ground point filtering method and to dense cloud editing. The flood map outputs show that the Pix4D DEM tends to retain a certain residual urban fabric, owing to the deficient DEM extraction by the software (Figure 8). The inclusion of residual urban fabric in the Pix4D-derived DEM influences the flood extent through volume displacement; consequently, in the Pix4D DEM flooding tends to propagate along the streets, in contrast to the PhotoScan DEM, where a broad water surface is observed. This agrees with the conclusions of Shaad et al. [78] and Hashemi-Beni et al. [23], who showed that fully automated ground extraction algorithms generate worse flood estimates than manual or semi-automated classification. Thorough user knowledge of the area, together with the availability of additional field data (profiles and/or observations of flood depths), is essential to ensure the adequate use of ground filtering algorithms [79]. Discrepancies between flood assessments may also be due to the use of an outdated reference surface (e.g., the 2008 LiDAR reference vs. the 2016 UAV surveys in our case), particularly in low-lying areas, where terrain variations or changes in hydraulic infrastructure (e.g., in channels or near box culverts) play an important role in flood propagation. This suggests an opportunity to study terrain evolution with high-resolution UAV surveys [19].
This paper only addressed the processing of UAV raw data, but differences in resolution between the UAV DEMs and the reference LiDAR could affect the flood estimates [16,20]. Further work might focus on finding an optimal DEM resolution through resampling methods for flood comparisons between UAV and LiDAR data. Additionally, the elevation dataset of the presented case study could support the implementation of a local early warning system to estimate flood volumes and water distribution in the street micro-morphology.
The main advantage of UAVs is their flexibility in acquiring image data [20], especially for small to medium-sized areas (<1 km², up to 7 km²; see Table A2 in Appendix A). On the other hand, the major disadvantages of UAV technology are the limited coverage area (constrained by flight time, payload and weather conditions) and the data processing requirements. Piloted airborne platforms are better suited to surveys up to the national scale, while UAVs are naturally better suited to the urban sub-basin scale. From the data processing viewpoint, the larger the UAV project, the longer the processing time; processing accounts for about 45% of the UAV workflow effort [27], in contrast to airborne LiDAR, in which 3D data are obtained automatically.
The economic analysis by Jeunnette and Hart [80] concludes that piloted platforms have a lower cost of operation at 610 m AGL, and that UAVs become cost-competitive at approximately 305 m, a flight height close to that used in the present study (~325 m AGL). Yurtseven [76] confirms this at 350 m AGL, an altitude that also provides reasonable vertical accuracy and minimizes the potential for systematic errors such as the “doming effect” in the elevation products.
The increasing operational capabilities of civil micro UAVs will doubtless be integrated with other technologies, as shown here with LiDAR. However, current aviation safety regulations often limit research endeavours, especially because regulations seldom keep pace with technological development [47]. UAV operators, society, authorities and industry must therefore continue working together towards the continuous improvement of local regulations [81,82]. It is expected that innovations in UAV safety will allow the next generation of fully integrated platforms to enter the airspace [83].
Finally, the use of LCPs as proposed in this work, and in line with James et al. [37], may involve important limitations: (i) the availability of LiDAR data, since some regions have partial coverage or none at all; (ii) the resolution of the LiDAR data, which must be detailed enough to allow the operator to identify surface features; (iii) the loss of information due to interpolation from the raw point cloud to gridded data; and (iv) possible variations due to temporal differences between the LiDAR acquisition and the UAV surveys.

5. Conclusions

In this study, the contribution of existing airborne LiDAR altimetric data to DEM generation from UAV-based images (10.3 cm pixel size) for flood applications was investigated. Georeferencing based on LiDAR-derived control points was applied, the DEM accuracy was assessed, and the resulting DEMs were then used for flood estimation. Floods derived from each UAV DEM were compared with those from the LiDAR DEM reference.
The applied LCP georeferencing method yields DEMs with vertical accuracies comparable to those found in the literature, of approximately two times the pixel size of the input imagery. The DSM obtained with Pix4D is slightly more accurate than PhotoScan’s; however, the PhotoScan DEM is closer to the LiDAR reference and therefore more suitable for flood assessment applications (flooded volume and area estimation). In general, the feasibility of semi-automatically obtained UAV DEMs is confirmed. The complementarity demonstrated here between LiDAR and SfM-MVS photogrammetry provides terrain modellers and flood scientists with an alternative tool for georeferencing their UAV (e.g., fixed-wing) photogrammetric products, in particular when ground control point determination is challenging.
The expected applications of micro-UAV systems and the increasing supply of LiDAR datasets are promising for flood studies at the local level. Future work might focus on assessing the DEM accuracy of UAV flights georeferenced with detailed terrestrial LiDAR, and on testing the impact of spatial resolution on flood estimates.

Author Contributions

Conceptualization, J.R.E.V.; Formal analysis, J.R.E.V. and L.I.M.; Investigation, J.R.E.V.; Methodology, L.I.M.; Supervision, J.I.P.M.; Writing – original draft, J.R.E.V.; Writing – review & editing, L.I.M. and J.I.P.M.

Funding

This research was funded by a scholarship from the Universidad de La Guajira (Colombia) awarded to the first author (No. PFAN 019-2014).

Acknowledgments

Special thanks go to OpenStreetMap and EcoExplora consulting (donation of UAV surveys); CREACUA Foundation (flood data); Riohacha Municipality (Topographic surveying and LiDAR datasets); and to the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Appendix A

Table A1. Technical specs of the micro-UAV fixed-wing system and the hardware used for image processing.

| Aircraft | Specs |
| --- | --- |
| Model/wingspan | “eBee™ SenseFly drone mapping”/0.96 m (delta type) |
| Weight (incl. battery + sensor) | approx. 0.7 kg (micro-UAV weight criterion in Hassanalian et al. [12]) |
| Cruise speed/wind resistance | 40–90 km/h/up to 45 km/h |
| Maximum flight time/radio link | up to 50 min (depending on weather factors such as wind velocity)/up to 3 km |
| RGB camera |  |
| Model | Canon IXUS 127 with CMOS imaging sensor technology |
| Sensor resolution/shutter speed | ~16 megapixels (4608 × 3456)/1/2000 s |
| Focal length/sensor size | 4.3 mm (35 mm film equivalent: 24 mm)/6.16 × 4.62 mm |
| Processing hardware (CPU/GPU) | Intel® Core™ i7-6700HQ CPU @ 2.60 GHz, 32 GB RAM/Intel® HD Graphics 530 |
Table A2. Related studies reporting DSM and DEM accuracy (absolute and relative).

| Author | Platform | GSD (cm/pix) | Area (km²) | NMAD (cm) | NMAD:GSD ratio 1 |
| --- | --- | --- | --- | --- | --- |
| Present study | eBee™ | 10.3 | 5.37 | 18.5–25.5 | 1.8–2.5 (2.15) |
| Zazo et al., 2018 [67] | manned ultra-light motor | 2.6 | 0.78 | 6 | 2.3 |
| Brunier et al., 2016 [84] | manned Savannah ICP | 3.35 | No data | 6.96 | 2.1 |
| Bühler et al., 2015 [29] | utility aircraft | 25 | 145 | 28 | 1.1 |

| Author | Platform | GSD (cm/pix) | Area (km²) | RMSE (cm) | RMSE:GSD ratio 1 |
| --- | --- | --- | --- | --- | --- |
| Present study | eBee™ | 10.3 | 5.37 | 23.0–46.2 | 2.2–4.5 (3.4) |
| Hugenholtz et al., 2013 [85] | RQ-84Z AeroHawk | 10 | 1.95 | 29 | 2.9 |
| Hugenholtz et al., 2016 [31] | eBee™ RTK | 5.2 | 0.48 | 5.7–7.2 | 1.1–1.4 (1.2) |
| Roze et al., 2014 [48] | eBee™ RTK | 2.5–5.0 | 0.2 | 3.1–7.0 | 1.2–1.4 (1.3) |
| Benassi et al., 2017 [86] | eBee™ RTK | ~2.0 | 0.25 | 2.0–10.0 | 1.0–5.0 (3.0) |
| Leitão et al., 2016 [20] | eBee™ | 2.5–10.0 | 0.039 | No data | No data |
| Gindraux et al., 2017 [87] | eBee™ | 6 | 1.4–6.9 | 10–25 | 1.7–4.2 (2.9) |
| Immerzeel et al., 2014 [88] | Swinglet CAM™ | 3–5 | 3.75 | No data | No data |
| Yilmaz et al., 2018 [89] | Gatewing™ X100 | 5.0–6.0 3 | 2.0 | 5.0–33.0 | 1.0–5.5 (3.3) |
| Coveney et al., 2017 [16] | Swinglet CAM™ | 3.5 | 0.29 | 9 | 2.6 |
| Langhammer et al., 2017 [19] | Mikrokopter® Hexa 2 | 1.5 | ~0.14 3 | 2.5 | 1.7 |
| Gbenga et al., 2017 [90] | DJI™ Phantom 2 2 | 10.91 | 0.81 | 46.87 | 4.3 |

1 Average ratios are given in brackets (NMAD/RMSE:GSD). 2 Multirotor. 3 Estimated from flight planning.

References

1. Kundzewicz, Z.W.; Kanae, S.; Seneviratne, S.I.; Handmer, J.; Nicholls, N.; Peduzzi, P.; Mechler, R.; Bouwer, L.M.; Arnell, N.; Mach, K.; et al. Flood risk and climate change: Global and regional perspectives. Hydrol. Sci. J. 2014, 59, 1–28.
2. United Nations Sustainable Development Goals. Available online: https://www.un.org/sustainabledevelopment/cities/ (accessed on 29 May 2019).
3. Hafezi, M.; Sahin, O.; Stewart, R.A.; Mackey, B. Creating a novel multi-layered integrative climate change adaptation planning approach using a systematic literature review. Sustainability 2018, 10, 4100.
4. Wang, Y.; Chen, A.S.; Fu, G.; Djordjević, S.; Zhang, C.; Savić, D.A. An integrated framework for high-resolution urban flood modelling considering multiple information sources and urban features. Environ. Model. Softw. 2018, 107, 85–95.
5. Polat, N.; Uysal, M. An experimental analysis of digital elevation models generated with Lidar data and UAV photogrammetry. J. Indian Soc. Remote Sens. 2018, 46, 1135–1142.
6. Chen, Z.; Gao, B.; Devereux, B. State-of-the-art: DTM generation using airborne LIDAR data. Sensors 2017, 17, 150.
7. Liu, X. Airborne LiDAR for DEM generation: Some critical issues. Prog. Phys. Geogr. 2008, 32, 31–49.
8. Wedajo, G.K. LiDAR DEM data for flood mapping and assessment; opportunities and challenges: A review. J. Remote Sens. GIS 2017, 6, 2015–2018.
9. Arrighi, C.; Campo, L. Effects of digital terrain model uncertainties on high-resolution urban flood damage assessment. J. Flood Risk Manag. 2019, e12530.
10. Bermúdez, M.; Zischg, A.P. Sensitivity of flood loss estimates to building representation and flow depth attribution methods in micro-scale flood modelling. Nat. Hazards 2018, 92, 1633–1648.
11. Laks, I.; Sojka, M.; Walczak, Z.; Wróżyński, R. Possibilities of using low quality digital elevation models of floodplains in hydraulic numerical models. Water 2017, 9, 283.
12. Hassanalian, M.; Abdelkefi, A. Classifications, applications, and design challenges of drones: A review. Prog. Aerosp. Sci. 2017, 91, 99–131.
13. Fonstad, M.A.; Dietrich, J.T.; Courville, B.C.; Jensen, J.L.; Carbonneau, P.E. Topographic structure from motion: A new development in photogrammetric measurement. Earth Surf. Process. Landforms 2013, 38, 421–430.
14. Singh, K.K.; Frazier, A.E. A meta-analysis and review of unmanned aircraft system (UAS) imagery for terrestrial applications. Int. J. Remote Sens. 2018, 39, 5078–5098.
15. Remondino, F.; Nocerino, E.; Toschi, I.; Menna, F. A critical review of automated photogrammetric processing of large datasets. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2017, 42, 591–599.
16. Coveney, S.; Roberts, K. Lightweight UAV digital elevation models and orthoimagery for environmental applications: Data accuracy evaluation and potential for river flood risk modelling. Int. J. Remote Sens. 2017, 38, 3159–3180.
17. Schumann, G.J.P.; Muhlhausen, J.; Andreadis, K.M. Rapid mapping of small-scale river-floodplain environments using UAV SfM supports classical theory. Remote Sens. 2019, 11, 982.
18. Izumida, A.; Uchiyama, S.; Sugai, T. Application of UAV-SfM photogrammetry and aerial LiDAR to a disastrous flood: Multitemporal topographic measurement of a newly formed crevasse splay of the Kinu River, central Japan. Nat. Hazards Earth Syst. Sci. 2017, 17, 1505.
19. Langhammer, J.; Bernsteinová, J.; Mirijovský, J. Building a high-precision 2D hydrodynamic flood model using UAV photogrammetry and sensor network monitoring. Water 2017, 9, 861.
20. Leitão, J.P.; Moy de Vitry, M.; Scheidegger, A.; Rieckermann, J. Assessing the quality of digital elevation models obtained from mini unmanned aerial vehicles for overland flow modelling in urban areas. Hydrol. Earth Syst. Sci. 2016, 20, 1637–1653.
21. Yalcin, E. Two-dimensional hydrodynamic modelling for urban flood risk assessment using unmanned aerial vehicle imagery: A case study of Kirsehir, Turkey. J. Flood Risk Manag. 2018, e12499.
22. Rinaldi, P.; Larrabide, I.; D’Amato, J.P. Drone based DSM reconstruction for flood simulations in small areas: A pilot study. In World Conference on Information Systems and Technologies; Springer: Cham, Switzerland, 2019; pp. 758–764.
23. Hashemi-Beni, L.; Jones, J.; Thompson, G.; Johnson, C.; Gebrehiwot, A. Challenges and opportunities for UAV-based digital elevation model generation for flood-risk management: A case of Princeville, North Carolina. Sensors 2018, 18, 3843.
24. Boccardo, P.; Chiabrando, F.; Dutto, F.; Tonolo, F.G.; Lingua, A. UAV deployment exercise for mapping purposes: Evaluation of emergency response applications. Sensors 2015, 15, 15717–15737.
25. Şerban, G.; Rus, I.; Vele, D.; Breţcan, P.; Alexe, M.; Petrea, D. Flood-prone area delimitation using UAV technology, in the areas hard-to-reach for classic aircrafts: Case study in the north-east of Apuseni Mountains, Transylvania. Nat. Hazards 2016, 82, 1817–1832.
26. Manfreda, S.; Herban, S.; Arranz Justel, J.; Perks, M.; Mullerova, J.; Dvorak, P.; Vuono, P. Assessing the accuracy of digital surface models derived from optical imagery acquired with unmanned aerial systems. Drones 2019, 3, 15.
27. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15.
28. Draeyer, B.; Strecha, C. Pix4D White Paper-How Accurate Are UAV Surveying Methods; Pix4D White Paper: Lausanne, Switzerland, 2014.
29. Bühler, Y.; Marty, M.; Egli, L.; Veitinger, J.; Jonas, T.; Thee, P.; Ginzler, C. Snow depth mapping in high-alpine catchments using digital photogrammetry. Cryosphere 2015, 9, 229–243.
30. Carbonneau, P.E.; Dietrich, J.T. Cost-effective non-metric photogrammetry from consumer-grade sUAS: Implications for direct georeferencing of structure from motion photogrammetry. Earth Surf. Process. Landforms 2017, 42, 473–486.
31. Hugenholtz, C.; Brown, O.; Walker, J.; Barchyn, T.; Nesbit, P.; Kucharczyk, M.; Myshak, S. Spatial accuracy of UAV-derived orthoimagery and topography: Comparing photogrammetric models processed with direct geo-referencing and ground control points. Geomatica 2016, 70, 21–30.
32. James, M.R.; Robson, S.; D’Oleire-Oltmanns, S.; Niethammer, U. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment. Geomorphology 2017, 280, 51–66.
33. James, M.R.; Robson, S.; Smith, M.W. 3-D uncertainty-based topographic change detection with structure-from-motion photogrammetry: Precision maps for ground control and directly georeferenced surveys. Earth Surf. Process. Landforms 2017, 42, 1769–1788.
34. Tonkin, T.N.; Midgley, N.G. Ground-control networks for image based surface reconstruction: An investigation of optimum survey designs using UAV derived imagery and structure-from-motion photogrammetry. Remote Sens. 2016, 8, 786.
35. Liu, X.; Zhang, Z.; Peterson, J.; Chandra, S. LiDAR-derived high quality ground control information and DEM for image orthorectification. Geoinformatica 2007, 11, 37–53.
36. Mitishita, E.; Habib, A.; Centeno, J.; Machado, A.; Lay, J.; Wong, C. Photogrammetric and Lidar data integration using the centroid of rectangular roof as a control point. Photogramm. Rec. 2008, 23, 19–35.
37. James, T.D.; Murray, T.; Barrand, N.E.; Barr, S.L. Extracting photogrammetric ground control from LiDAR DEMs for change detection. Photogramm. Rec. 2006, 21, 312–328.
38. Gneeniss, A.S.; Mills, J.P.; Miller, P.E. Reference Lidar surfaces for enhanced aerial triangulation and camera calibration. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 1, 111–116.
39. Gruen, A.; Huang, X.; Qin, R.; Du, T.; Fang, W.; Boavida, J.; Oliveira, A. Joint processing of UAV imagery and terrestrial mobile mapping system data for very high resolution city modeling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-1/W2, 4–6.
40. Persad, R.A.; Armenakis, C. Alignment of point cloud DSMs from TLS and UAV platforms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 369–373.
41. Persad, R.A.; Armenakis, C.; Hopkinson, C.; Brisco, B. Automatic registration of 3-D point clouds from UAS and airborne LiDAR platforms. J. Unmanned Veh. Syst. 2017, 5, 159–177.
42. Abdullah, Q.; Maune, D.; Smith, D.; Heidemann, H.K. New standard for new era: Overview of the 2015 ASPRS positional accuracy standards for digital geospatial data. Photogramm. Eng. Remote Sens. 2015, 81, 173–176.
43. Höhle, J.; Höhle, M. Accuracy assessment of digital elevation models by means of robust statistical methods. ISPRS J. Photogramm. Remote Sens. 2009, 64, 398–406.
44. Nardini, A.; Miguez, M.G. An integrated plan to sustainably enable the city of Riohacha (Colombia) to cope with increasing urban flooding, while improving its environmental setting. Sustainability 2016, 8, 198.
45. Nardini, A.; Cardenas Mercado, L.; Perez Montiel, J. MODCEL vs. IBER: A comparison of flooding models in Riohacha, a coastal town of La Guajira, Colombia. Contemp. Eng. Sci. 2018, 11, 3253–3266.
46. OpenStreetMap Colombia. Mapatón por La Guajira—OpenStreetMap Colombia. Available online: https://openstreetmapcolombia.github.io/2016/03/23/reporte/ (accessed on 5 February 2019).
47. Stöcker, C.; Bennett, R.; Nex, F.; Gerke, M.; Zevenbergen, J. Review of the current state of UAV regulations. Remote Sens. 2017, 9, 459.
48. Roze, A.; Zufferey, J.C.; Beyeler, A.; Mcclellan, A. eBee RTK Accuracy Assessment; White Paper: Lausanne, Switzerland, 2014.
49. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle. Meas. J. Int. Meas. Confed. 2017, 98, 221–227.
50. Corbley, K. Merrick extends life of LiDAR sensor by modifying flight operations. Leica ALS40 contributes to Colombian market and history. LiDAR Mag. 2014, 4, 6.
51. Afanador Franco, F.; Orozco Quintero, F.J.; Gómez Mojica, J.C.; Carvajal Díaz, A.F. Digital orthophotography and LIDAR data to control and management of Tierra Bomba island littoral, Colombian Caribbean. Boletín Científico CIOH 2008, 26, 86–103.
52. Heidemann, H.K. Lidar base specification (ver. 1.3, February 2018). In U.S. Geological Survey Techniques and Methods; Geological Survey: Reston, VA, USA, 2018; Chapter B4.
53. Zhang, K.; Gann, D.; Ross, M.; Biswas, H.; Li, Y.; Rhome, J. Comparison of TanDEM-X DEM with LiDAR data for accuracy assessment in a coastal urban area. Remote Sens. 2019, 11, 876.
54. Escobar-Villanueva, J.; Nardini, A.; Iglesias-Martínez, L. Assessment of LiDAR topography in modeling urban flooding with MODCEL©. Applied to the coastal city of Riohacha, La Guajira (Colombian Caribbean). In Proceedings of the XVI Congreso de la Asociación Española de Teledetección, Sevilla, Spain, 21–23 October 2015; pp. 368–383.
55. Granshaw, S.I. Photogrammetric terminology: Third edition. Photogramm. Rec. 2016, 31, 210–252.
56. Turner, D.; Lucieer, A.; Watson, C. An automated technique for generating georectified mosaics. Remote Sens. 2012, 4, 1392–1410.
57. Agisoft LLC. Agisoft PhotoScan User Manual—Professional Edition; Agisoft LLC: St. Petersburg, Russia, 2016; Version 1.2.
58. Pix4D SA. Pix4Dmapper 4.1 User Manual; Pix4D SA: Lausanne, Switzerland, 2017.
59. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
60. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In Vision Algorithms: Theory and Practice; Springer: Berlin/Heidelberg, Germany, 2000; pp. 298–372.
61. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogramm. Rec. 2014, 29, 144–166.
62. Agisoft LLC. Orthophoto and DEM Generation with Agisoft PhotoScan Pro 1.0.0; Agisoft LLC: St. Petersburg, Russia, 2013.
63. Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. 2000, 23, 110–117.
64. Becker, C.; Häni, N.; Rosinskaya, E.; D’Angelo, E.; Strecha, C. Classification of aerial photogrammetric 3D point clouds. Photogramm. Eng. Remote Sens. 2018, 84, 287–295.
65. Planning Department—Municipality of Riohacha (Colombia). Rehabilitation of Sewerage Pipe Networks for the “Barrio Arriba” of the Municipality of Riohacha; Planning Department—Municipality of Riohacha: Riohacha, Colombia, 2018.
66. Alidoost, F.; Samadzadegan, F. Statistical evaluation of fitting accuracy of global and local digital elevation models in Iran. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-1/W3, 19–24.
67. Zazo, S.; Rodríguez-Gonzálvez, P.; Molina, J.L.; González-Aguilera, D.; Agudelo-Ruiz, C.A.; Hernández-López, D. Flood hazard assessment supported by reduced cost aerial precision photogrammetry. Remote Sens. 2018, 10, 1566.
68. Ruzgiene, B.; Berteška, T.; Gečyte, S.; Jakubauskiene, E.; Aksamitauskas, V.Č. The surface modelling based on UAV photogrammetry and qualitative estimation. Meas. J. Int. Meas. Confed. 2015, 73, 619–627.
69. Teknomo, K. Similarity Measurement. Available online: http://people.revoledu.com/kardi/tutorial/Similarity/BrayCurtisDistance.html (accessed on 24 June 2018).
70. Nartiss, M. r.Lake.xy Module. Available online: https://grass.osgeo.org/grass74/manuals/r.lake.html (accessed on 9 July 2018).
71. Miguez, M.G.; Battemarco, B.P.; De Sousa, M.M.; Rezende, O.M.; Veról, A.P.; Gusmaroli, G. Urban flood simulation using MODCEL-an alternative quasi-2D conceptual model. Water 2017, 9, 445.
72. Hodgson, M.; Bresnahan, P. Accuracy of airborne LIDAR derived elevation: Empirical assessment and error budget. Photogramm. Eng. Remote Sens. 2004, 70, 331.
73. Huang, R.; Zheng, S.; Hu, K. Registration of aerial optical images with LiDAR data using the closest point principle and collinearity equations. Sensors 2018, 18, 1770.
74. Zhang, J.; Lin, X. Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion 2017, 8, 1–31.
75. Giordan, D.; Hayakawa, Y.; Nex, F.; Remondino, F.; Tarolli, P. Review article: The use of remotely piloted aircraft systems (RPASs) for natural hazards monitoring and management. Nat. Hazards Earth Syst. Sci. 2018, 4, 1079–1096.
76. Yurtseven, H. Comparison of GNSS-, TLS- and different altitude UAV-generated datasets on the basis of spatial differences. ISPRS Int. J. Geo-Inf. 2019, 8, 175.
77. Park, J.; Kim, P.; Cho, Y.K.; Kang, J. Framework for automated registration of UAV and UGV point clouds using local features in images. Autom. Constr. 2019, 98, 175–182.
78. Shaad, K.; Ninsalam, Y.; Padawangi, R.; Burlando, P. Towards high resolution and cost-effective terrain mapping for urban hydrodynamic modelling in densely settled river-corridors. Sustain. Cities Soc. 2016, 20, 168–179.
79. Šiljeg, A.; Barada, M.; Marić, I.; Roland, V. The effect of user-defined parameters on DTM accuracy—development of a hybrid model. Appl. Geomat. 2019, 11, 81–96.
80. Jeunnette, M.N.; Hart, D.P. Remote sensing for developing world agriculture: Opportunities and areas for technical development. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XVIII, Edinburgh, UK, 26–29 September 2016; Volume 9998, pp. 26–29.
81. SESAR Providing Operations of Drones with Initial Unmanned Aircraft System Traffic Management (PODIUM). Available online: https://vimeo.com/259880175 (accessed on 31 May 2019).
82. Wild, G.; Murray, J.; Baxter, G. Exploring civil drone accidents and incidents to help prevent potential air disasters. Aerospace 2016, 3, 22.
83. Altawy, R.; Youssef, A.M. Security, privacy, and safety aspects of civilian drones. ACM Trans. Cyber-Phys. Syst. 2016, 1, 1–25.
84. Brunier, G.; Fleury, J.; Anthony, E.J.; Gardel, A.; Dussouillez, P. Close-range airborne Structure-from-Motion photogrammetry for high-resolution beach morphometric surveys: Examples from an embayed rotating beach. Geomorphology 2016, 261, 76–88.
85. Hugenholtz, C.H.; Whitehead, K.; Brown, O.W.; Barchyn, T.E.; Moorman, B.J.; LeClair, A.; Riddell, K.; Hamilton, T. Geomorphological mapping with a small unmanned aircraft system (sUAS): Feature detection and accuracy assessment of a photogrammetrically-derived digital terrain model. Geomorphology 2013, 194, 16–24.
86. Benassi, F.; Dall’Asta, E.; Diotri, F.; Forlani, G.; Morra di Cella, U.; Roncella, R.; Santise, M. Testing accuracy and repeatability of UAV blocks oriented with GNSS-supported aerial triangulation. Remote Sens. 2017, 9, 172.
87. Gindraux, S.; Boesch, R.; Farinotti, D. Accuracy assessment of digital surface models from Unmanned Aerial Vehicles’ imagery on glaciers. Remote Sens. 2017, 9, 186.
88. Immerzeel, W.W.; Kraaijenbrink, P.D.A.; Shea, J.M.; Shrestha, A.B.; Pellicciotti, F.; Bierkens, M.F.P.; De Jong, S.M. High-resolution monitoring of Himalayan glacier dynamics using unmanned aerial vehicles. Remote Sens. Environ. 2014, 150, 93–103.
  89. Yilmaz, V.; Konakoglu, B.; Serifoglu, C.; Gungor, O.; Gökalp, E. Image classification-based ground filtering of point clouds extracted from UAV-based aerial photos. Geocarto Int. 2018, 33, 310–320. [Google Scholar] [CrossRef]
  90. Gbenga Ajayi, O.; Palmer, M.; Salubi, A.A. Modelling farmland topography for suitable site selection of dam construction using unmanned aerial vehicle (UAV) photogrammetry. Remote Sens. Appl. Soc. Environ. 2018, 11, 220–230. [Google Scholar]
Figure 1. Map of the selected study area and location of control points for DEM generation. Hydrologic sub-basins (red dashed polygons) and their code numbers (bold type) were delineated from the local flood simulations performed by Nardini and Miguez [44].
Figure 2. General outline of the methodology.
Figure 3. General methodology for ground control point (GCP) and LiDAR-derived control point (LCP) determination (the applied method): (a) Leica TCR 403 total station used for the GCP survey; (b) LiDAR DEM altimetric reference (displayed in shaded relief) with the LCPs placed across the study area.
Figure 4. Checkpoint locations for DEM accuracy assessment.
Figure 5. Box plots for comparison of ΔZ error for each model (No. of checkpoints = 104).
Figure 6. Histograms of the ΔZ values (n = 587) for each model. Superimposed on each histogram is the expected normal distribution curve, with mean and RMSE estimated from all the data (red). Shapiro-Wilk test results are also shown (if the p-value ≥ 0.05, the ΔZ values can be considered normally distributed).
Figure 7. Accuracy (absolute and relative) of the UAV-derived models and comparison with LiDAR (No. of checkpoints = 104). Relative accuracy ratios are shown in brackets. USGS/ASPRS accuracy standards (left) and the expected accuracy (right) are shown as horizontal dashed lines.
Figure 8. Flood extent corresponding to each DEM in sub-basins 704 and 603.
Figure 9. Similarity of the flood progression curves compared with the LiDAR reference for sub-basins 704 (a) and 603 (b). The Bray-Curtis index is shown in brackets. Volume is given in thousands of cubic meters (10³ m³).
Table 1. Summary of technical parameters for the UAV-based imaging surveys on the study area.
Imagery acquisition date: 17/02/2016 (6 to 10 a.m.); cloudy day with low wind velocity [46].
Flight plan area: 5.37 km² (537 ha). Flight 1: 0.9 km²; flight 2: 2.2 km²; flight 3: 2.2 km² (see Figure 1).
Ground sample distance (GSD): ~10.3 cm/pixel (single-image footprint on the ground ~473 m × 355 m).
Flight height: ~325 m AGL (above ground level), as reported by the on-board GPS flight log.
Overlap/grid pattern/strips: 80% (longitudinal)/simple grid [24]/19 overlapping strips captured at nadir.
Number of flights/images: 3 flights (one of 8 min and two of 20 min each); approximately 467 images acquired.
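As an aside on Table 1, the reported GSD follows from the flight height and the camera geometry (GSD = pixel pitch × height / focal length). The sketch below is a minimal illustration with hypothetical sensor parameters (the camera's focal length and pixel pitch are not given in this excerpt); it only shows that a ~325 m AGL flight can plausibly yield the reported ~10.3 cm/pixel:

```python
# Minimal sketch of the GSD/flight-height relationship. The focal length and
# pixel pitch below are hypothetical values, not the survey camera's specs.
def ground_sample_distance(height_m: float, focal_mm: float, pitch_um: float) -> float:
    """Return GSD in cm/pixel: GSD = pixel_pitch * height / focal_length."""
    return (pitch_um * 1e-6) * height_m / (focal_mm * 1e-3) * 100.0

# ~325 m AGL with an assumed 16 mm lens and 5.1 um pixels -> ~10.4 cm/pixel,
# close to the ~10.3 cm/pixel reported in Table 1.
print(round(ground_sample_distance(325.0, 16.0, 5.1), 1))
```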
Table 2. Settings for the SfM-MVS processing chain of the UAV-based imagery with the applied LCPs.
PhotoScan (semi-automatic), parameter: selected/value 1,2,3
Align photos. Accuracy: "High"; Pair selection: "Reference"; Reprojection error (pix) 1: 1.97; Control point accuracy (pix) 2: 0.16; Camera optimization (%) 3: 1.90; Sparse cloud: 0.04 points/m².
Build dense cloud. Quality: "Medium"; Depth filtering: "Mild"; Dense cloud: 6.4 points/m².
Classify dense cloud points ("Ground points"). Cell size: 40 m; Max distance: 0.3 m; Max angle: 5 deg; Ground cloud: 2.09 points/m².
Build mesh. Source type: "Height field (2.5D)"; point classes: surface mesh from "Created (Never classified)", terrain mesh from "Ground".
Build DEM (from surface mesh). Source data: "mesh"; Interpolation: "Enabled (default)"; Resolution: default value (40.84 × 40.84 cm).
Build DEM (from terrain mesh). Source data: "mesh"; Interpolation: "Enabled (default)"; Resolution: default value (40.83 × 40.83 cm).
Pix4Dmapper (automatic), parameter: selected/value 1,2,3
Initial processing. Keypoint image scale: "Full"; Matching image pairs: "Aerial Grid or Corridor"; Reprojection error (pix) 1: 0.19; Control point accuracy (pix) 2: 0.65; Camera optimization (%) 3: 0.21; Sparse cloud: 0.12 points/m².
Point cloud densification. Point density: "Optimal"; Min. number of matches: 3; Dense cloud: 6.6 points/m².
Point cloud classification. "Classify Point Cloud"; Ground cloud: 3.51 points/m².
Raster DSM. Method: "Inv. Dist. Weighting"; DSM filters: all checked; Resolution: "Automatic" (1 × GSD; 10.35 × 10.35 cm).
Additional outputs. Raster DTM 4: checked; Resolution: "Automatic" (5 × GSD; 51.73 × 51.73 cm).
1 Quality indicator used as a basis for 3D point reconstruction during bundle block adjustment; values ≤ 1 pix are better. 2 Quality indicator (pix) of control points manually tagged on imagery (mean value for the 13 LCPs); values ≤ 1 pix are better (the error is less than the average GSD). 3 Relative difference (%) between initial and optimized internal camera parameter values (focal distance, pixel); lower values are better. 4 DTM (digital terrain model).
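The three PhotoScan ground-classification parameters in Table 2 (cell size, max distance, max angle) drive a cell-based filter: the lowest point in each cell seeds the ground class, and nearby points are accepted while they stay within the distance and angle tolerances. The following is a simplified sketch of that logic, not Agisoft's actual implementation:

```python
import numpy as np

# Simplified cell-based ground filtering controlled by the three PhotoScan
# parameters in Table 2 (cell size, max distance, max angle). Illustrative
# approximation only; not the vendor's algorithm.
def classify_ground(points: np.ndarray, cell_size: float = 40.0,
                    max_distance: float = 0.3, max_angle_deg: float = 5.0) -> np.ndarray:
    """points: (N, 3) array of x, y, z. Returns a boolean ground mask."""
    ij = np.floor(points[:, :2] / cell_size).astype(int)   # grid cell per point
    ground = np.zeros(len(points), dtype=bool)
    for cell in np.unique(ij, axis=0):
        idx = np.where((ij == cell).all(axis=1))[0]
        lowest = idx[np.argmin(points[idx, 2])]            # seed: lowest point in cell
        dz = points[idx, 2] - points[lowest, 2]            # height above the seed
        dxy = np.linalg.norm(points[idx, :2] - points[lowest, :2], axis=1)
        angle = np.degrees(np.arctan2(dz, np.maximum(dxy, 1e-9)))
        ground[idx] = (dz <= max_distance) & (angle <= max_angle_deg)
    return ground
```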
Table 3. H (m) input used for the flood event estimation, by sub-basin and DEM.
Sub-basin: comment 1; H (m) for LiDAR, PhotoScan and Pix4D:
704: constituted entirely of urban cover (0.32 km²); h = 1.22. H: 1.48, 1.35, 1.80.
705: also constituted of urban cover (0.19 km²); h = 0.29. H: 1.32, 0.79, 1.28.
703: adjoining the outlet of sub-basin 603 (0.02 km²); h = 1.78. H: 1.87, 1.74, 2.02.
603: pond, wetland and urban cover (0.29 km²); h = 1.63. H: 1.85, 2.01, 1.63.
506: adjoining the inlet of sub-basin 603 (0.1 km²); h = 1.12. H: 3.15, 3.10, 3.78.
1 Details of the simulated flood depth h (m) and of the sub-basin delineation and code numbers are given in Nardini and Miguez [44].
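The H values in Table 3 feed a seeded, level-pool flood fill of the kind implemented by the r.lake module [70]: cells below the water surface elevation that are hydrologically connected to a seed cell are flooded. A minimal, self-contained sketch (the variable names, toy DEM and unit cell size are assumptions, not the GRASS implementation):

```python
import numpy as np
from scipy import ndimage

# Level-pool flood fill: flood cells below water_level that are connected to
# the seed cell; return flooded volume (m3) and area (m2).
def flood_fill(dem: np.ndarray, water_level: float, seed: tuple,
               cell_area: float = 1.0):
    wet = dem < water_level                  # candidate flooded cells
    labels, _ = ndimage.label(wet)           # connected wet regions
    lake = labels == labels[seed]            # keep only the seed's region
    depth = np.where(lake, water_level - dem, 0.0)
    return depth.sum() * cell_area, lake.sum() * cell_area

# Toy 3 x 3 DEM; the seed cell must itself lie below the water level.
dem = np.array([[2.0, 1.2, 1.0], [1.1, 0.8, 0.9], [1.5, 0.7, 2.5]])
volume, area = flood_fill(dem, water_level=1.48, seed=(1, 1))
```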
Table 4. Accuracy measures of the LiDAR-based and UAV-derived models (No. of checkpoints = 104).
Accuracy estimators by assumption of the ΔZ distribution (DSM: LiDAR, PhotoScan, Pix4D; DEM: LiDAR, PhotoScan, Pix4D):
Normal:
Mean (m): DSM 0.010, 0.165, −0.080; DEM −0.089, −0.073, −0.093
Standard deviation, SD (m): DSM 0.373, 0.433, 0.284; DEM 0.166, 0.219, 0.262
RMSE 1 (m): DSM 0.371, 0.462, 0.294; DEM 0.187, 0.230, 0.277
RMSE:GSD 2 ratio: DSM -, 4.5, 2.9; DEM -, 2.2, 2.7
Non-normal (robust method):
Median (m): DSM 0.125, 0.172, 0.142; DEM 0.123, 0.131, 0.135
NMAD 3 (m): DSM 0.185, 0.255, 0.211; DEM 0.181, 0.185, 0.200
NMAD:GSD 2 ratio: DSM -, 2.5, 2.0; DEM -, 1.8, 1.9
1 Root mean square error. 2 GSD (ground sample distance: 10.3 cm/pixel). 3 Normalized median absolute deviation.
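Table 4 reports two families of estimators because the appropriate one depends on the ΔZ distribution (Figure 6): under normality, the mean/SD/RMSE set applies; otherwise the robust median/NMAD pair is preferred, with NMAD = 1.4826 × median(|ΔZ − median(ΔZ)|). A short sketch of this decision, using synthetic errors for illustration:

```python
import numpy as np
from scipy import stats

# Checkpoint errors dz = model elevation - reference elevation.
# Synthetic values for illustration only (mean -0.07 m, SD 0.22 m, n = 104).
dz = np.random.default_rng(0).normal(-0.07, 0.22, 104)

rmse = np.sqrt(np.mean(dz**2))                         # normal assumption
nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))  # robust estimator
_, p_value = stats.shapiro(dz)                         # normality test (Figure 6)
estimator = "RMSE" if p_value >= 0.05 else "NMAD"
print(f"RMSE = {rmse:.3f} m, NMAD = {nmad:.3f} m, preferred: {estimator}")
```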
Table 5. Comparison of the local flood event estimations obtained from each DEM.
Values are V (m³) and A (m²); percentages in parentheses are differences from LiDAR 2.
704 (Urban). V: LiDAR 68,072; Pix4D 61,728 (9.3%); PhotoScan 53,699 (21.1%). A: LiDAR 166,385; Pix4D 114,940 (30.9%); PhotoScan 141,985 (14.7%).
705 (Urban). V: LiDAR 4,065; Pix4D 185 (95.4%); PhotoScan 126 (96.9%). A: LiDAR 16,682; Pix4D 941 (94.4%); PhotoScan 758 (95.5%).
703 (Outlet of 603). V: LiDAR 13,986; Pix4D 4,294 (69.3%); PhotoScan 7,638 (45.4%). A: LiDAR 15,646; Pix4D 5,756 (63.2%); PhotoScan 11,793 (24.6%).
603 (Pond + urban). V: LiDAR 114,311; Pix4D 51,103 (55.3%); PhotoScan 106,135 (7.2%). A: LiDAR 155,245; Pix4D 21,015 (86.5%); PhotoScan 146,596 (5.6%).
506 (Inlet of 603). V: LiDAR 2,589; Pix4D 5,690 (119.8%); PhotoScan 7,180 (177.3%). A: LiDAR 7,878; Pix4D 11,206 (42.2%); PhotoScan 14,514 (84.2%).
Total (Σ) 1. V: LiDAR 203,023; Pix4D 123,000 (absolute difference 80,023); PhotoScan 174,778 (absolute difference 28,245). A: LiDAR 361,836; Pix4D 153,858 (absolute difference 207,978); PhotoScan 315,646 (absolute difference 46,190).
Overall difference (%) 2. V: Pix4D 42.5; PhotoScan 18.4. A: Pix4D 59.3; PhotoScan 16.4.
1 Σ = total V and A, corresponding to the sum over the flooded sub-basins. 2 Computed with Equations (2) and (3).
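Equations (2) and (3) are not reproduced in this excerpt; the sketch below assumes the overall difference is the sum of per-sub-basin absolute deviations relative to the total LiDAR reference, an interpretation that reproduces the 42.5% overall volume difference reported for Pix4D in Table 5:

```python
# Assumed form of Equations (2) and (3): overall difference (%) as the sum of
# per-sub-basin absolute deviations relative to the total LiDAR reference.
lidar_v = [68072, 4065, 13986, 114311, 2589]   # Table 5 volumes (m3), by sub-basin
pix4d_v = [61728, 185, 4294, 51103, 5690]

overall = 100 * sum(abs(l - u) for l, u in zip(lidar_v, pix4d_v)) / sum(lidar_v)
print(f"{overall:.1f}%")  # ~42.5, matching the Pix4D volume row in Table 5
```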
Table 6. Bray-Curtis similarity index of the DEM flood estimations with respect to the LiDAR reference.
Scores by sub-basin (Volume 1: Pix4D, PhotoScan; Area 1: Pix4D, PhotoScan):
704 (urban): 0.09, 0.14; 0.16, 0.12
705 (urban): 0.85, 0.91; 0.79, 0.85
703 (outlet of 603): 0.47, 0.27; 0.51, 0.19
603 (pond + urban): 0.34, 0.06; 0.69, 0.07
506 (inlet of 603): 0.46, 0.54; 0.33, 0.44
Overall average score: 0.44, 0.38; 0.50, 0.33
1 A score of 0 indicates perfect similarity, whereas 1 indicates complete dissimilarity.
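The scores in Table 6 are Bray-Curtis dissimilarities between each UAV-derived flood progression curve and the LiDAR reference [69]. A brief sketch of the computation (the curves shown are illustrative, not the study's data):

```python
import numpy as np
from scipy.spatial.distance import braycurtis

# Bray-Curtis dissimilarity between two flood-progression curves
# (e.g. volume vs. water level): 0 = identical, 1 = completely dissimilar.
lidar_curve = np.array([5.0, 18.0, 41.0, 68.0])  # illustrative values only
uav_curve = np.array([4.0, 15.0, 36.0, 62.0])

bc = np.abs(lidar_curve - uav_curve).sum() / (lidar_curve + uav_curve).sum()
assert np.isclose(bc, braycurtis(lidar_curve, uav_curve))  # same definition
print(round(bc, 3))
```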
