**Structure from Motion of Multi-Angle RPAS Imagery Complements Larger-Scale Airborne Lidar Data for Cost-Effective Snow Monitoring in Mountain Forests**

#### **Patrick D. Broxton 1,\* and Willem J. D. van Leeuwen 1,2**


Received: 25 May 2020; Accepted: 16 July 2020; Published: 18 July 2020

**Abstract:** Snowmelt from mountain forests is critically important for water resources and hydropower generation. More than 75% of surface water supply originates as snowmelt in mountainous regions, such as the western U.S. Remote sensing has the potential to measure snowpack in these areas accurately. In this research, we combine light detection and ranging (lidar) from crewed aircraft (currently, the most reliable way of measuring snow depth in mountain forests) and structure from motion (SfM) from remotely piloted aircraft systems (RPAS) for cost-effective multi-temporal monitoring of snowpack in mountain forests. In sparsely forested areas, both technologies give similar snow depth maps, with comparable agreement with ground-based snow depth observations (RMSE ~10 cm). In densely forested areas, airborne lidar is better able to represent snow depth than RPAS-SfM (RMSE ~10 cm vs. ~10–20 cm). In addition, we find that the relationship between RPAS-SfM and previous lidar snow depth data can be used to estimate snow depth conditions outside of relatively small RPAS-SfM monitoring plots, with RMSEs between observed and estimated snow depths on the order of 10–15 cm for the larger lidar coverages. This suggests that, when a single airborne lidar snow survey exists, RPAS-SfM may provide useful multi-temporal snow monitoring that can estimate basin-scale snowpack at a much lower cost than multiple airborne lidar surveys. Doing so requires a pre-existing mid-winter or peak-snowpack airborne lidar snow survey, and subsequent well-designed paired SfM and field snow surveys that accurately capture substantial snow depth variability.

**Keywords:** snow; remotely piloted aircraft systems; structure from motion; lidar; forests

#### **1. Introduction**

Snowpack in mountain forests is a major source of water for reservoirs that provide water and hydropower for many urban and agricultural communities [1–3]. Mountain snowpacks are affected by many climatic, topographic and ecological variables, and are sensitive to forest disturbance such as thinning, prescribed fires, wildfire, and tree die-off [4–13]. It is important to monitor how snowpacks in these areas respond to changing environmental conditions in order to understand and forecast available water resources for both natural and human consumption.

Characterization of snowpack in these mountain forests is challenging because of the large amount of small-scale spatial variability of the snowpack, due to topographic heterogeneity and variable forest structure [14–18]. This extreme heterogeneity makes it difficult to monitor snowpack with point snow measurements [19]. Remote sensing of snowpack is better able to characterize this heterogeneity, though it suffers in forested environments because trees can interfere with the remote sensing of snowpack below the canopy [20]. Furthermore, remote sensing of snowpack is hindered by clouds and can be affected by sub-grid landscape heterogeneity [21,22]. Therefore, monitoring and assessing the snowpack response to environmental changes requires multi-scale and multi-sensor monitoring that takes advantage of field data in combination with multiple observing platforms (e.g., remotely piloted aircraft systems (RPAS), airplanes and satellites) and payloads (e.g., visible, multispectral and lidar sensors) [21,23,24]. These snow monitoring platforms and payloads all have advantages and disadvantages that relate to spatial and spectral resolutions, geolocation, extent, cloud cover, wind, reliability, timeliness, terrain accessibility, resources and training, and government regulations [25,26].

Of the remote sensing techniques for measuring snowpacks in forested environments, lidar from crewed aircraft (hereafter referred to as 'airborne lidar', or simply 'lidar') is one of the most successful because it covers relatively large areas, has very high sampling rates, and has the ability to penetrate the canopy through canopy gaps [27–30]. However, there are many costs associated with these lidar acquisitions, and they can be time-consuming to process [31,32], making it hard to use them for multi-temporal snow monitoring. While some groups, such as NASA's Airborne Snow Observatory [29], have deployed multi-temporal airborne lidar surveys, such acquisitions are costly and are not available for most sites.

Three-dimensional modelling from multi-angle RPAS imagery, such as from quadcopter platforms with gimballed digital cameras, offers an exciting addition to airborne lidar for snow monitoring in heterogeneous forests [33,34]. This imagery, which is collected at multiple points along the RPAS's flight trajectory, is stitched together to create a 3-D representation of the landscape using a process called structure from motion (SfM), which uses points on multiple overlapping images to triangulate and bundle-adjust the locations of image pixels [35]. This process is able to achieve good accuracy for snow depth and forest structure assessments [34,36–39]. In certain respects, SfM from RPAS imagery (hereafter referred to as 'RPAS-SfM' or simply 'SfM') offers advantages over airborne lidar. For example, RPAS have become relatively inexpensive and easy to operate, allowing them to be economically deployed with minimal labor [40]. However, there are still challenges associated with RPAS monitoring of snow depth [37,41]. These include variable field site conditions (which might include a dense canopy that obscures the snowpack), survey design, SfM processing software (which is computationally expensive), and validation data. In addition, RPAS-SfM is generally limited to smaller areas due to physical limitations on RPAS flight extent (such as battery life, which limits flight times), and there are sometimes logistical challenges with RPAS flights, such as obtaining permission before flights.

Due to their strengths for monitoring snowpack, both airborne lidar and RPAS-SfM technologies have been extensively used for snowpack monitoring (e.g., References [20,27,29,30,34,36,37,40,42–50]). However, these studies do not include a strategy for combining smaller-area RPAS-SfM measurements with larger-area airborne lidar measurements for multi-temporal snowpack monitoring, especially at basin scales. This combination is natural because the strengths of each technology complement the other. For example, airborne lidar provides consistent accuracy and larger spatial extents, which are more appropriate for hydrological applications, such as water supply monitoring and Land Surface Model evaluation. However, the cost (in terms of both money and processing time) can be quite high. Meanwhile, RPAS-SfM allows for increased temporal frequency and on-demand data acquisitions with rapid processing times at a much lower monetary cost over limited spatial domains.

In this research, we show how RPAS-SfM and airborne lidar can be used in a complementary fashion to attain regular snowpack monitoring data that can lead to seasonal, large-area snow depth and snow water equivalent (SWE) fields in complex forested environments. We investigate (1) how well SfM is able to characterize snowpack for a variety of forest cover and topographic conditions, using a rich dataset of field-based and airborne lidar measurements; (2) how multi-temporal SfM data can be combined with existing airborne lidar snow data to achieve multi-temporal estimates of snow amount and distribution outside of spatially-limited SfM plots; and (3) what the accuracy of these estimates is at different times of the year and under different forest and terrain conditions. We use extensive field data collected from 11 surveys from 2017–2020 at four study plots along Arizona's Mogollon Rim, representing the diverse forest and snowpack conditions in the region. All surveys have multi-angle aerial photography acquisitions from a RPAS paired with precisely geolocated ground-based snow surveys, while seven of these surveys are coincident with acquisitions of high point density airborne lidar. The combination of the RPAS surveys with the airborne lidar and ground surveys provides an unprecedented opportunity not only to evaluate the performance of the SfM remote sensing under a variety of forest and snowpack conditions, but also to explore opportunities to combine all of these measurements for cost-effective, high-quality multi-temporal snowpack monitoring in mountain forests.

#### **2. Materials and Methods**

#### *2.1. Field Data*

Data in this study come from four intensively studied snow-research plots near the Mogollon Rim, a mid- to high-elevation (~2000–3000 m) forested region in central Arizona (AZ; Figure 1). Two "montane" plots, located ~50 km east of Show Low, AZ, are in close proximity to each other at a high-elevation (~2800 m) field site. These plots have low to moderate relief and contain a mixture of burned (from a 2011 wildfire [51]) and live mixed conifer forests and montane grasslands. One plot (the "montane meadow" plot) contains large snow variability due to heterogeneous forest conditions (i.e., tree stands separated by meadows). In contrast, the other (the "montane valley" plot) contains large snow variability due mostly to heterogeneous topography (i.e., north- vs. south-facing slopes). A "dense forest" plot, located ~20 km north of Payson, AZ, has gentle slopes and is densely forested, primarily with ponderosa pine (*Pinus ponderosa*) trees. Finally, ~10 km to the north of the dense forest plot is another plot (the "thinning comparison" plot), located along the transition between recently thinned and unthinned ponderosa pine forest patches near Clints Well, AZ. The dense forest and thinning comparison plots have shallower snowpack than the montane plots. These four plots span a range of forest densities, tree heights, and topographic conditions (Table 1), allowing us to evaluate the performance of snow measurement with SfM and airborne lidar under a variety of forest conditions.


**Table 1.** Topographic, canopy cover/height and sampling information for each SfM survey.

<sup>1</sup> Area refers to the size of the SfM-generated snow maps. <sup>2</sup> Slope, canopy cover, and canopy height are computed from 2019 airborne lidar data [52]. <sup>3</sup> Sample locations refers to the number of individual locations where snow depth is sampled.

*Remote Sens.* **2020**, *12*, 2311 4 of 24

**Figure 1.** Locations of the four study plots (**a**—overview, **b**—montane meadow and montane valley plots, **c**—dense forest plot, **d**—thinning comparison plot). Canopy cover data are derived from 2019 airborne lidar data [52].

At these field sites, we collected rich snowpack datasets, including a variety of airborne and ground-measured snowpack data that are ideal for creating and evaluating high-quality SfM models. Snow surveys for these plots consisted of multi-angle RPAS imagery, airborne lidar, and ground measurements of snow depth and SWE. In total, there were 11 snow surveys (Table 1), with the most surveys occurring at the montane plots (as this area is important for snowpack monitoring for local water supply managers) and fewer at the dense forest and thinning comparison plots. All surveys included RPAS flights to acquire aerial imagery (using a DJI Phantom 3 Pro drone with a 12 megapixel camera for the 2017 surveys, and a DJI Phantom 4 Pro drone with a 20 megapixel camera for the 2018–2020 surveys) to generate snow-on 3-D models of the study plots using SfM (see Section 3). All RPAS flights occurred on clear, sunny days between 10:00 AM and 2:00 PM to minimize the length of shadows. For the flights in 2017, the RPAS was controlled manually, collecting a combination of nadir and oblique (~30% nadir) imagery along back-and-forth flight lines spanning each domain. For the flights in 2018–2020, flight planning software (Altizure®) was used to collect the imagery in a regular pattern of back-and-forth flight lines with one set of nadir imagery, as well as two sets of oblique images (with 75% overlap for all imagery). All flights occurred at ~75 m above ground level with horizontal velocities of ~5–7 m/s, resulting in ~4 cm and ~2.1 cm spatial resolution on the ground for the Phantom 3 and Phantom 4 systems, respectively. The cameras used automatic exposure settings.
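As a rough check on the reported ground resolutions, the ground sampling distance (GSD) of a nadir image can be estimated from the flying height and camera geometry. The sketch below assumes nominal sensor specifications for the two cameras (sensor width, focal length, and image width are not given in the text and are taken from published spec sheets, so treat the numbers as approximate):

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Approximate ground footprint of one pixel (m) for a nadir image."""
    swath_m = sensor_width_mm / focal_length_mm * altitude_m  # ground width covered by the frame
    return swath_m / image_width_px

# Assumed nominal specs (not from the text): Phantom 4 Pro 1" sensor,
# Phantom 3 Pro 1/2.3" sensor; both flown at ~75 m above ground level.
gsd_p4 = ground_sampling_distance(13.2, 8.8, 75.0, 5472)   # ~0.021 m
gsd_p3 = ground_sampling_distance(6.17, 3.61, 75.0, 4000)  # ~0.032 m
```

The Phantom 4 value (~2.1 cm) matches the resolution reported in the text; the assumed Phantom 3 specs give ~3 cm, the same order as the reported ~4 cm.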

These surveys also included ground measurements of snow depth and snow density. Most of these measurements were collected by more than 25 employees and volunteers from five public and private partner entities (see Acknowledgements) along carefully pre-marked snow survey transects whose endpoints were precisely geolocated using a differential GPS, enabling the precise geolocation of each snow measurement so that it could be related to the airborne data. Following the sampling methodology described in Reference [53], five snow depth samples were taken every 5 m along the transects in a star pattern to ensure spatial representativeness, and snow density was sampled every 10–20 m along the transects. There were also additional measurements along transects (also positioned with a differential GPS) where snow depth is monitored using remote cameras for temporal snowpack monitoring (this study uses only the snow depth data, as well as manual measurements of snow density, along these transects for the survey dates in Table 1). Not all dates had identical ground surveys for each study plot, depending on logistical considerations (e.g., available workers). Overall, individual surveys included ground measurements from 22–108 individual locations (Table 1); this translates to >3000 measurements in total, as there were five snow depth measurements at each manual sample location. Further details about the snow survey methodology can be found in Reference [54].

#### *2.2. Generation of Lidar Snow Depth, Density, and SWE Maps*

Seven of the surveys included airborne lidar data, and for these we generated state-of-the-art airborne lidar-based maps of snow depth, snow density, and SWE. These maps were created for the two ~100 km<sup>2</sup> lidar footprint areas (each of which has three separate lidar acquisitions) that encompass the study plots. The two montane plots are within a "high-elevation" lidar footprint, and the dense forest and thinning comparison plots are within a "mid-elevation" lidar footprint. These maps were created using high point density (~10–15 points/m<sup>2</sup>) snow-on airborne lidar data (which are differenced from similar snow-off lidar data) and are bias-corrected with the field-measured snow depths. All lidar data were flown by Quantum Spatial Inc. [52,55–58].
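Conceptually, the lidar snow depth maps come from differencing gridded snow-on and snow-off ground surfaces, followed by a bias correction against field measurements. Below is a minimal sketch of that differencing step (the array names and the constant bias correction are illustrative, not the authors' actual processing chain):

```python
import numpy as np

def lidar_snow_depth(snow_on_surface, snow_off_surface, bias=0.0):
    """Snow depth (m) as the difference of co-registered snow-on and snow-off
    ground-surface grids, minus an optional bias estimated from field data."""
    depth = snow_on_surface - snow_off_surface - bias
    return np.clip(depth, 0.0, None)  # snow depth cannot be negative

# toy 2x2 grids: 0.5 m of snow everywhere, no bias correction
snow_on = np.array([[102.5, 101.5], [100.5, 103.5]])
depth = lidar_snow_depth(snow_on, snow_on - 0.5)
```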

The snow depth maps were then combined with snow density maps generated using artificial neural network (ANN) machine learning of the field-measured snow densities, using various lidar-derived physiographic attributes as predictors. The methodology for creating these maps (and their validation) is detailed in Reference [54], but is briefly summarized here. We use an ensemble of ANNs with a relatively simple network structure (one hidden layer with ten neurons) trained with Levenberg-Marquardt (L-M) backpropagation (Marquardt, 1963) to model the field snow density measurements using lidar-derived metrics as predictor variables. The predictor variables include physiographic variables (elevation, slope, northness (cos(aspect) × sin(slope)), canopy height, and canopy closure), other GIS-derived quantities from the lidar data (sky-view factor and a below-canopy solar forcing index, i.e., the ratio of incident solar radiation reaching the ground surface over a given period to that hitting a flat, unobstructed surface), and lidar snow depth (from 1 February 2017). These snow density maps are multiplied by the lidar snow depth maps to generate maps of SWE.
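A sketch of this density-mapping step: an ensemble of small single-hidden-layer networks regresses field-measured density on lidar-derived predictors, and the resulting density grid is multiplied by the depth grid to give SWE. This uses scikit-learn's MLPRegressor as a stand-in (the authors used Matlab ANNs with Levenberg-Marquardt training, which scikit-learn does not provide); predictor names follow the text, and the data are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# synthetic training table: one row per field density sample; columns stand in
# for elevation, slope, northness, canopy height, canopy closure
X = rng.random((200, 5))
y = 0.25 + 0.2 * X[:, 0] + 0.05 * rng.standard_normal(200)  # density (g/cm^3)

# ensemble of small ANNs (one hidden layer with ten neurons, as in the text)
ensemble = [
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=i).fit(X, y)
    for i in range(5)
]

def predict_density(grid_predictors):
    """Average the ensemble members' predictions."""
    return np.mean([m.predict(grid_predictors) for m in ensemble], axis=0)

density = predict_density(X[:10])   # g/cm^3 at 10 example cells
depth_cm = rng.random(10) * 100     # snow depth (cm) at the same cells
swe_cm = depth_cm * density         # SWE (cm of water) = depth x density
```

Averaging several independently initialized networks, rather than trusting a single fit, is what makes the small network structure robust to initialization noise.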

#### *2.3. Generation of SfM Snow Depth Maps*

Snow-on point clouds were generated from the RPAS imagery using Agisoft Metashape software. After the point clouds were created, the ground points were isolated using a combination of Metashape's built-in ground filtering algorithm (for a first-cut separation of the point cloud) and a Cloth Simulation Filter algorithm [59] implemented in R (to remove remaining debris from the ground point cloud).
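The goal of this filtering is to keep only the returns that lie on the snow/ground surface. The sketch below is a deliberately simplified grid-minimum classifier, a stand-in for the Metashape ground filter plus Cloth Simulation Filter actually used (the cell size and height tolerance are illustrative):

```python
import numpy as np

def grid_minimum_ground_filter(points, cell=1.0, tol=0.15):
    """Classify points within `tol` m of the lowest point in each
    `cell` x `cell` m column as ground/snow surface; points: (N, 3) xyz array."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    lowest = {}
    for k, key in enumerate(map(tuple, ij)):
        z = points[k, 2]
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    zmin = np.array([lowest[tuple(c)] for c in ij])
    return points[:, 2] - zmin <= tol

# toy cloud: four surface points near z=0 and one canopy point at z=5
pts = np.array([[0.1, 0.1, 0.0], [0.4, 0.2, 0.05], [0.2, 0.6, 0.1],
                [0.7, 0.7, 0.02], [0.5, 0.5, 5.0]])
mask = grid_minimum_ground_filter(pts)  # last point flagged as non-ground
```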

Next, we used an automated post-processing workflow to accurately georeference the point clouds using available snow-off point cloud data. In this study, we use snow-off lidar point cloud data (from summer 2014) for this purpose (though we also tried, with similar results, applying the workflow using snow-off SfM data that we collected—see Section 3.1 below). This workflow consists of two general steps. First, the snow-on SfM point clouds were co-registered with the snow-off point cloud canopy layer by removing any vertical offsets between the snow-on and snow-off point clouds and then using an Iterative Closest Point algorithm implemented in CloudCompare software to match the canopy layers. This successfully adjusted the snow-on point clouds to match the canopy elements in the snow-off point clouds, even when there was sometimes substantial vegetation change between the snow-on and snow-off data (Figure 2). Next, warping and tilting of the point clouds were removed using low-order polynomial filters (a 1st order polynomial correction to correct tilt, followed by a 2nd order polynomial correction to correct warping) by comparing them to the lidar ground surface plus a first guess of snow depth based on the ground survey data. This first guess was generated using the same ANN methodology as the creation of snow density maps from field snow density measurements in Reference [54] and described in Section 2.2, except that field-measured snow depths, rather than field-measured snow densities, are used as the predicted variable.

**Figure 2.** Example showing the snow-on SfM point cloud (white, green, and brown dots represent snow, evergreen canopy, and woody material, respectively) in relation to the lidar point cloud (red dots) for an area in the center of the montane meadow plot. Note that in some areas there is a difference between the lidar (from summer 2014) and SfM (from winter 2017) point clouds, due to post-wildfire effects (lost branches, fallen snags and trees), but for the snags and trees that did not change during this period, the SfM and lidar point clouds are very close to one another.

This processing workflow was automated using CloudCompare's command line mode (for the ICP algorithm), as well as the US Forest Service's FUSION/LDV [60] software and a Python script to model and correct warping and tilting in the snow-on SfM point cloud. The ANN machine learning methodology is implemented in Matlab® software, as described in Reference [54].
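The tilt/warp removal step amounts to fitting a low-order polynomial surface to the vertical residuals between the SfM cloud and the reference (lidar ground plus first-guess snow depth) and subtracting it. Below is a minimal sketch of the 1st-order (tilt) case; the 2nd-order warp correction adds quadratic terms to the design matrix in the same way:

```python
import numpy as np

def remove_tilt(x, y, residual_z):
    """Fit z = a + b*x + c*y to the vertical residuals by least squares and
    return the residuals with that plane removed (1st-order correction)."""
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, residual_z, rcond=None)
    return residual_z - A @ coeffs

# toy example: a pure tilt of 0.01 m per m in x plus a constant offset
x = np.linspace(0, 100, 50)
y = np.zeros(50)
corrected = remove_tilt(x, y, 0.05 + 0.01 * x)  # tilt fully removed
```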

#### *2.4. Using SfM Snowpack Data to Supplement Lidar Snowpack Data*

We used these SfM snow depth maps to advance our understanding of snow distributions at our research sites in a number of ways. First, following the methodology in Reference [54], we combined them with maps of snow density generated using ANN modeling of snow density measurements to generate maps of SWE for our study plots. This enabled us to characterize differences in snow depth, snow density, and SWE between seasons. This is important because, while the variable directly measured by lidar and SfM is snow depth, SWE is the snowpack variable of most interest in hydrological applications [61]. Although the distribution of SWE is broadly similar to that of snow depth, it is important to consider the effects of variable snow densities, which can vary substantially in both space and time [54,62].

Next, we investigated the relationships between snow depths for the larger mid- and high-elevation lidar footprints on different dates to see how well they could be used to extrapolate snow depths measured for the SfM study plots to the larger lidar footprints. In particular, we compared the relationships between snow depths from the mid-winter (1 February 2017) and late-winter (7 March 2017 and 4 March 2019) lidar snow surveys with those between the mid-winter lidar snow survey and the late-winter SfM data. To ensure that only the best quality SfM data were used to construct these relationships, we considered only SfM data for open areas without an overhead canopy. Of particular interest were (1) the strength of the relationships between the lidar data on multiple dates (which indicates how well they can be used to predict snow depths for different times), and (2) how well the SfM data over the small study plots could capture those relationships. Even though the late-winter surveys occurred at the same time of year, the 7 March 2017 survey reflected more evolved snowpack conditions: for the mid-elevation lidar coverage, it reflected mid-ablation to nearly snow-free conditions (depending on elevation), and for the high-elevation lidar coverage, peak-SWE to mid-ablation conditions, depending on elevation. In contrast, the 4 March 2019 survey reflected conditions just prior to peak SWE for the high-elevation lidar coverage and mid-ablation conditions for the mid-elevation lidar coverage. The 1 February 2017 survey reflected mid-winter snowpack conditions in both areas.

Finally, we tested how well snow depths could be estimated from the SfM data for areas outside the SfM plots (but within the lidar coverages) on these different dates using the relationships between the late-winter SfM data and the mid-winter (1 February 2017) lidar data. These relationships were modeled using a third-order polynomial fit (chosen based on the data analysis described above), and the regression parameters were applied to the 1 February 2017 lidar data to generate the predictions. These simulated maps were then compared to the actual lidar observations on 7 March 2017 and 4 March 2019.
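The extrapolation step can be sketched as fitting a third-order polynomial between coincident mid-winter lidar depths and late-winter SfM depths inside the plot, then applying the fitted coefficients to the full mid-winter lidar grid. The data below are synthetic (in the paper, the fit is built from open-canopy SfM cells only):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic plot data: late-winter depth as a smooth function of mid-winter depth
lidar_feb_plot = rng.random(500) * 150  # mid-winter depths in the SfM plot (cm)
sfm_march_plot = 0.7 * lidar_feb_plot - 1e-3 * lidar_feb_plot**2 \
    + rng.normal(0, 2, 500)             # late-winter SfM depths (cm)

# fit the third-order relationship inside the plot...
coeffs = np.polyfit(lidar_feb_plot, sfm_march_plot, deg=3)

# ...and apply it to the much larger mid-winter lidar footprint
lidar_feb_footprint = rng.random(10_000) * 150
march_estimate = np.polyval(coeffs, lidar_feb_footprint)
```

The implicit assumption, tested in the paper, is that the depth-to-depth relationship learned inside the small plot also holds across the larger footprint.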

#### **3. Results**

#### *3.1. Comparison Between SfM, Field-Based, and Lidar Snow Depths*

Overall, there is good agreement between the lidar and the SfM snow depth maps in the montane plots, which generally have sparse canopies. Figure 3a–i shows that most major snow depth features are consistently represented in both maps, such as the areas with deep snow (which usually correspond to north-facing aspects) and the extremely shallow snow on the south-facing sides of large tree stands. Overall, there are high squared correlation coefficient (R<sup>2</sup>) values (0.78 to 0.89), relatively low Root Mean Squared Error (RMSE) values (8.5 to 9.4 cm), and small average differences (SfM average depths are 1.0 to 3.5 cm shallower) between the SfM and lidar data (Table 2). The coefficient of variation (CV) differences between the maps are also relatively low (0.01–0.04). At the same time, the agreement between the SfM and field-measured snow depths is comparable to that between the lidar and field-measured snow depths for the montane plots (Table 3, Figure S1). Although R<sup>2</sup> values between the SfM data and ground measurements (0.60–0.90) are a little lower than those for the lidar data (0.68–0.94), SfM RMSEs (8.6–13.4 cm) are similar to lidar RMSEs (8.5–12.4 cm). Both the SfM and lidar data are relatively unbiased with respect to ground observations for the montane plots.
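The agreement statistics used throughout this section (R², RMSE, mean difference, and CV) can be computed per map pair as follows (a straightforward sketch; masking to the outlined plot areas in Figure 3 is omitted):

```python
import numpy as np

def agreement_stats(sfm, lidar):
    """R^2, RMSE, mean difference (SfM minus lidar), and each map's CV."""
    r = np.corrcoef(sfm.ravel(), lidar.ravel())[0, 1]
    cv = lambda a: np.std(a) / np.mean(a)  # coefficient of variation
    return {
        "r2": r**2,
        "rmse": np.sqrt(np.mean((sfm - lidar) ** 2)),
        "bias": np.mean(sfm - lidar),
        "cv_sfm": cv(sfm),
        "cv_lidar": cv(lidar),
    }

# toy maps: SfM equals lidar plus small random noise
rng = np.random.default_rng(2)
lidar = rng.random((50, 50)) * 100 + 20
stats = agreement_stats(lidar + rng.normal(0, 5, lidar.shape), lidar)
```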

There is generally a lower correspondence between the lidar and SfM snow depth maps for the dense forest plot. Major features can clearly be seen in both sets of maps (e.g., the road on the northern end of the plot, as well as the snowbanks on either side of the road), though other details in the forest are less consistent (Figure 3j–r). For example, the SfM maps show more pockets of shallow snow (especially on 1 February 2017). Compared to the montane plots, the dense forest plot has lower R<sup>2</sup> values (0.23 to 0.57), higher RMSEs (12.4 to 24.0 cm), and higher CV differences (0.09 to 0.26; with the SfM data having higher CVs than the lidar data; Table 2). At the same time, though, average depth differences are only a little larger than those observed in the montane plots (3.8 to 6.3 cm). Compared to the ground observations, the lidar generally performs better than the SfM data, though the performance of the SfM data is also highly variable for the dense forest plot (Table 3, Figure S1). RMSEs between SfM and field-measured snow depths range from 8.8 to 17.2 cm (vs. 8.3 to 10.3 cm for the lidar data), and R<sup>2</sup> values are 0.26 to 0.29 (vs. 0.33 to 0.62 for the lidar data). However, the SfM data has lower biases relative to the ground observations than the lidar data at the dense forest plot (−0.4 to 2.1 cm vs. 0.8 to 5.8 cm).

**Table 2.** Agreement statistics between lidar and SfM snow depth data for the paired airborne lidar–SfM surveys. Statistics are for the outlined areas for each plot in Figure 3.

| Plot (Survey Date) | Lidar Mean Depth (cm) | SfM Mean Depth (cm) | Lidar CV | SfM CV | R<sup>2</sup> | RMSE (cm) |
|---|---|---|---|---|---|---|
| Montane Valley (4 March 2019) | 63.9 | 62.9 | 0.40 | 0.39 | 0.89 | 8.5 |
| Dense Forest (1 February 2017) | 54.3 | 48.0 | 0.29 | 0.55 | 0.23 | 24.0 |
| Dense Forest (7 March 2017) | 30.7 | 26.9 | 0.61 | 0.70 | 0.34 | 17.6 |
| Dense Forest (4 March 2019) | 34.9 | 29.6 | 0.45 | 0.55 | 0.57 | 12.4 |
| Thinning Comparison (4 March 2019) | 9.5 | 10.7 | 0.90 | 0.80 | 0.41 | 7.4 |

**Figure 3.** Maps showing lidar snow depths, SfM snow depths, and their differences (SfM-lidar) for the paired airborne lidar–RPAS SfM surveys for the montane meadow plot on 1 February 2017 (**a**–**c**) and 4 March 2019 (**d**–**f**); the montane valley plot on 4 March 2019 (**g**–**i**); the dense forest plot on 1 February 2017 (**j**–**l**), 7 March 2017 (**m**–**o**), and 4 March 2019 (**p**–**r**); and the thinning comparison plot on 4 March 2019 (**s**–**u**). A canopy hillshade effect is added to show the locations of trees.

**Table 3.** Agreement statistics between lidar/SfM and field-measured snow depth data for the paired airborne lidar–SfM surveys.

For the thinning comparison plot (Figure 3s–u), there is a higher correspondence between the lidar and SfM maps than for the dense forest plot. The thinning comparison plot also has very little snow (~10 cm on average) during the March 2019 survey (the only snow survey performed at this site), yet both technologies show similar snow patterns. Overall, the RMSE (7.4 cm) between the lidar and SfM data is lower than for the other surveys; R<sup>2</sup> (0.41) is higher than for some of the dense forest surveys but lower than for the montane plot surveys (with the opposite true for CV differences); and the average snow depth difference (1.2 cm) is small compared to the other surveys. Compared to the ground survey data, the performance of the lidar and SfM data is similar (with R<sup>2</sup> values of 0.37 and 0.46 and RMSEs of 7.1–7.9 cm), again with a relatively small bias (1.9 cm). Overall, the thinning comparison plot has lower vegetation density than the dense forest plot, but higher vegetation density than the montane plots (Table 1).

The accuracy of these SfM data depends, in part, on the quality of the first guess snow depth maps (shown in Figure S2 of the Supplemental Materials). In general, these first guess maps are comparable in magnitude to the SfM and lidar maps shown in Figure 3, but they typically do not reflect as much variability as the SfM and lidar maps do. The overall agreement between these first guess maps and the lidar maps is usually lower than that between the SfM and lidar maps (with the exception of the 1 February 2017 dense forest plot, R<sup>2</sup> values range from 0.09–0.33 lower, and RMSE values range from 0.4–3.6 cm higher). Note that for the forest thinning comparison plot, the snow depths were so shallow that a spatially variable first guess map was not needed; the first guess snow depth map showed virtually no spatial variability because the snow depth signal was small compared to the uncertainty of these maps (Figure S2g). Nevertheless, it was included here to maintain methodological consistency with all other surveys.

Finally, note that the same methodology used here can be used to generate SfM snow depth maps based on snow-off SfM point clouds instead of snow-off lidar point clouds (Figure S3). The agreement between the SfM and lidar snow depth maps is a little lower for these maps (compare Table 2 and Table S1). This discrepancy may be related to inconsistent ground filtering algorithms used for the lidar data (which was performed by QSI) and the SfM data (see Section 2.3).
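In either configuration, the underlying snow depth calculation is a per-pixel difference between a snow-on surface and a snow-off bare-earth surface. A minimal sketch, assuming co-registered grids in the same units; the function name and the outlier threshold are illustrative assumptions, not values from the study:

```python
import numpy as np

def snow_depth_map(snow_on, snow_off, max_depth=5.0):
    """Snow depth as the snow-on surface (SfM or lidar) minus the snow-off
    bare-earth surface, on co-registered grids in the same units.
    Negative differences and values above a plausibility threshold
    (here an assumed 5 m) are masked as no-data (NaN)."""
    d = np.asarray(snow_on, float) - np.asarray(snow_off, float)
    return np.where((d < 0.0) | (d > max_depth), np.nan, d)
```

Masking rather than clipping keeps ground-filtering artifacts (e.g., under dense canopy) out of the agreement statistics instead of biasing them toward zero.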

#### *3.2. SWE Monitoring Using SfM*

Due to the hydrological importance of quantifying the water content of the snowpack, the snow density and SWE maps prepared for each plot using the methodology of Broxton et al. [54] (see Section 2.3) were useful for multi-temporal monitoring of the snowpack. In this study, they were especially useful for the montane plots because this area is very important for predicting local water supplies. For this area, there were snow surveys for four consecutive water years near the time of peak SWE (in early March). Figure 4 shows spatial differences between snow depth, snow density, and SWE for this area from 2017–2020 (note that in March 2017, there was airborne lidar, but no SfM data, in the area).

While the 2019 maps depict snowpack conditions just prior to peak SWE, the 2017 and 2020 maps reflect snowpack conditions that are further along in the melt season (note the relatively large areas with no snow on south-facing slopes), and the 2018 maps depict historically low snow conditions (as 2018 was a very dry and warm year). Despite the differences in snowpack conditions between 2018 and the other years, the 2018 data show relatively deeper snowpack in the same areas as the other sets of maps (e.g., on north-facing slopes in the montane valley plot and on the shaded sides of the montane meadow plot). There is a relatively high correlation between the 2017, 2019, and 2020 surveys (R<sup>2</sup> between the 2017 and 2019 survey data are 0.71 and 0.69 for depth and SWE, respectively, and R<sup>2</sup> between the 2017 and 2020 surveys are 0.67 and 0.60 for depth and SWE, respectively). The R<sup>2</sup> between the 2017 and 2018 surveys is lower (R<sup>2</sup> = 0.36 for snow depth and 0.39 for SWE). Overall, the 2019 survey had the most snow, but snow variability (as measured by the CV of snow depth and SWE) was higher during the 2017 survey (when there was more advanced snowmelt) and the 2018 survey (when the snowpack was shallow; Table 4). Snow density was also lowest in 2018 (due to the shallow snowpack) and highest in 2020 (as the survey occurred after a long period of warm, dry weather).
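The per-pixel SWE values follow directly from the depth and bulk density maps: SWE in cm of water equals depth in cm times density in g/cm<sup>3</sup> (since water has a density of 1 g/cm<sup>3</sup>). A minimal sketch of that conversion; the density maps themselves come from the Broxton et al. [54] methodology, which is not reproduced here, and the function name is illustrative:

```python
import numpy as np

def swe_map(depth_cm, density_g_cm3):
    """SWE (cm of water) from snow depth (cm) and bulk snow density
    (g/cm^3): SWE = depth * density / rho_water, with rho_water = 1 g/cm^3."""
    return np.asarray(depth_cm, float) * np.asarray(density_g_cm3, float)
```

For example, a 56.7 cm deep snowpack at 0.36 g/cm<sup>3</sup> holds roughly 20.4 cm of water, consistent with the 4 March 2020 averages in Table 4.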

**Figure 4.** Maps of snow depth, snow density, and SWE for the montane plots near maximum snow accumulation on 6 March 2017 (**a**–**c**), 5 March 2018 (**d**–**f**), 4 March 2019 (**g**–**i**), and 4 March 2020 (**j**–**l**). A canopy hillshade effect is added to show the locations of trees, and contours (contour interval = 2 m) are added to help interpret the topography of the area.

**Table 4.** Averages and coefficient of variation (CV) for the snow depth, snow density, and SWE maps in Figure 4 (for areas that have data for all years). For 7 March 2017, statistics are derived from the airborne lidar survey (as no SfM was flown for this area on this date), and for the other dates, they are derived from the SfM surveys.


| Survey Date | Snow Depth Mean (cm) | Snow Depth CV | Snow Density Mean (g/cm<sup>3</sup>) | Snow Density CV | SWE Mean (cm) | SWE CV |
|---|---|---|---|---|---|---|
| 4 March 2020 | 56.7 | 0.40 | 0.36 | 0.04 | 20.2 | 0.38 |

#### *3.3. Using Airborne Lidar to Extend the SfM Data*

Figure 4 shows that while there are differences between the snowpack distributions for different years (e.g., between the peak-SWE conditions on 4 March 2019 vs. the post-peak SWE conditions on 6 March 2017), there are also similarities (e.g., areas with deeper snow in one scene tend to have deeper snow in other scenes). To further understand the consistency between the snowpack at different times across much larger (~100 km<sup>2</sup>) lidar coverages, we plotted the relationship between snow depth for three sets of lidar coverages (at different times in the snow season) for the mid- and high-elevation lidar domains (Figure 5). The mid-winter (1 February 2017) lidar data are used as the abscissa, and the late-winter (7 March 2017 and 4 March 2019) lidar data are used as the ordinates in these plots. Figure 5 (red dots) shows that the relationships between the snow depth data change between different dates. For example, while the relationship between the 1 February 2017 and 4 March 2019 lidar data for the high-elevation lidar coverage is linear, the relationship between the 1 February 2017 and 7 March 2017 lidar data for the high-elevation lidar coverage is nonlinear. The former represents a comparison between mid-winter conditions and conditions just prior to peak SWE, while the latter represents a comparison between mid-winter and primarily post-peak SWE conditions. Despite this changing shape, the strength of these relationships is consistent in both cases (with R<sup>2</sup> values of ~0.9). For the mid-elevation lidar coverage, these relationships (which both represent a comparison of mid-winter conditions and mostly mid-ablation conditions) are nonlinear for both pairs of data, and are somewhat weaker, but still consistent (R<sup>2</sup> = ~0.77 in both cases). This suggests that there is a high degree of intra-seasonal and inter-annual relatability between snow depth patterns for our study sites at different times. Note that these relationships are similar if all lidar pixels are considered (thick red dashed lines in Figure 5) vs. if only open pixels (where SfM performs well) are considered (thin red dashed lines in Figure 5).

*Remote Sens.* **2020**, *12*, 2311 12 of 24


**Figure 5.** Relationships between 1 February 2017 lidar snow depth and 7 March 2017 and 4 March 2019 lidar (red dots, for the entire lidar coverages) and SfM (blue dots, for the SfM survey plots, based on open pixels) snow depth data for the mid-elevation (**a**,**c**) and high-elevation (**b**,**d**) lidar coverages. The dashed lines show the third-order polynomial regressions for each data series (for the lidar regression, additional lines have been added to show the similarity of the comparisons when using only open pixels (thin red dashed lines) and all pixels (thick red dashed lines)). The open area SfM regressions (blue lines) are used to simulate the 4 March 2019 snow depths for the airborne lidar coverages in Figures 6 and 7.

Next, we tested how well the SfM data within the small study plots can capture these relationships. Also plotted in Figure 5 are the relationships between the 1 February 2017 lidar snow depth and the 7 March 2017 and 4 March 2019 SfM data for the dense vegetation plot (for the mid-elevation panels) and for the montane plots (for the high-elevation panels), considering only open pixels (where the SfM data perform well). Note that there was no SfM survey for the montane plots on 7 March 2017. The data follow similar trends to those found for the larger lidar coverages (just a little flatter; Figure 5), though with lower R<sup>2</sup> values. This may be due to the lower quality of the SfM data in some environments (e.g., the dense forest plot), as well as the smaller dynamic range represented in these small plots compared with the overall lidar coverages (note that the dense forest plot, in particular, is flat with fairly uniform forest cover across most of the plot).




**Figure 6.** Observed lidar-based snow depths for 4 March 2019, simulated data (using the regression between the 4 March 2019 SfM data and 1 February 2017 lidar data shown in Figure 5c), and the difference (simulated-observed) for the mid-elevation lidar coverage (**a**–**c**) and the area around the dense forest (**d**–**f**) and thinning-comparison (**g**–**i**) plots. Note that the simulated depths are for the area covered by the 1 February 2017 lidar, while the March 2019 lidar has a different footprint. The small dark boxes in (**a**–**c**) show the area covered by the inset maps (**d**–**i**) corresponding to the plots in Figure 1.

The strength of the relationships between the lidar snow depth data on different dates, as well as the ability of the SfM data to capture these trends, suggests that the SfM data can be used to predict snow depth distributions over the larger airborne lidar coverages, given that a lidar snow depth map already exists. To evaluate how well this extrapolation works, we applied the relationships between the 1 February 2017 lidar data and the 7 March 2017 and 4 March 2019 SfM data (shown in Figure 5) to the 1 February 2017 lidar map, as described in Section 2.4. Figures 6 and 7, which show observed and simulated snow depth data for the mid- and high-elevation lidar coverages on 4 March 2019, illustrate that these extrapolations are fairly successful at estimating the observed lidar snow depth data. This can be seen for both the entire ~100 km<sup>2</sup> lidar coverages, as well as for the <1 km<sup>2</sup> study plots. Note, however, that there can be significant differences between the observed and simulated snow depth patterns in areas that have undergone substantial vegetation changes (such as the thinning comparison plot in Figure 6g–i, which underwent mechanical thinning between the 2017 and 2019 snow surveys).
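The extrapolation step (fit a third-order polynomial between the earlier lidar depths and the later plot-scale depths over valid pixels, then apply it to the full earlier lidar map) can be sketched as follows. This is a simplified illustration of the Section 2.4 procedure, with assumed function names, not the study's actual implementation:

```python
import numpy as np

def fit_depth_relationship(depth_ref, depth_new, degree=3):
    """Fit a third-order polynomial relating a reference snow depth map
    (e.g., the 1 February 2017 lidar) to a later survey's depths, using
    only pixels where both are valid (e.g., open pixels for SfM)."""
    depth_ref = np.asarray(depth_ref, float).ravel()
    depth_new = np.asarray(depth_new, float).ravel()
    valid = np.isfinite(depth_ref) & np.isfinite(depth_new)
    return np.polynomial.Polynomial.fit(depth_ref[valid], depth_new[valid], degree)

def simulate_depth(poly, depth_ref):
    """Apply the fitted relationship to the full reference lidar map to
    simulate the later snow depth distribution (clipped at zero depth)."""
    return np.clip(poly(np.asarray(depth_ref, float)), 0.0, None)
```

Because the polynomial is fitted only over the small plot, its reliability outside the plot depends on the plot spanning a depth range representative of the wider lidar coverage, which is why the flat, uniform dense forest plot extrapolates less well.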


**Figure 7.** Observed lidar-based snow depths for 4 March 2019, simulated data (using the regression between 4 March 2019 SfM data and 1 February 2017 lidar data shown in Figure 5d), and the difference (simulated-observed) for the high-elevation lidar coverage (**a**–**c**) and the area around the montane plots (**d**–**f**). Note that the simulated depths are for the area covered by the 1 February 2017 lidar, while the March 2019 lidar has a different footprint. The small dark boxes in (**a**–**c**) show the area covered by the inset maps (**d**–**f**) corresponding to the plots in Figure 1.

In general, spatial statistics from these simulated snow depth maps are also fairly similar to those from the observed lidar data. Table 5 shows statistics related to the distribution of snow depths (mean, CV, skewness) as well as measures of autocorrelation (measured by the fractal dimension of snow depths for distances that are smaller or larger than a well-known scale break that occurs at ~25–30 m). Note that there was no SfM survey for the high-elevation coverage on 7 March 2017, so statistics for this date are given assuming a simulated map that would be obtained if the relationships between the lidar data (red lines in Figure 5) were captured. For the high-elevation coverage, the distribution parameters for the simulated maps are very close to those for the observed maps: the mean snow depth is within 5%, the CVs are ~10% lower, and the skewness is similar. Likewise, both the short- and long-range fractal dimensions are similar, with the latter above 2.9, suggesting spatially random variability for longer correlation distances, and the former ~2.7–2.8, indicating more spatial complexity at shorter correlation distances. The agreement between these statistics for the simulated and observed lidar maps is a little worse for the mid-elevation coverage, as the simulated maps had slightly lower (by ~10%) mean snow depths and lower (by ~25%) CV, but similar skewness. Compared with the high-elevation coverage, there were also slightly larger differences between the short-range fractal dimensions for the observed and simulated maps, suggesting that the simulated maps for the mid-elevation coverage were not as good at capturing small-scale snow depth variability as those for the high-elevation coverage.

Because the fine scale (1 m) spatial variability depicted in these simulated maps can be somewhat different than in the observed maps, we compared the observed and simulated maps at a variety of spatial scales, by averaging the 1 m pixels to larger pixel sizes (Figure 8). As expected, the simulated and observed maps have a closer fit for larger pixel sizes, with the largest increases occurring between 1 and 100 m, and smaller differences occurring after that. For the mid-elevation lidar coverage, the R<sup>2</sup> values increase from ~0.7–0.8 for 1 m pixels to > 0.9 for 100 m pixels, while RMSEs decrease from 10–15 cm for 1 m pixels to ~7–8 cm for 100 m pixels. For the high-elevation lidar coverage, the R<sup>2</sup> values increase from ~0.9 for 1 m pixels to > 0.95 for 100 m pixels, while RMSEs again decrease from 10–12 cm for 1 m pixels to ~6–7 cm for 100 m pixels. Figure 8 also shows the performance of simulated maps that would be obtained if the relationships between the lidar data (red lines in Figure 5) were captured perfectly. For these scenarios, there is some reduction in RMSE (by ~20–30%), suggesting that some of the error between the simulated (using SfM data) and observed snow depth data is the result of this relationship not being fully captured by the SfM data. Similar results can be obtained for predictions of SWE values (compare Figures S5–S8 with Figures 5–8).
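The scale analysis above amounts to block-averaging the 1 m grids to coarser pixels and recomputing the agreement at each scale. A minimal sketch, with assumed function names and an arbitrary example scale set:

```python
import numpy as np

def block_average(arr, k):
    """Aggregate a 2-D snow depth map by averaging k x k blocks of pixels
    (e.g., 1 m pixels -> k m pixels). Edge rows/columns that do not fill
    a complete block are trimmed."""
    h, w = (arr.shape[0] // k) * k, (arr.shape[1] // k) * k
    return arr[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def rmse_by_scale(obs, sim, scales=(1, 5, 25, 100)):
    """RMSE between observed and simulated maps at several pixel scales."""
    return {k: float(np.sqrt(np.mean((block_average(obs, k) - block_average(sim, k)) ** 2)))
            for k in scales}
```

Averaging cancels spatially uncorrelated error within each block, which is why RMSE falls and R<sup>2</sup> rises as the pixel size grows.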

**Table 5.** Mean, coefficient of variation (CV), skewness, and short- and long-range fractal dimensions (D) for observed and simulated snow depth maps for the mid- and high-elevation lidar coverages for 7 March 2017 and 4 March 2019.


<sup>1</sup> The fractal dimensions (D) are estimated from the slope of log-log unidirectional semi-variogram plots as in Reference [63], where D = 3 − b/2, and b is the slope of the log-log semi-variogram over a particular interval. The scale break that separates the short- and long-range segments (30 m for the mid-elevation coverage and 25 m for the high-elevation coverage) corresponds to a well-known scale break (e.g., References [64,65]) that separates the rich spatial complexity of snow depths at smaller scales from more spatially random variability at larger scales.
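The footnote's fractal dimension estimate can be sketched in a simplified 1-D form. The study fits unidirectional semi-variograms of the 2-D maps over fitted lag intervals; the transect version below (with an assumed function name) only illustrates the D = 3 − b/2 relationship:

```python
import numpy as np

def fractal_dimension(profile, lags, spacing=1.0):
    """Fractal dimension D of a snow depth transect from the slope b of
    its log-log semi-variogram, as D = 3 - b/2 (Reference [63]).
    `lags` are the lag distances (in pixels) used for the fit."""
    # empirical semi-variogram: gamma(h) = 0.5 * mean squared lag-h difference
    gamma = [0.5 * np.mean((profile[h:] - profile[:-h]) ** 2) for h in lags]
    b = np.polyfit(np.log(np.array(list(lags)) * spacing), np.log(gamma), 1)[0]
    return 3.0 - b / 2.0
```

A spatially random (white noise) field gives b ≈ 0 and hence D ≈ 3, while smoother, more persistent fields give larger b and smaller D, matching the interpretation of the short- and long-range values in Table 5.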


**Figure 8.** Performance of the simulated snow depth maps for the mid- and high-elevation lidar coverages on 7 March 2017 (**a–b**) and 4 March 2019 (**c–d**) for different pixel scales. The original 1 m maps are aggregated (by averaging) to the larger pixel scales. The solid lines depict the performance of the simulated maps created using the relationships between the 1 February 2017 lidar and subsequent SfM data (to assess the actual ability of the SfM data to predict snow depth distributions over the larger lidar domains). The dashed lines with open circle markers depict the performance of the simulated maps created using the relationships between the 1 February 2017 lidar and subsequent lidar data (to assess the potential performance if these relationships were captured perfectly).


#### **4. Discussion**

In this study, we show that RPAS-SfM can be a useful tool for multi-temporal snowpack monitoring. This monitoring provides accurate snow depth maps over small study plots that do not have too much forest cover, and it also offers the ability to capture relationships between current and past snow depths that can be used to estimate snow depths for larger areas. In particular, we show that for our study sites, limited spatial extent RPAS-SfM snow depth observations (when combined with past lidar snow depth observations) can be used to generate fairly accurate estimates of snow depth across the lidar domains (having R<sup>2</sup> and RMSE values of ~0.75–0.9 and 10–15 cm, respectively, as compared with observed lidar snow depth distributions). These estimates improve further when aggregating to larger pixel sizes (Figure 8). Note that even at the 1 m pixel scale, these values are comparable with those of the SfM snow depth maps themselves and a little better (higher R<sup>2</sup>, lower RMSE) than those for the first guess snow depth maps (compare Figure 8 with Table 1 and Table S1). This implies that SfM should be useful for multi-temporal snow monitoring that is applicable at basin scales. This is important given the high costs of repeated airborne lidar acquisitions. With the exception of a very few high-budget operations (e.g., NASA's Airborne Snow Observatory [29]), this high cost makes it infeasible to collect many repeated snow-on lidar datasets for most sites. In addition, RPAS-SfM may be especially suitable for operational snowpack monitoring because the SfM data can be processed relatively quickly [39].

The success of the extrapolations shown here depends on two critical factors. First, they rely on the fact that there is a strong relationship between snow depth distributions at different times. We tested this by comparing the snow depth data across the mid- and high- elevation lidar coverages at three different times, representing a range of snowpack conditions, from mid-accumulation to mid-ablation (Figure 5). Even though the shape of these relationships changed considerably at different times (i.e., it was more linear when comparing snow depths within the accumulation season and nonlinear when comparing accumulation season to ablation season snow depths), their strengths were consistent at different times (R<sup>2</sup> ~0.77 and ~0.90 for the mid- and high-elevation lidar coverage, respectively). This suggests that they should be useful for predicting snow depth distributions for other times as well.

The second critical factor affecting the success of these extrapolations is the ability of the distributed SfM data to capture these relationships. Our SfM data were generally able to capture these relationships; however, they did a better job for the montane plots than for the dense forest plot. This can be seen in the higher agreement statistics (Figure 8) and the smaller differences between the statistics describing the spatial distribution and spatial autocorrelation (Table 5) of the observed and simulated maps for the high-elevation lidar coverage. This ability to reproduce these spatial statistics is particularly important for applications that are based on statistics from lidar snow data (such as snow-cover depletion models [66]). The lower agreement between the observed and simulated maps for the mid-elevation lidar coverage could be because the dense forest plot (from which the SfM data is extrapolated) has flat topography and mostly uniform forest cover, and so contains a relatively small range of snow conditions. However, it could also be because SfM data quality is much poorer for densely forested conditions [20,67]. We tried to mitigate this by only considering pixels without an overhead canopy. While this improved the ability of the SfM data to capture these relationships, it also excludes under-canopy areas such as tree wells from their derivation. Nevertheless, this does not seem to have too much of an effect on the relationships themselves, as they are only sensitive to snow depth changes between the survey dates, which occur similarly for both canopy-covered and open areas (note the similarity between the thick and thin red dashed lines in Figure 5).

Although this method works well at our research sites, more research is needed to understand how well it works at other sites. It is well known that there is substantial similarity between snow distributions at similar points in the snow season (e.g., peak SWE) [68–73]. However, there are also substantial spatial differences between snow distributions during the accumulation and melt seasons [46,74–76]. This does not necessarily mean, though, that there are no strong relationships relating melt-season snow depth to accumulation-season snow depth (for our study sites, there *were* strong relationships; Figure 5), especially since spatial patterns of snow disappearance are related to the distribution of snowpack at the start of the melt season [77].

One thing that is likely to lead to greater difficulties when using the pre-existing lidar observations of snow distributions is land cover change (such as forest thinning, wildfire, or normal forest growth) because these changes are known to alter the snow depth patterns [4,6,9,78]. We showed in Figure 6 that the observed and predicted snow depth distributions were different following forest thinning treatments at the forest thinning comparison site. Climate variability and climate change may also cause issues, since snow distributions for a particular year are influenced by differences in the sequence, timing and magnitudes of snowfall/accumulation events, redistribution (due to wind), interactions with canopy, and melt [50,79,80]. Nevertheless, the high correlations for the snow depth relationships, shown in Figure 5, and for the simulated snow depth maps resulting from these relationships (in Figures 6 and 7) suggest that these differences do not necessarily make it harder to simulate snow depth patterns for different years. That is, even though the shape of these relationships can change under changing climate conditions, it is unclear how their strength would change.

In this study, the SfM snow depth data themselves are reasonably accurate, and under ideal conditions, they have comparable quality to airborne lidar. For example, at our montane plots, which have relatively gentle topography and sparse tree cover (average slope ~10% and average canopy cover ~15%), the SfM-generated snow depth maps generally have high correspondence with the airborne lidar-generated snow depth maps (with R<sup>2</sup> ~0.75 to 0.85 between the two) and comparable agreement with ground observations (RMSE ~10 cm for both technologies). For comparison, most studies that evaluate the performance of SfM for snow depth mapping find RMSEs of ~5–15 cm in open settings [36,37,40,48,67,81]. SfM can even produce reasonable snowpack maps under very low snow conditions [37]. For example, at the thinning comparison plot in March 2019 (average snow depth ~10 cm), there is good correspondence between the RPAS-SfM, airborne lidar, and ground snowpack measurements (Figure 3 and Tables 2 and 3). The performance of SfM (and, to a lesser extent, lidar) is markedly lower under dense forest conditions [20,67]. At our dense forest plot (average canopy cover ~45%), the correspondence between the SfM data and the ground measurements is lower than between the lidar data and the ground measurements (RMSE ~10 to 20 cm for the SfM maps vs. ~10 cm for the lidar maps). This performance difference is likely because SfM requires line of sight to collect the visual data needed to reconstruct the ground surface, while lidar pulses can split and provide multiple returns, and thus have a greater ability to penetrate dense tree canopies [27].

The quality of these SfM surveys depends on a variety of factors, including the conditions and parameters used during drone image acquisition, the type of environment being surveyed, and the success of each of the processing steps in Section 2.3. In terms of image acquisition, we found that our acquisition parameters (RPAS flight ~75 m above ground at ~5–7 m per second, with ~75–80% image overlap, using the RPAS's automatic camera exposure adjustments) were sufficient to produce good-quality point clouds for our study plots. The most significant issues that we experienced were due to deep tree shadows. Although these were not a problem under sparse canopy conditions (in the montane plots), they caused problems for point reconstruction under dense canopy conditions (in the dense vegetation plot). In heavily shaded areas, the dark and poorly resolved ground surface caused spurious noise in the point clouds, which in turn led to problems with the ground filtering (already a substantial source of uncertainty [82–84]). Ultimately, this led to spurious snow depth variability for the dense forest plot (such as the artifacts seen in Figure 3).

The automated post-processing steps described in Section 2.3 were also quite important for producing high-quality snow depth maps. Since many of our point clouds were directly georeferenced with onboard GPS data, we relied upon these steps to ensure accurate georegistration of the SfM point clouds relative to the existing snow-off data. While this approach is capable of producing accurately georeferenced point clouds [85], it increases the reliance on these post-processing steps to produce accurate snow depth maps. In particular, it relies on sufficiently accurate first-guess snow depth maps to add to the snow-off data so that they can be used to remove the tilting and warping issues that sometimes affect SfM point clouds [86]. For our study sites, these maps (Figure S2) were reasonably accurate (typically having RMSEs of 10–15 cm, slightly higher than the final SfM maps; compare Table 2 and Table S1), allowing them to be used reliably in our SfM post-processing steps.
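As one illustration of this kind of correction, a first-order (planar) tilt can be estimated by least squares from the residuals between the SfM surface and the reference surface (snow-off DEM plus first-guess snow depth) and then subtracted. This is a simplified sketch of the general idea only; the processing scripts linked in the Supplementary Materials document the method actually used, which may be higher order:

```python
import numpy as np

def remove_planar_tilt(x, y, residual):
    """Fit a plane a*x + b*y + c to the residuals between an SfM
    surface and a reference surface, and return the de-tilted
    residuals.

    Illustrative only: real SfM warping can be higher order than a
    simple tilt plane."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return residual - A @ coeffs
```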

Although the high-resolution snow depth mapping afforded by lidar and SfM is already one of the best ways to monitor distributed snowpack properties in mountain forests, taking the extra step of estimating SWE is enormously valuable because SWE is the snowpack variable of most interest in hydrological applications [28,61]. Broxton et al. [54] showed that it is important to consider snow density differences across space at our sites: even though snow depth variability accounts for a majority of SWE variability, snow density can still show substantial spatial variability on a given date (varying by more than 75% across these research sites). This is in line with other studies finding substantial spatial variability of snow density [62,87–90]. Following [54], we use machine learning of how field snow density measurements relate to various lidar-derived physiographic variables to produce maps of snow density, which are then combined with the snow depth maps to produce maps of SWE (see Section 2.3). These maps validate fairly well against snow density and SWE observations at our research sites. The RMSEs of these snow density maps range from 0.024 to 0.033 g/cm<sup>3</sup>, and the RMSEs of the resulting SWE estimates range from ~2 to 4 cm [54]. For comparison, the RMSEs of the extrapolated SWE maps range from ~4–5 cm at the 1 m scale to ~2–3 cm at the 1000 m scale (Figure 8).
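The depth-to-SWE conversion itself is a simple per-pixel product of the snow depth map and the modeled snow density map. A sketch with assumed units (depth in m, density in g/cm<sup>3</sup>):

```python
import numpy as np

def swe_map(depth_m, density_g_cm3):
    """Combine a snow depth map (m) with a snow density map (g/cm^3)
    to get SWE in cm of water equivalent.

    SWE = depth * (rho_snow / rho_water); with water at 1.0 g/cm^3,
    multiplying depth in m by density and by 100 yields cm of SWE."""
    return 100.0 * np.asarray(depth_m) * np.asarray(density_g_cm3)
```

For example, 1 m of snow at a density of 0.25 g/cm<sup>3</sup> corresponds to 25 cm of SWE.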

Overall, for multitemporal monitoring of snowpack where SfM is used as a tool to complement airborne lidar, we would recommend:


The existence of snow-on lidar data makes these planning tasks easier, as objective criteria (based on these data) can be used for site selection. The RPAS flight acquisition parameters (see Section 2.1) and processing steps (see Section 2.2) used in this study seemed adequate to produce good-quality SfM snow maps, but additional improvements could likely be made. For example, using more exact positioning (e.g., RTK GPS, which can provide cm-level precision [91–93]) could help with model positioning and reduce the warping and tilting of the SfM models, thus reducing the importance of post-processing to correct the point cloud data. Also, drone-based lidar would likely perform better than SfM in areas with thick canopy [20] (with the obvious drawback that it is much more expensive). Finally, using a fixed-wing aircraft that can cover larger areas may improve the ability to capture the snow spatial variability needed for basin-scale snow estimation, though this option may be limited in some denser forests, due to line-of-sight requirements in places such as the US.
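The extrapolation step discussed above amounts to fitting a relationship between current (SfM) and prior (lidar) snow depths over the monitoring plot and applying it across the full lidar coverage. A hypothetical linear sketch (the actual relationships in Figure 5 may be nonlinear and stratified by terrain and canopy class):

```python
import numpy as np

def extrapolate_from_lidar(plot_lidar, plot_sfm, full_lidar, deg=1):
    """Fit a polynomial relating current (SfM) snow depths to a prior
    lidar snow depth survey over a small plot, then apply it to the
    full lidar coverage to estimate basin-scale snow depth.

    A simplified sketch; a single global polynomial is assumed here."""
    coeffs = np.polyfit(np.ravel(plot_lidar), np.ravel(plot_sfm), deg)
    est = np.polyval(coeffs, np.asarray(full_lidar, dtype=float))
    return np.clip(est, 0.0, None)  # snow depth cannot be negative
```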

#### **5. Conclusions**

Although airborne lidar has revolutionized our ability to make distributed measurements of snowpack in mountain forests, it is expensive and time-consuming to process, making it hard to use for real-time or multi-temporal snow monitoring. This study demonstrates that SfM based on multi-angle imagery from a multi-rotor RPAS and airborne lidar can provide complementary information for high-quality snowpack monitoring: RPAS allows for relatively low-cost, on-demand snow monitoring over small plots, while airborne lidar provides monitoring over a much larger area at a higher cost. The existence of snow-on lidar data may allow subsequent SfM data to be extrapolated to much larger areas by comparing the spatially distributed snow depths from the SfM data with those of already-measured lidar snow depths. This would make SfM data much more useful for basin-scale hydrological applications such as Land Surface Model evaluation and water supply monitoring. At our study sites, using the relationships developed between the small-plot SfM data and previously flown airborne lidar resulted in fairly accurate snowpack estimates for the larger domains (R<sup>2</sup> values between the extrapolated maps and the observed lidar data used for verification are ~0.7–0.9, and RMSEs between the two are ~10–15 cm at a 1 m pixel scale, with even higher performance when aggregating the data to larger pixels). However, the methodology proposed here may not always be appropriate (e.g., after extensive land cover changes), and more research is needed to understand its effectiveness at other research sites. Ultimately, successful integration of SfM and airborne lidar will be important for cost-effective multitemporal monitoring of snowpack.

**Supplementary Materials:** Supplemental figures and tables are available online at http://www.mdpi.com/2072-4292/12/14/2311/s1. Codes used to process the SfM data are available at https://github.com/broxtopd/SFM-Processing and snow depth maps are available at https://climate.arizona.edu/data/SfM/.

**Author Contributions:** Conceptualization, P.D.B. and W.J.D.v.L.; Data curation, P.D.B. and W.J.D.v.L.; Formal analysis, P.D.B.; Funding acquisition, W.J.D.v.L.; Investigation, P.D.B. and W.J.D.v.L.; Methodology, P.D.B.; Project administration, W.J.D.v.L.; Resources, W.J.D.v.L.; Software, P.D.B.; Supervision, W.J.D.v.L.; Visualization, P.D.B.; Writing—original draft, P.D.B.; Writing—review & editing, W.J.D.v.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research is supported by the Salt River Project (SRP) Agricultural Improvement and Power District in Tempe, Arizona.

**Acknowledgments:** SfM data were generated from imagery acquired by the authors using a DJI Phantom 3 and 4 Pro RPAS. Airborne lidar data are derived from a snow-free lidar acquisition for the Four Forests Restoration Initiative (4Fri), and snow–on lidar acquisitions and field snow surveys acquired through funding provided by SRP. Computing resources are provided by the Arizona Remote Sensing Center. We would like to thank the many volunteers from the NRCS, USFS, USDA-ARS, University of Arizona, Northern Arizona University, and SRP who helped with our field data collections. We also thank Jeff Gillan for providing advice for acquiring and processing the SfM data.

**Conflicts of Interest:** The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Use of UAV-Photogrammetry for Quasi-Vertical Wall Surveying**

#### **Patricio Martínez-Carricondo 1,2,\*, Francisco Agüera-Vega 1,2 and Fernando Carvajal-Ramírez 1,2**


Received: 8 June 2020; Accepted: 9 July 2020; Published: 10 July 2020

**Abstract:** In this study, an analysis of the capabilities of unmanned aerial vehicle (UAV) photogrammetry to obtain point clouds from areas with a near-vertical inclination was carried out. For this purpose, 18 different combinations were proposed, varying the number of ground control points (GCPs), the adequacy (or not) of the distribution of GCPs, and the orientation of the photographs (nadir and oblique). The results have shown that under certain conditions, the accuracy achieved was similar to those obtained by a terrestrial laser scanner (TLS). For this reason, it is necessary to increase the number of GCPs as much as possible in order to cover a whole study area. In the event that this is not possible, the inclusion of oblique photography ostensibly improves results; therefore, it is always advisable since they also improve the geometric descriptions of break lines or sudden changes in slope. In this sense, UAVs seem to be a more economic substitute compared to TLS for vertical wall surveying.

**Keywords:** UAV; photogrammetry; 3D-model; surveying; vertical wall

#### **1. Introduction**

Topographical surveys of surfaces with high angles of inclination, such as nearly vertical slopes on roads or motorways, as well as surveys of architectural facades, are a technical challenge, the main concern of which is ensuring the safety of the operators responsible for carrying out the survey. Technical equipment has traditionally been used to guarantee this safety by preventing the operator from having to access the survey area. Thus, for road slopes, for example, total stations without prisms have been used, while architectural facades have additionally been surveyed with the aid of rectified photographs. However, since terrestrial laser scanners (TLS) came on the market, their use has contributed to improving the effectiveness of these kinds of studies, as well as those in other fields (e.g., cartography, geographic information systems, spatial planning, industry, forestry). TLS measures the time of flight of an emitted laser pulse that is reflected off an intervening feature and returned to the sensor, thus resulting in a range measurement [1]. Because the laser arrives directly at the surface of the object and is reflected from it, this technology can precisely acquire spatial coordinates with an error that depends on the range, which usually varies between 1 and 10 mm. However, TLS is expensive, and there are times when its use is limited due to certain circumstances that can distort and introduce error in measurements, including penetration and diffused reflection of the beam [2], or shadows that produce gaps in the point cloud. At present, there is a trend to complement this technology with unmanned aerial vehicles (UAVs) carrying digital cameras.

In recent years there has been a growing interest in UAVs from the scientific community, as well as geomatics professionals and software developers, which has led to their use in increasing applications related to architecture and engineering [3,4]. In fact, UAVs were first used for military applications [5] and then for civilian purposes [6] such as precision agriculture [7,8], forestry studies [9,10], fire monitoring [11,12], cultural heritage and archaeology [13–15], traffic monitoring [16,17], environmental surveying [18,19], and 3D reconstruction [20–22]. UAV photogrammetry is gaining ground in the gap between traditional surveying methods and photogrammetric flights performed with conventional aircraft. Depending on the extent of the area, UAVs are more competitive because they offer greater flexibility while requiring less time to acquire data, and, additionally, they represent a significant cost reduction compared to the use of traditional aircraft [23].

The combination of computer vision and photogrammetry [24] has allowed great advances in the automation process by highlighting the use of images with different tilt angles and at different heights [25]. There are several software packages that allow photographs taken with conventional cameras to obtain 3D reconstruction by means of point clouds. The majority of these software packages are based on special algorithms, such as Structure-from-Motion (SfM) [26–28]. SfM is an algorithm that automatically reconstructs the geometry of the scene, the positions and the orientation from which the photographs were taken, without the need to establish a point network with known 3D coordinates [29,30]. SfM incorporates multi-view stereopsis (MVS) techniques [31], which derive a 3D structure from overlapping photography, acquired from multiple locations and angles [32], and applies them to a scale-invariant feature transform (SIFT) operator for key-point detection. This generates 3D point clouds from photographs. Contrary to classical aerial photogrammetry, which requires sophisticated flight planning and pre-calibration of cameras [33], SfM simplifies the process, thus eliminating the need for exhaustive planning or camera calibration, and allowing for the use of images from different cameras. The result of processing this algorithm is a point cloud without scale or orientation, whose georeferencing can be obtained by direct methods through the use of photographs with EXIF data or by indirect methods using ground control points (GCPs) [34]. There are numerous studies that analyze the effect of different parameters on the accuracy of products obtained by UAV photogrammetry [35–40]. Of all of them, the number of GCPs is of special importance [41], as is their distribution and the use of photographs with different inclination angles [42]. 
In summary, UAV photogrammetry has shown a great deal of development in recent years and is increasingly used in situations where other techniques are less efficient or simply not feasible. Therefore, it is necessary to continue developing specific methodologies to obtain accurate results using UAV photogrammetry in extreme topographic situations, such as dealing with quasi-vertical walls.

The goal of this study is to validate the specific use of point clouds obtained by UAV photogrammetry for the topographic survey of walls or facades that have an inclination close to vertical. For this purpose, 18 scenarios have been proposed that use different combinations of GCPs and photograph orientations and adopt adequate (or inadequate) GCP distributions. All of these scenarios have been compared with the one obtained through a TLS, resulting in certain parameters for UAV photogrammetry, with accuracy that is comparable to that obtained through TLS.

#### **2. Materials and Methods**

The flow followed in this work is shown in Figure 1. In summary, work begins with a global navigation satellite system (GNSS) survey of the dam edges and georeferencing targets. The points obtained on the dam edges are then used to interpolate seven profiles along the dam cross section that are then used to validate the point cloud obtained by the TLS. The targets are used indistinctly for the georeferencing of the 18 UAV photogrammetry projects and for quality control. Once the point clouds are obtained by UAV photogrammetry, they are compared with that obtained by TLS.

*Remote Sens.* **2020**, *12*, 2221 3 of 23

**Figure 1.** Workflow followed in this work.

#### *2.1. Study Site*

The study area was centered on the Isabel II Dam (1841–1857), located in the province of Almería (Spain), about 7 km from the village of Níjar (Figure 2). Construction on the dam began with the foundation works in 1841, although it was not inaugurated until 8 May 1850, without the completion of the canals or offices, which would last until 1857. The reservoir was never completely filled, and the project failed due to several faults in the calculations made in the hydrological and pluviometric studies of the area.

**Figure 2.** Location of the study site.

However, this dam is one of the most spectacular, and simultaneously least known, elements of Spain's hydraulic heritage. It is one of the few examples of the great hydraulic work undertaken in the 19th century, as well as a key world reference for stone arch-gravity dams [43].

It was selected because of its downstream profile, with walls that are very close to vertical; its total height at its central point is 31 m above the riverbed, as shown in Figure 3.


**Figure 3.** View perspective of the study area.

#### *2.2. GNSS Surveying of Dam Edges and Ground Control Points*

Before performing the photogrammetric flights and TLS scanning, a traditional GNSS survey was carried out, materializing a total of 113 points distributed along the dam's downstream face, mostly on the edges of the steps, which allowed for their subsequent complete interpolation by means of cabinet work, as shown in Figure 4a. Of all these points, 17 were materialized by means of targets that allowed for their later viewing in photographs taken by the UAV. The targets used consisted of a red paper of size A3 (420 × 297 mm) with four quadrants, two of them black. Figure 4b shows an example of one of these targets. The three-dimensional (3D) coordinates of these targets were measured with a GNSS receiver operating in post-processing kinematic (PPK) mode, with the base emitting corrections at a point near the dam, as shown in Figure 4c. Both rover and base GNSS receivers were Trimble R6 systems. The 3D coordinates of the base, corrected via the Trimble Centerpoint RTX Post-Processing Service, were 574909.418, 4093250.721, and 372.012 m (European Terrestrial Reference System 1989, ETRS89, and EGM08 geoid model).

From the data obtained by GNSS, seven theoretical profiles were made along the entire downstream face of the dam, numbered P1 to P7. These theoretical profiles obtained by GNSS are shown in Figure 5.



**Figure 4.** (**a**) Surveying the edges of the dam using global navigation satellite system (GNSS); (**b**) example of target; (**c**) GNSS receiver base applied to this work.

**Figure 5.** (**a**) Plan view of theoretical profiles acquired by GNSS; (**b**) south view of seven profiles P1–P7.


#### *2.3. Topographic Surveying Using TLS*

In order to meet the objective of this research, it was necessary to obtain a complete point cloud of the study area that would allow its later comparison with the products obtained by UAV photogrammetry. To this end, a complete scan of the dam's downstream face was carried out using TLS.

The TLS provides a three-dimensional quasi-constant point cloud of the observed objects. The accuracy of this point cloud depends mainly on the distance between the scanner and the observed object. In recent years, the application of this equipment is increasing exponentially [44–47]. TLS measurements are based on the time elapsed by a laser ray emitted by the scanner and reflected by the object. From the time measurement, the distance is obtained and transformed into real-time coordinates. The working guidelines are described in depth in [48].
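The range computation underlying these measurements is simply half the round-trip travel time of the pulse multiplied by the speed of light. A generic time-of-flight calculation, not tied to any particular scanner:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_time_s):
    """Range from a time-of-flight laser pulse: the pulse travels to
    the target and back, so the one-way range is c * t / 2."""
    return C * round_trip_time_s / 2.0

# An 800 ns round trip corresponds to a range of about 120 m
```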

#### 2.3.1. Data Acquisition


In this paper, a Trimble TX8 Scanner (Figure 6) was utilized for the topographical survey of the dam. This scanner measures almost 1 million points per second, and its maximum scanning range is 120 m, but can extend up to 340 m under favorable conditions. The wavelength of the laser is 1.5 µm, and the scanning frequency is 1 MHz. Advertised measurement accuracy is ±2 mm, and the angular resolution is 0.07 mrad. It also includes an integrated HDR camera with 10 MP resolution. This scanner was used by [49] to measure the deflections of a technological suspension bridge above the Odra River (Southern Poland).

**Figure 6.** Trimble TX-8 Scanner.

Due to the geometry of the study area, it was not possible to capture the entire wall from a single location. Therefore, the measurements for the TLS point cloud acquisition were taken from four different station points, as shown in Figure 7.

**Figure 7.** (**a**) View of the 4 stations from which the complete scanning was carried out and 5 points for georeferencing; (**b**) view from Station 1; (**c**) view from Station 2; (**d**) view from Station 3; (**e**) view from Station 4.

The point cloud for Station 1 had a total of 57,348,448 points. Station 2 had 119,268,180 points, Station 3 had 96,323,508 points, and Station 4 had 274,417,074 points.

#### 2.3.2. Data Processing

Processing of the point clouds obtained by TLS was carried out using Trimble RealWorks software. The first step, once the clouds were loaded, was the relative registration between them. To obtain a fine registration of all TLS datasets, cloud-to-cloud registration was used, which required searching for common tie points between each pair of clouds. The four point clouds were then merged into a single point cloud. This processing methodology was used by [50], who reported errors between 11 and 19 mm; by [51], with an error of 13 mm; and by [52], where a maximum error of 42 mm was found between scans from two different stations. The results of the registration in this work are shown in Table 1.


**Table 1.** Result of the registration of the 4 scanning stations.

Once registration of the four scanning stations was complete, the indirect georeferencing of the merged point cloud was carried out in absolute coordinates using the ETRS89 UTM zone 30 N reference system. For this purpose, five control points were used, measured by GNSS, the position of which is shown in Figure 7a. The results of the indirect georeferencing adjustment are shown in Table 2.
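Conceptually, indirect georeferencing solves for the rigid transform (rotation plus translation) that best maps the scanner's local coordinates of the control points onto their GNSS coordinates. A least-squares sketch using the standard SVD (Kabsch) solution; Trimble RealWorks performs an equivalent adjustment internally, possibly with additional scale or weighting:

```python
import numpy as np

def rigid_georeference(src, dst):
    """Least-squares rigid transform (rotation R, translation t) that
    maps local scanner coordinates `src` onto control point
    coordinates `dst` (both (N, 3) arrays), via the Kabsch/SVD method.

    Requires at least 3 non-collinear control points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```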


**Table 2.** Result of the indirect georeferencing of the project.

The final TLS point cloud resulting from combining the four individual point clouds can also be seen in Figure 7a. To reduce the size of the resulting point cloud, limits were established according to the area of interest, resulting in a point cloud of 74,447,399 points.

#### *2.4. Topographic Surveying Using UAV Photogrammetry*

#### 2.4.1. Image Capture

The images used in this study were captured by a rotary-wing UAV with four rotors, a DJI Phantom 4 Pro. This equipment has a navigation system that uses GPS and GLONASS. In addition, it is equipped with front, rear, and lower vision systems that allow it to detect surfaces with defined patterns and adequate lighting and to avoid obstacles at a range between 0.2 and 7 m. The Phantom 4 RGB camera is equipped with a one-inch, 20-megapixel (5472 × 3648) sensor and has a manually adjustable aperture (from F2.8 to F11). The lens has a fixed focal length of 8.8 mm and a horizontal field of view (FOV) of 84°.

The acquisition of photographs from a UAV takes place via airborne photogrammetry, whereby a block of photographs is taken from parallel flight lines that are flown in a snake pattern at a stable altitude with constant overlap and a vertical camera angle (90°) [53]. However, the integration of oblique photographs can reduce the systematic deformation resulting from inaccurate calculations to determine the internal geometry of the camera in modern SfM–MVS photogrammetry [54,55]. There is evidence that oblique photographs contribute to the integrity of the point cloud reconstruction [51,56]. For example, [51] studied the influence of the angle of the oblique images, ranging from 0 to 35°, and compared the results with the TLS. They concluded that oblique images are indispensable to the improvement of accuracy and the decreasing of systematic errors in the final point cloud. The results also suggested that oblique camera angles of between 20 and 35° increased accuracy by almost 50%, relative to blocks with only nadir images.

The images were obtained from two independent flights. The first was carried out in automatic pilot mode through the DJI GS Pro application, and a total of 207 nadir photographs were obtained in 13 passes. The flight height was set at 36 m above the dam crest, which is equivalent to a ground sample distance (GSD) of 1.3 cm. To obtain side and forward overlaps of 65% and 80%, respectively, the camera took a shot every two seconds.

The second flight was made in manual mode to obtain oblique photographs, in order to capture all the details of the dam's geometry. This flight was carried out at an approximate distance of 30 m from the downstream face of the dam and was executed in seven different passes, parallel to the dam and at varying altitudes. A total of 372 photographs were taken (including those of the control building and the guard, which are not relevant to this study). Thanks to the camera's wide FOV, it was possible to cover the entire dam without moving too far away from it. To avoid the appearance of the horizon in the photographs, an inclination angle of about 45° was adopted. No frontal photographs were taken, as much of the study area consisted of horizontal surfaces. In total, 579 photographs with different points of view and scales were used to process the photogrammetric projects. Figure 8 shows the image overlap and camera locations for both flights.
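The flight geometry above can be sanity-checked with a short sketch. It assumes the nominal 1-inch sensor width of 13.2 mm (not stated in the text); at 36 m above the crest the nadir GSD works out to about 1.0 cm, so the reported average of 1.3 cm corresponds to a mean camera-to-ground distance of roughly 47 m, which is plausible where the terrain drops below the crest.

```python
def gsd_cm(height_m, focal_mm=8.8, sensor_width_mm=13.2, image_width_px=5472):
    """Ground sample distance (cm/px) for a nadir shot at a given height.
    The 13.2 mm sensor width is the nominal 1-inch format width (assumption)."""
    return height_m * sensor_width_mm / (focal_mm * image_width_px) * 100

def shot_interval_s(height_m, ground_speed_ms, forward_overlap=0.80,
                    image_height_px=3648):
    """Seconds between shots needed to hold the given forward overlap."""
    footprint_m = gsd_cm(height_m) / 100 * image_height_px  # along-track footprint
    return footprint_m * (1 - forward_overlap) / ground_speed_ms

print(round(gsd_cm(36), 2))  # ~0.99 cm at crest level
print(round(gsd_cm(47), 2))  # ~1.29 cm, close to the reported 1.3 cm
```

Under these assumptions, the two-second shot interval used for the first flight corresponds to a ground speed of about 3.6 m/s at crest height.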

**Figure 8.** (**a**) Image overlaps and camera locations from nadiral flight; (**b**) image overlaps and camera locations from oblique flight.

#### 2.4.2. Image Processing

The photogrammetric projects were executed using Agisoft Metashape Professional software, version 1.6.1.10009. This software is based on the SfM algorithm and runs in three independent steps. In the first step, all images are aligned by identifying and tying common points. During this process, the software estimates the camera's internal and external orientation parameters, including non-linear radial distortion. The software only needs the focal length value, which it obtains directly from the EXIF data of the photographs. This step was carried out with the accuracy setting at "medium" in order to reduce the processing time. The result of this step is the camera positions and orientations, as well as the internal calibration parameters and the 3D relative coordinates of a sparse point cloud in the area of interest. In the second step, once the optimization and adjustment of the camera model have been completed, the sparse point cloud is referred to an absolute coordinate system, in our case ETRS89 UTM 30N, and the point cloud is densified. This was also carried out with the accuracy set to "medium". The resulting cloud then needs to be cleaned up, which is done manually, to eliminate all of the wild points not belonging to the model. The result is a highly detailed point cloud, from which a mesh can be obtained; in this study, this was done using the height field method. In the third step, texture is applied to the mesh, and finally the orthophoto, digital surface model (DSM), and point cloud can be exported, the latter in \*.las format.

The bundle adjustment can be carried out using as few as three ground control points (GCPs), but more accurate results are obtained if more GCPs are used; thus, it is recommended that more GCPs be used to obtain optimal accuracy [57,58]. In this study, 17 targets placed on the dam were used as GCPs to georeference the project. The remaining targets, not used in the bundle adjustment, were used as control points (CPs) to evaluate the accuracy of each photogrammetric project, according to the root mean square error (RMSE) formula described in [35]. Numerous studies have corroborated that increasing the number of GCPs improves the accuracy of UAV photogrammetric projects. For example, [59] needed approximately six to seven GCPs to obtain an accuracy of about 15.6 cm, and specified that it was necessary to increase the number of GCPs to reduce the error. In [60], a study was conducted to analyze the optimal number of GCPs for volumetric measurements in open-pit mines; they concluded that about 15 GCP/km<sup>2</sup> were needed for accuracies of about 5.0 cm. In [61], an extensive study was carried out with over 3000 combinations, concluding that an increase in GCPs improves accuracy, limited by the project's ground sample distance (GSD). In addition, [62] studied the influence of GCP distribution on the accuracy of photogrammetric projects and concluded that the vertical error is proportional to the distance to the nearest GCP. In their case, they obtained vertical RMSEs that ranged from 15.6 cm for three GCPs to 5.9 cm for 101 GCPs.

In total, 18 photogrammetric projects were carried out, varying the orientation of the photographs (nadiral, oblique, or both), the number of GCPs used for georeferencing (3, 5, or 7), and the adequacy of the GCP distribution. Following [41], an adequate distribution is one in which GCPs are arranged along the edge of the study area while the interior is covered homogeneously; distributions that do not meet these criteria are considered inadequate. Table 3 shows a summary of the executed photogrammetric projects, and Figure 9 shows the different combinations of numbers and distributions of GCPs.
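The 18 projects are simply the full factorial combination of the three factors. A minimal sketch, using the Yes/Not distribution labels that appear in Figure 9 and the later figures:

```python
from itertools import product

orientations = ["Nadiral", "Oblique", "Nadiral&Oblique"]
gcp_counts = [3, 5, 7]
distributions = ["Yes", "Not"]  # adequate / inadequate GCP distribution

# One label per project, matching the naming used in Figures 12-14.
projects = [f"{o}-{n}GCPs-{d}"
            for o, n, d in product(orientations, gcp_counts, distributions)]
print(len(projects))  # 18
print(projects[0])    # Nadiral-3GCPs-Yes
```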


**Table 3.** Summary of photogrammetric projects carried out.


**Figure 9.** Number and distribution of ground control points (GCPs). Blue plots represent GCPs and red plots represent control points (CPs). (**a**) Three GCPs and adequate distribution; (**b**) three GCPs and inadequate distribution; (**c**) five GCPs and adequate distribution; (**d**) five GCPs and inadequate distribution; (**e**) seven GCPs and adequate distribution; (**f**) seven GCPs and inadequate distribution.

#### 2.4.3. Accuracy Assessment

The accuracy was evaluated by computing RMSE<sub>X</sub>, RMSE<sub>Y</sub>, and RMSE<sub>Z</sub>, as well as RMSE<sub>T</sub> (total error), measured on each CP.
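Assuming the usual definition of the component and total RMSE at check points (the text cites the formula in [35], which is not reproduced here), the computation can be sketched as:

```python
import math

def rmse_components(residuals):
    """residuals: list of (dx, dy, dz) errors at each check point (CP), in metres.
    Returns (RMSE_X, RMSE_Y, RMSE_Z, RMSE_T). RMSE_T combines the three axes,
    which is the conventional definition of total error (assumption here)."""
    n = len(residuals)
    rx = math.sqrt(sum(dx * dx for dx, _, _ in residuals) / n)
    ry = math.sqrt(sum(dy * dy for _, dy, _ in residuals) / n)
    rz = math.sqrt(sum(dz * dz for _, _, dz in residuals) / n)
    return rx, ry, rz, math.sqrt(rx * rx + ry * ry + rz * rz)
```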

#### *2.5. Point Cloud Management*

Prior to the above, and in order to compare the profiles obtained by GNSS with the point cloud obtained by TLS, the cloud-to-mesh (C2M) tool offered by CloudCompare v2.8 [63] was used. A mesh was first obtained from the TLS point cloud using the Delaunay 2.5D tool; once obtained, the C2M algorithm was applied to corroborate the adequate georeferencing of the TLS cloud and its validity for use as a reference for comparison against the photogrammetric projects.
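A rough vertical analogue of this C2M check can be sketched with SciPy: triangulate the TLS cloud in plan view (the 2.5D Delaunay idea) and measure the vertical offsets of the GNSS profile points against the interpolated surface. Note that CloudCompare's C2M computes true point-to-triangle distances; this vertical simplification is only indicative, and only on gently sloping surfaces.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def vertical_c2m(tls_xyz, query_xyz):
    """Approximate cloud-to-mesh check on a 2.5D surface:
    triangulate the TLS cloud in the XY plane, interpolate its height at each
    query point, and return the vertical offsets (query z minus mesh z)."""
    tri = Delaunay(tls_xyz[:, :2])                    # Delaunay 2.5D in plan view
    interp = LinearNDInterpolator(tri, tls_xyz[:, 2])
    return query_xyz[:, 2] - interp(query_xyz[:, 0], query_xyz[:, 1])
```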

A multiscale model-to-model cloud comparison (M3C2) tool was used to compare the point clouds generated by the photogrammetric projects with the point cloud generated by the TLS. This tool robustly calculates the signed distance between two point clouds [64].

The principles of operation of this tool are described in [41]. M3C2 output consists of, amongst other data, a text file with the *x-*, *y-*, and *z-*coordinates for each point of the reference cloud and the 3D distance associated with the comparison. All data can be displayed using a color scale to highlight the resulting scalar field.
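Such an M3C2 export can be summarized (mean and SD of the signed distances, as reported in Section 3.3) with a small parser. The column name and delimiter below are assumptions, as CloudCompare exports vary:

```python
import csv
import math

def m3c2_stats(path, dist_col="M3C2 distance"):
    """Mean and (population) SD of the signed M3C2 distances in an export file.
    Assumes a delimited text file with a header naming the distance column;
    adjust dist_col and the delimiter to match the actual export."""
    with open(path, newline="") as f:
        rows = csv.DictReader(f)
        d = [float(r[dist_col]) for r in rows if r[dist_col] not in ("", "nan")]
    mean = sum(d) / len(d)
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
    return mean, sd
```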

#### **3. Results**

#### *3.1. Validation of Data Derived From TLS in Comparison With Theoretical Profiles Obtained by GNSS*

Table 4 shows a summary of the comparison between the profiles obtained by GNSS and the model derived from the point cloud obtained by TLS. For the seven profiles studied, the average of the mean differences is below 0.03 cm with a standard deviation of less than 3 cm, which ensures the correct georeferencing of the TLS point cloud.


**Table 4.** Summary of comparison between profiles obtained by GNSS and data derived from a terrestrial laser scanner (TLS).

Figure 10a graphically represents the differences across the seven profiles obtained by GNSS with respect to the model derived from the TLS point cloud. Figure 10b shows an example of the error distribution for Profile 2, in which it can be seen that most of the differences are close to 0; the larger errors that did appear were in most cases due to the presence of shrubs growing on the dam face, as shown in Figure 10c.


**Figure 10.** (**a**) Errors between GNSS and TLS along the seven profiles studied (units in meters); (**b**) errors in Profile 2 (units in meters); (**c**) example of bushes growing on the face of the dam.

#### *3.2. RMSE of UAV-Photogrammetric Projects*

Figure 11 shows the accuracy results obtained in the 18 photogrammetric projects through the evaluation of the RMSE.

From the analysis of the data obtained, it is clear that an adequate distribution of GCPs is important for minimizing errors, in line with the results of [41]. The average error in the projects with inadequate distribution was around 8.5 cm, while in the projects with adequate distribution it averaged 4.0 cm, an improvement of more than 50%. As indicated in [57], increasing the number of GCPs improves the accuracy of photogrammetric projects, regardless of whether the distribution of GCPs is adequate or not. In this study, however, a slight worsening was found for 7 GCPs compared with 5 GCPs: for 3 GCPs the average error was 7.1 cm, for 5 GCPs it was 5.5 cm, and for 7 GCPs it was 6.2 cm. This anomaly was only found for the projects with inadequate distribution; for the projects with adequate distribution, the error decreased as the number of GCPs increased (4.8 cm for 3 GCPs, 4.0 cm for 5 GCPs, and 3.3 cm for 7 GCPs). Regarding the orientation of the photographs, an average error of 7.9 cm was obtained for projects with nadiral photographs, 5.0 cm for projects with oblique photographs, and 6.0 cm for projects that combined both orientations. For the projects with inadequate distribution, the best results were obtained with oblique photographs, with an average of 6.2 cm. For an adequate distribution, however, the best results were obtained with nadiral photographs, with an average of 3.7 cm, as opposed to 3.8 cm for oblique photographs and 4.6 cm for projects that combined both orientations.

#### *3.3. Vertical Distances between the Point Clouds Obtained by TLS and UAV Photogrammetry*

For all studied scenarios, the point clouds were evaluated against reference data acquired by the TLS. Several studies in the scientific literature have evaluated vertical distances between clouds obtained by UAV photogrammetry and reference clouds. For example, [65] evaluated the capacity of UAV photogrammetry to obtain point clouds in quarries with near-vertical walls. They corroborated the improvement in the description of break lines and in the accuracy of cross profiles with vertical walls when using oblique photographs, comparing the UAV-photogrammetry clouds with profiles obtained by a total station. One of their main conclusions was that the scenario with nadir photographs resulted in smoother geometry and a more gradual transition between vertical and horizontal surfaces than the scenario involving nadir and oblique photographs.

Figure 12 shows the vertical distances between the cloud obtained by TLS and the six UAV-photogrammetry projects executed with 3 GCPs. For the project with nadiral photographs and an adequate distribution of GCPs, the M3C2-calculated vertical distances were distributed approximately as a Gaussian, with a mean of −0.075 cm and a standard deviation (SD) of 8.335 cm. For the project with nadiral photographs and an inadequate distribution, a mean distance of −7.806 cm and an SD of 13.586 cm were obtained. For the project with oblique photographs and an adequate distribution, a mean distance of 0.068 cm and an SD of 6.406 cm were obtained. For the project with oblique photographs and an inadequate distribution, a mean distance of −21.597 cm and an SD of 22.466 cm were obtained. For the project with nadiral and oblique photographs and an adequate distribution, a mean distance of 0.195 cm and an SD of 7.066 cm were obtained. For the project with nadiral and oblique photographs and an inadequate distribution, a mean distance of −24.692 cm and an SD of 26.988 cm were obtained. For this scenario with inadequate distribution, the use of oblique photographs increased the average absolute distance between the clouds by 47%; for an adequate distribution, however, it reduced the average cloud distance by 30%.


**Figure 12.** UAV-photogrammetric projects with 3 GCPs (units in meters). (**a**) Nadiral-3GCPs-Yes; (**b**) Nadiral-3GCPs-Not; (**c**) Oblique-3GCPs-Yes; (**d**) Oblique-3GCPs-Not; (**e**) Nadiral&Oblique-3GCPs-Yes; (**f**) Nadiral&Oblique-3GCPs-Not.

Figure 13 shows the vertical distances between the cloud obtained by TLS and the six UAV-photogrammetry projects executed with 5 GCPs. For the project with nadiral photographs and an adequate distribution of GCPs, an average distance of 0.492 cm and an SD of 7.236 cm were obtained. For the project with nadir photographs and an inadequate distribution, a mean distance of −3.015 cm and an SD of 7.501 cm were obtained. For the project with oblique photographs and an adequate distribution, a mean distance of 1.109 cm and an SD of 6.092 cm were obtained. For the project with oblique photographs and an inadequate distribution, a mean distance of −1.418 cm and an SD of 6.303 cm were obtained. For the project with nadiral and oblique photographs and an adequate distribution, a mean distance of 1.361 cm and an SD of 6.593 cm were obtained. For the project with nadiral and oblique photographs and an inadequate distribution, a mean distance of −1.666 cm and an SD of 6.661 cm were obtained. For this scenario, the use of oblique photographs reduced the absolute average distance between the clouds by 25% for an inadequate distribution, and by 9% for an adequate distribution.



**Figure 13.** UAV-photogrammetric projects with 5 GCPs (units in meters). (**a**) Nadiral-5GCPs-Yes; (**b**) Nadiral-5GCPs-Not; (**c**) Oblique-5GCPs-Yes; (**d**) Oblique-5GCPs-Not; (**e**) Nadiral&Oblique-5GCPs-Yes; (**f**) Nadiral&Oblique-5GCPs-Not.

Figure 14 shows the vertical distances between the cloud obtained by TLS and the six UAV-photogrammetry projects executed with 7 GCPs. For the project with nadiral photographs and an adequate distribution of GCPs, an average distance of 0.325 cm and an SD of 7.226 cm were obtained. For the project with nadiral photographs and an inadequate distribution, a mean distance of −0.637 cm and an SD of 8.689 cm were obtained. For the project with oblique photographs and an adequate distribution, a mean distance of 0.704 cm and an SD of 6.187 cm were obtained. For the project with oblique photographs and an inadequate distribution, a mean distance of 1.134 cm and an SD of 6.616 cm were obtained. For the project with nadiral and oblique photographs and an adequate distribution, a mean distance of 1.130 cm and an SD of 6.579 cm were obtained. For the project with nadiral and oblique photographs and an inadequate distribution, a mean distance of 1.314 cm and an SD of 7.143 cm were obtained. For this scenario, the use of oblique photographs reduced the absolute average distance between the clouds by 18% for an inadequate distribution, and by 11% for an adequate distribution.


**Figure 14.** UAV-photogrammetric projects with 7 GCPs (units in meters). (**a**) Nadiral-7GCPs-Yes; (**b**) Nadiral-7GCPs-Not; (**c**) Oblique-7GCPs-Yes; (**d**) Oblique-7GCPs-Not; (**e**) Nadiral&Oblique-7GCPs-Yes; (**f**) Nadiral&Oblique-7GCPs-Not.

#### **4. Discussion**


Considering that the accuracy of measurements made with GNSS was about 8 mm in the horizontal plane and 15 mm vertically, the adjustment obtained for the TLS point cloud is considered adequate, since the average adjustment error obtained at the control points was 25 mm. In turn, the maximum cloud-to-cloud error found in the four-station registration was below 5 mm, similar to that obtained by [50,51], and much less than that reported by [52]. Analyzing the differences with respect to the seven theoretical profiles obtained by GNSS, a mean error of −0.030 cm and a standard deviation (SD) of 2.965 cm were found. Similar errors were obtained by [66], who reported three different experiments to obtain indirect georeferencing of TLS point clouds by using control lines; their results ranged from a mean error of 0.002–0.064 cm, with an SD of between 0.928 and 1.729 cm.
Therefore, in view of the results obtained for the adjustment and georeferencing of the point cloud by TLS, it can be adopted as a reference cloud for subsequent comparison with projects obtained by UAV photogrammetry.

Regarding the number of GCPs used for the georeferencing of the photogrammetric projects, this study found a clear trend of improved results with an increasing number of GCPs, the only noteworthy exception being the case of an inadequate distribution of seven GCPs. This effect is approximately 50% less pronounced with oblique photographs than with nadiral photographs, which demonstrates the value of incorporating oblique photographs when an adequate distribution of GCPs is not possible.

In line with [62], our study shows similar values: for three GCPs and an inadequate distribution, an RMSE of 9.49 cm was obtained, while with more GCPs and an improved distribution the RMSE decreased to 3.32 cm. This same trend is observed regardless of the orientation of the photographs.

In recent years, studies on the benefits of introducing oblique images or an adequate distribution of GCPs into UAV photogrammetry projects have become common; however, few such studies have addressed quasi-vertical or vertical walls. In [55], it is noted that for a network of self-calibrated images to be accurate, the spatial distribution of GCPs needs to cover the whole area of interest. In turn, the density of GCPs depends on the accuracy needed in the project, the geometry of the network, and the quality of the photographs. For example, to obtain accuracies of 5.0 cm at a height of 100 m (GSD 23 mm), about 15 GCPs were needed, with an average minimum spacing of 50 m. They pointed out that improving the network geometry with oblique images, or using a precisely pre-calibrated camera, could reduce the number of GCPs required. These results coincide with those obtained in this study, especially when the distribution of GCPs was inadequate. However, when the number of GCPs was increased to five or seven and the distribution was adequate, no improvement was found with the use of oblique photographs. For projects with inadequate distribution, the use of nadir photographs alone dramatically worsens the results; in this case, nadir photographs do not have such a strong negative effect when combined with oblique photographs. With an adequate distribution, the opposite happens: the best results are obtained with nadir photographs, while oblique photographs slightly worsen the results.

As noted in [67], accuracy depends not only on the number of GCPs, but also on their distribution pattern. Therefore, choosing a suitable pattern and number of GCPs for a particular mission can help obtain sufficiently accurate results with economic feasibility. In their study, the accuracy ranged from 18.2 cm with three GCPs to 11.3 cm with nine GCPs. They also highlighted the importance of positioning a GCP in the central region of the area, observing a substantial improvement in accuracy after adding another GCP there. Our study improved on those results: for three GCPs and an adequate distribution, the average error was 4.8 cm. In [42], the authors studied the acquisition and use of oblique images, obtained by UAV, for the 3D reconstruction of a historical building for a high-level-of-detail architectural survey. For this purpose, they used several software applications, obtaining similar results, in which the differences with respect to a reference cloud ranged from 0.3–2.5 cm to 1.6–3.9 cm. As demonstrated in [68], the use of oblique images obtained from a low-cost UAV system and processed by SfM software is an effective method for surveying cultural heritage sites. In particular, they studied a facade, for which they obtained an average distance between the clouds (UAV vs. TLS) of 4.0 cm, with an SD of 9.0 cm. Similar results were found in our study, where, for an adequate distribution of GCPs, the mean value ranged from −0.1 to 1.4 cm, with an SD ranging from 6.1 to 8.3 cm.

With respect to the break lines referred to in [65], this can be visually corroborated in Figures 12–14, in which the major differences are seen at the edges, where abrupt changes of slope occur. Consistent with [51], in our study, evaluating the vertical distances between the UAV-photogrammetry cloud and the TLS cloud, the use of oblique photographs improved the results by between 9% and 30% for all scenarios except three GCPs with an inadequate distribution.

#### **5. Conclusions**

The SfM analysis of UAV images is a valuable and verified tool for surveyors interested in the high-resolution reconstruction of nearly vertical walls. UAV surveys are also useful for risk management, giving access to hazardous or inaccessible areas, such as near-vertical slope cuttings on highways and roads, or architectural facades in cities. This study has addressed the influence of three factors: the number of GCPs, the distribution of GCPs, and the orientation of the photographs obtained by the UAV. From the results obtained, a series of guidelines can be established to simplify the process of capturing data in the field and to improve the accuracy of photogrammetric products. To obtain optimal accuracy, it is important to distribute as many GCPs as possible throughout the study area and to ensure that their distribution adequately covers, as far as possible, the entire area of interest. Where these two conditions are difficult to meet, the inclusion of oblique photographs in the photogrammetric project substantially improves the results. In this way, it is possible to achieve an RMSE of around 3.0 cm, which is sufficient for most engineering or architectural projects. In addition, oblique photographs ostensibly improve the geometric description of break lines and sudden changes in slope; therefore, it is always advisable to use them in projects with vertical topography. Under these circumstances, UAV photogrammetry constitutes a technique whose results are equivalent to those obtained by a TLS, but at lower cost and with easier handling of the point cloud.

**Author Contributions:** The contributions of the authors were: Conceptualization, P.M.-C., F.C.-R. and F.A.-V.; methodology, P.M.-C., F.C.-R. and F.A.-V.; software P.M.-C.; validation, P.M.-C.; formal analysis, P.M.-C., F.C.-R. and F.A.-V.; investigation, P.M.-C., F.C.-R. and F.A.-V.; resources, F.C.-R. and F.A.-V.; writing—original draft preparation, P.M.-C.; writing—review and editing, P.M.-C., F.C.-R. and F.A.-V.; visualization, P.M.-C.; project administration, F.C.-R. and F.A.-V. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
