Article

Impact of UAS Image Orientation on Accuracy of Forest Inventory Attributes

1 Division for Forest Management and Forestry Economics, Croatian Forest Research Institute, Trnjanska cesta 35, HR-10000 Zagreb, Croatia
2 Chair of Photogrammetry and Remote Sensing, Faculty of Geodesy, University of Zagreb, Kačićeva 26, HR-10000 Zagreb, Croatia
3 School of Earth, Environment and Society, Bowling Green State University, 190 Overman Hall, Bowling Green, OH 43403, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(3), 404; https://doi.org/10.3390/rs12030404
Submission received: 19 December 2019 / Revised: 10 January 2020 / Accepted: 24 January 2020 / Published: 27 January 2020
(This article belongs to the Section Forest Remote Sensing)

Abstract:
The quality and accuracy of Unmanned Aerial System (UAS) products greatly depend on the methods used to define image orientations before they are used to create 3D point clouds. While most studies were conducted in non- or partially-forested areas, a limited number of studies have evaluated the spatial accuracy of UAS products derived by using different image block orientation methods in forested areas. In this study, three image orientation methods were used and compared: (a) the Indirect Sensor Orientation (InSO) method with five irregularly distributed Ground Control Points (GCPs); (b) the Global Navigation Satellite System supported Sensor Orientation (GNSS-SO) method using non-Post-Processed Kinematic (PPK) single-frequency carrier-phase GNSS data (GNSS-SO1); and (c) using PPK dual-frequency carrier-phase GNSS data (GNSS-SO2). The effect of the three methods on the accuracy of plot-level estimates of Lorey’s mean height (HL) was tested over the mixed, even-aged pedunculate oak forests of Pokupsko basin located in Central Croatia, and validated using field validation across independent sample plots (HV), and leave-one-out cross-validation (LOOCV). The GNSS-SO2 method produced the HL estimates of the highest accuracy (RMSE%: HV = 5.18%, LOOCV = 4.06%), followed by the GNSS-SO1 method (RMSE%: HV = 5.34%, LOOCV = 4.37%), while the lowest accuracy was achieved by the InSO method (RMSE%: HV = 5.55%, LOOCV = 4.84%). The negligible differences in the performances of the regression models suggested that the selected image orientation methods had no considerable effect on the estimation of HL. The GCPs, as well as the high image overlaps, contributed considerably to the block stability and accuracy of image orientation in the InSO method. Additional slight improvements were achieved by replacing single-frequency GNSS measurements with dual-frequency GNSS measurements and by incorporating PPK into the GNSS-SO2 method.

Graphical Abstract

1. Introduction

Forest inventory provides critical information about the condition and status of forest resources, which is important for the implementation of sustainable forest management. Mapping forest resources is possible with the use of remote sensing technology through continuous data acquisition and advanced statistical analysis. Although widely used, remote sensing-based research remains challenging due to the different properties of the sensors and the structural complexity of forests [1,2]. Over the last twenty years, Airborne Laser Scanning (ALS), which uses Light Detection and Ranging (LiDAR) technology, has been confirmed as one of the most advantageous and efficient remote sensing technologies for monitoring forest structure and estimating accurate forest inventory attributes [3,4,5,6,7]. However, in many countries worldwide, ALS technology has not been used, primarily due to its high acquisition costs, in which case other remote sensing techniques must be utilized. Fortunately, advances in computer technology and the availability of low-cost aerial imagery have resulted in a large number of studies [8,9,10,11,12,13] which demonstrate the great potential of Digital Aerial Photogrammetry (DAP) for forest inventory. In particular, DAP based on images collected by Unmanned Aerial Systems (UASs) has attracted great attention in recent years, as a substitute for or a complementary method to ALS [14,15,16,17,18]. Although the flight endurance and size of a surveyed area limit the performance of UASs compared to traditional technologies from manned aircraft [19], UASs enable flexible multi-temporal data collection, very high spatial resolution, and a considerably lower cost [20,21,22]. Therefore, UAS-based photogrammetry can be considered a cost-effective alternative to both ALS and traditional DAP for forest inventory of small areas [14].
The interest in UAS photogrammetry has increased significantly thanks to the development of Structure from Motion (SfM) algorithms [14,20,22,23,24,25,26,27,28] which have considerably improved and facilitated demanding processing tasks of image block orientation (also called sensor orientation or image geo-referencing). This straightforward and easy-to-use automatic process, however, may lead to inaccurate final products and errors that are commonly undetectable by a user [29,30,31].
The quality and accuracy of products derived using SfM and Dense Image Matching (DIM) algorithms greatly depend on the methods used to define image block orientations before they are used to create 3D point clouds (PCs). Benassi et al. [32] defined four major image block orientation methods in aerial photogrammetry: Indirect Sensor Orientation (InSO), Direct Sensor Orientation (DSO), Integrated Sensor Orientation (ISO), and Global Navigation Satellite System (GNSS)-supported Sensor Orientation (GNSS-SO). InSO is performed with tie-points obtained from SfM and field-measured Ground Control Points (GCPs). DSO is based on Exterior Orientation (EO) parameters (camera position and attitude) collected using an on-board GNSS receiver and Inertial Measurement Unit (IMU). This technique requires the least amount of field work and processing, but the quality of the collected EO parameters, such as camera positions and attitudes, depends on many factors, including the quality of the GNSS and IMU sensors, the flying speed, the satellite constellation and the synchronization between the camera, GNSS and IMU sensors [33,34]. ISO presents a combination of both InSO and DSO, and is performed using tie-points, EO parameters (camera position and attitude) and GCPs. Similar to ISO, GNSS-SO is performed using tie-points, GCPs, and the positions of cameras collected with on-board GNSS, with the exception of attitude measurements. This method is commonly utilized when the on-board IMU is not accurate enough. The use of GCPs in both ISO and GNSS-SO is preferable but optional.
Consumer-grade UASs are often equipped with inexpensive but less accurate GNSS and IMU systems, and because of that, InSO has become the most commonly used method to perform image block orientation [32]. However, for better results, the accuracy of EO parameters can be improved using the Real-Time Kinematic (RTK) technique during the flight or the Post-Processed Kinematic (PPK) technique after the flight. To obtain highly accurate GCP positions, the RTK technique is usually applied, measuring the GCPs with GNSS receivers connected to a nearby base station or to a network of GNSS reference stations. Besides the accuracy of GCP measurements, the quality of InSO depends on the number and distribution of GCPs. Sanz-Ablanedo et al. [35] demonstrated that a high accuracy of InSO in large-scale projects can only be achieved by using a high number (>3 GCPs per 100 images) of GCPs evenly distributed over the area of interest.
Several studies have evaluated and compared the accuracy of UAS images whose orientation was defined using different image block orientation methods [32,36,37]. However, these studies were conducted in non-forested areas [32,36] or in areas only partially covered with forest vegetation [37], which allowed the GCPs and checkpoints to be marked in open areas. To date, a very limited number of studies have evaluated the spatial accuracy of UAS products derived by using different image block orientation methods in forested areas [38,39]. Gašparović et al. [38] evaluated the vertical accuracy of the Digital Surface Model (DSM) generated from UAS images collected with a low-cost UAS (DJI Phantom 4 Pro) over a dense forested area. When GNSS-SO approaches with no GCPs and with seven irregularly distributed GCPs were compared, a considerable improvement in the DSM vertical accuracy with GCPs was observed. Gašparović et al. [38] concluded that DSMs generated using GNSS-SO without GCPs can be used for visualization and monitoring purposes, whereas DSMs generated using GCPs have greater potential to be used in forest inventory. Another recent study that evaluated the spatial accuracy of UAS products using different image block orientation methods in a forested area was conducted by Tomaštík et al. [39]. In that study, GNSS-SO using PPK dual-frequency carrier-phase GNSS data collected with an eBee Plus and without GCPs was compared to two InSO approaches, one with four and another with nine GCPs. Contrary to the previously mentioned studies, Tomaštík et al. [39] obtained a significantly higher horizontal accuracy for the GNSS-SO (PPK) method compared to both InSO approaches, and a significantly higher vertical accuracy for the GNSS-SO (PPK) method compared to InSO with four GCPs. The vertical accuracies of GNSS-SO and of InSO with nine GCPs did not differ significantly.
Besides the studies of Gašparović et al. [38] and Tomaštík et al. [39], which focused on DSMs and PCs, to the best of our knowledge no prior studies have examined the accuracy of forest inventory attributes estimated using UAS products generated with different image block orientation methods. The main goal of this study is to investigate the impact of several image block orientation methods on the accuracy of estimated forest attributes, with a particular emphasis on the plot-level mean tree height (Lorey's mean height). Although the accuracy of plot-level mean tree height estimates was evaluated in a number of recent UAS-based studies [14,40,41,42,43,44,45], the influence of image block orientation methods on the reported results has been neither examined nor discussed. In this study, three image block orientation methods are used and compared: (a) the InSO method with five irregularly distributed GCPs; (b) the GNSS-SO method using non-PPK single-frequency carrier-phase GNSS data (hereafter referred to as GNSS-SO1); and (c) the GNSS-SO method using PPK dual-frequency carrier-phase GNSS data (hereafter referred to as GNSS-SO2). Since previous studies [32,35,36] have demonstrated an improvement in image orientation and in the vertical accuracy of generated products (PCs, DSMs) when GCPs were used, both GNSS-SO approaches are supported with five GCPs in the present study. In combination with field forest inventory data, plot-level metrics from the UAS PCs obtained using the three different methods (InSO, GNSS-SO1, GNSS-SO2) are used to generate plot-level mean tree height models, which are then compared and evaluated using field ground-truth data. Two validation approaches are used: (1) hold-out validation (HV), i.e., field validation across independent sample plots, and (2) the leave-one-out cross-validation (LOOCV) statistical technique. The latter is commonly used for smaller datasets [46,47] when there are not enough measurements to divide them into modeling and validation subsets.

2. Materials

2.1. Study Area

The study was conducted in part of the state-owned, actively managed forests of the Pokupsko basin forest complex, located in Central Croatia (Figure 1). The main forest type of this area consists of even-aged pedunculate oak (Quercus robur L.) stands of different age classes ranging from 0 to 160 years. These oak stands are mainly mixed with other tree species (Carpinus betulus L., Alnus glutinosa (L.) Gaertn., and Fraxinus angustifolia Vahl.) and understory vegetation (Corylus avellana L. and Crataegus monogyna Jacq.). The study area is characterized by flat terrain with ground elevations ranging from 108 to 113 m a.s.l.

2.2. Field (Ground-Truth) Data

The field data from a total of 99 circular sample plots were collected in 2017 (Figure 1). The plots were systematically distributed (100 m × 100 m, 100 m × 200 m, 200 m × 100 m, 200 m × 200 m grids) throughout 20 forest stands of different age classes with a total area of 363.54 ha. Depending on the stand age and stand density, the sample plots had a radius of 8, 15, or 20 m. The coordinates of the plot centres were measured during leaf-off conditions using the GNSS RTK method. A Stonex S9IIIN receiver, connected to the Croatian network of GNSS reference stations, the Croatian Positioning System (CROPOS), and its Very Precise Positioning System (VPPS) service, was used. The CROPOS VPPS service, based on the Virtual Reference Station (VRS) technique, provides real-time corrections with up to 2 cm horizontal and 4 cm vertical accuracy [48]. Despite the method and service used, only 40% of the plot centres had a fixed solution, while 60% had a float solution. The average positioning precisions, i.e., the standard deviations (SDs) reported by the receiver, were 0.038 m and 0.159 m for fixed and float solutions, respectively.
The plot data were collected by recording the tree species and measuring the diameter at breast height (dbh) for all trees with dbh ≥ 10 cm. Tree height was measured for 1908 trees or 73% of all sampled trees in the study area using a Vertex III hypsometer. In each plot, the trees sampled for height measurements included all available species and a wide dbh range. According to the extensive research of Stereńczak et al. [49], the relative errors for tree height measurements using a Vertex instrument are 2.52% and −1.25% for Q. robur and A. glutinosa, respectively. Other tree species (C. betulus, F. angustifolia) found in the study area of the present research were not in the scope of the research by Stereńczak et al. [49], but it can be assumed that expected errors do not significantly deviate from those reported for Q. robur and A. glutinosa. For trees without height measurements, tree heights were estimated using the developed local species-specific dbh-height models fitted with Michailloff’s function [50]. Despite the fact that field tree height measurements and the use of dbh-height models produce certain errors, the field data were considered as ground-truth for the validation of UAS data, as the method was common in many other remote sensing studies, e.g., [14,15,21,23,24,25,26,27,28,32]. Lorey’s mean height (HL) of each plot was calculated using Equation (1):
H_L = Σ_{i=1}^{n} (h_i × g_i) / G_plot,  (1)
where h_i is the height of the i-th tree on the plot (in m), g_i is the basal area of the i-th tree calculated from the measured dbh (in m²), and G_plot is the plot basal area (in m²).
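Equation (1) is straightforward to compute from the plot tally. The following sketch illustrates it with hypothetical input values, not data from this study:

```python
import math

def loreys_mean_height(heights_m, dbh_cm):
    """Basal-area-weighted mean height of a plot, as in Equation (1)."""
    # Basal area of each tree from its dbh: g_i = pi/4 * (dbh in m)^2
    g = [math.pi / 4.0 * (d / 100.0) ** 2 for d in dbh_cm]
    g_plot = sum(g)  # plot basal area G_plot (m^2)
    return sum(h * gi for h, gi in zip(heights_m, g)) / g_plot

# A thicker tree pulls the weighted mean toward its own height:
print(loreys_mean_height([30.0, 20.0], [40.0, 20.0]))  # ≈ 28.0 m
```

Because the weights are basal areas, large-diameter trees dominate H_L, which is why Lorey's mean height tracks the canopy height seen by remote sensing better than a simple arithmetic mean.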
A summary of the main forest attributes within the surveyed plots is presented in Table 1. Out of 99 sample plots, 44 were located in oak stands of age class 3 (41–60-year-old), 17 in stands of age class 4 (61–80-year-old), 8 in stands of age class 5 (81–100-year-old), and 30 plots in stands of age class 7 (>121-year-old). The irregular distribution of sample plots per age class is a consequence of the irregularity of age classes in the study area. Moreover, it can be noted that stands of age class 6 (101–120-year-old) are not present in the study area.

2.3. UAS Data

The UAS survey, which included GCPs measurement and image acquisition, was conducted during leaf-on conditions on 30 May 2017. Prior to image acquisition, 7 GCPs were signalized with 40 × 40 cm markers and surveyed across the study area. The GCPs were surveyed using a Trimble R7 GNSS receiver connected with the CROPOS VPPS service, which ensured horizontal and vertical measurement precision of SD ≤ 10 cm. The distribution of the GCPs was somewhat irregular (Figure 1) due to a very dense forest structure and lack of open areas. In the processing phase, gross errors were detected for 2 GCPs and they were excluded from further processing.
The UAS images were collected using a fixed-wing Trimble UX5 HP equipped with a Sony ILCE-7R camera (Table 2). The main advantage of the UAS used is its ability to record dual-frequency (L1 and L2 frequency) GNSS carrier-phase data. Both single-frequency and dual-frequency GNSS data were recorded using the same on-board GNSS receiver. Single-frequency data were recorded during flight in metadata files of each image. Dual-frequency data were recorded in RINEX (Receiver Independent Exchange Format) files. In total, 458 and 475 images with high overlap (90% endlap, 80% sidelap) [51] were collected during the first and the second flights, respectively (Figure 1). The average flying height was 600 m above ground level, resulting in a Ground Sampling Distance (GSD) of about 8 cm. The UAS survey was conducted by a private company and GSD was selected to fulfil the requirements of the project given the large study area and cost constraints.
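The reported GSD follows from the usual pinhole relation GSD = pixel pitch × flying height / focal length. The sketch below uses an assumed pixel pitch (~4.9 µm) and an assumed 35 mm focal length for the camera; these two values are illustrative, not specifications taken from the paper, while the 600 m flying height is from the text:

```python
# Pinhole relation: GSD = pixel_pitch * flying_height / focal_length.
pixel_pitch_m = 4.9e-6    # assumed sensor pixel pitch (~4.9 um)
focal_length_m = 0.035    # assumed 35 mm lens
flying_height_m = 600.0   # above ground level, as reported in the text

gsd_m = pixel_pitch_m * flying_height_m / focal_length_m
print(f"GSD ≈ {gsd_m * 100:.1f} cm")  # ≈ 8.4 cm, consistent with the ~8 cm reported
```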

2.4. Digital Terrain Model (DTM) Data

To normalize the UAS PCs, a combination of an airborne LiDAR Digital Terrain Model (DTMLiD) and an improved version of the official Croatian DTM (DTMPHM), both with a spatial resolution of 0.5 m, was used. The smaller, northern part of the study area (31 plots) was not covered by the DTMLiD, and therefore the improved DTMPHM was used for this part of the area.
DTMLiD was generated from airborne LiDAR data collected with an Optech ALTM Gemini 167 scanner during summer 2016. The resulting point density was 13.64 points∙m−2. The “ground” points were classified using TerraSolid (version 11) software [52]. From the classified ground points (0.91 points∙m−2), a raster DTMLiD with a spatial resolution of 0.5 m was generated. It was provided by Hrvatske Vode Ltd. (Zagreb, Croatia), while both data acquisition and image processing were done by the Institute for Photogrammetry Inc. (Zagreb, Croatia) and Mensuras Ltd. (Maribor, Slovenia).
DTMPHM was generated from official national digital terrain data consisting of 3D line (breaklines, formlines) and point (spot heights, mass points) data. These data were primarily obtained by manual stereo-photogrammetric methods using aerial images with a ground sampling distance of ≤30 cm, supported by the vectorization of existing topographic maps and by field data collection. To eliminate possible gross errors from the terrain data, the method proposed by Gašparović et al. [53] was used. Finally, DTMPHM in raster format with a spatial resolution of 0.5 m was created from the 'error-free' digital terrain data using Global Mapper software (ver. 19, Blue Marble Geographics, Hallowell, Maine, USA). A detailed description of the data characteristics and processing, as well as accuracy analyses for both DTMLiD and DTMPHM, can be found in the studies of Balenović et al. [54] and Gašparović et al. [53].

3. Methods

The simplified methodological workflow used in this research is presented in Figure 2.

3.1. UAS Image Orientation

Within this research, three different methods of UAS image block orientation were applied and evaluated (Figure 2):
  • The InSO method based on image tie-points and 5 irregularly distributed GCPs;
  • The GNSS-SO1 method based on tie-points, 5 GCPs and non-PPK single-frequency carrier-phase GNSS data (absolute positioning);
  • The GNSS-SO2 method based on tie-points, 5 GCPs and using PPK dual-frequency carrier-phase GNSS data (relative positioning).
The single-frequency GNSS data used for the GNSS-SO1 method were recorded in the image metadata and did not require any additional pre- or post-processing. The dual-frequency GNSS data used for the GNSS-SO2 method were recorded in a RINEX file, which enabled the processing and application of relative GNSS positioning. Prior to image orientation, the PPK procedure was carried out in the Trimble Business Center v4.0 software. A VRS, generated by the CROPOS Geodetic Precise Positioning Service (GPSS), was used for the processing. All GNSS positions relating to the exposure moments (camera stations) of both flights were solved with the fixed solution. The differences between the single- and dual-frequency exposure position estimates for the first and second flights are presented in Figure 3, whereas Table 3 shows basic statistics for these differences. Both Figure 3 and Table 3 indicate that the offset of the single-frequency measurements with respect to the dual-frequency measurements is consistent for the N and h axes, while the E-axis differences are considerably larger in both magnitude and dispersion. The main reasons for this could be the imaging method, navigation system synchronization, and flight direction. The direction of successive flight lines alternated between east-to-west and west-to-east, and the ordinal numbers of the last and first images within each flight line correspond to the "jumps" in the E-axis differences in Figure 3. Both datasets were collected with the same UAS; however, the shutter synchronizations were carried out differently. The single-frequency measurements are primarily used for the navigation of the UAS, so precise synchronization with the camera shutter is not mandatory; therefore, methods of synchronization associated with single-frequency measurements usually have a lower accuracy.
One common method of synchronization is logging the trigger signal message, but this method only logs the moment when the trigger command is sent to the camera. The exact moment of the sensor exposure is not logged, nor is the delay between the trigger signal and the moment of exposure constant. In this specific case, considering that the cruise speed of the UAS used was 24 m·s−1 and the mean absolute error was ≈3 m on the E axis (Table 3), the average delay was ≈0.12 s. The method of synchronization used with the dual-frequency measurements is based on logging the exact moment of the sensor exposure via the flash mount. The flash is tightly synchronized with the shutter release, and connecting it to the GNSS receiver allows the sensor exposure moment to be logged directly in the GNSS observation RINEX file. This method of synchronization has a very small delay and is stable in time [55]. Another possible cause of, or at least contributor to, the detected error behavior is the rolling shutter. The camera used employs a rolling shutter, which contributes to errors in the flying direction; however, the rolling shutter effect can be, and was, modeled in the adjustment [56].
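The delay estimate above is a simple ratio of the along-track position error to the ground speed; a minimal sketch, assuming a cruise speed of 24 m·s−1:

```python
# Average shutter-trigger delay inferred from the along-track position error:
# delay ≈ mean absolute E-axis error / ground speed.
cruise_speed_ms = 24.0   # assumed cruise speed of the fixed-wing UAS, in m/s
mean_abs_error_m = 3.0   # mean absolute E-axis error (Table 3)

delay_s = mean_abs_error_m / cruise_speed_ms
print(f"estimated delay ≈ {delay_s:.2f} s")  # ≈ 0.12 s
```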
Photogrammetric processing, i.e., the image orientation for all three methods (InSO, GNSS-SO1, GNSS-SO2), was conducted using Agisoft PhotoScan v1.4.3 software [57]. The a priori accuracies of the GCPs, as estimated by the GNSS receiver, were set to 10 cm and 17 cm in the horizontal and vertical directions, respectively. This inaccuracy of the estimated positions, atypical for the RTK method, is mostly due to low-performance receiver hardware, combined with the unfavorable environment (leaf-on conditions) for GNSS measurements. The accuracy of the exposure station positions obtained with the single-frequency measurements was set to 3.8 m; this a priori accuracy was estimated with reference to the dual-frequency position measurements. The exposure station positions obtained with the dual-frequency measurements had a priori accuracies of 4 and 8 cm in the horizontal and vertical directions, respectively.

3.2. UAS Point Cloud Generation

After the image block orientation using three different methods (InSO, GNSS-SO1, GNSS-SO2), the Agisoft PhotoScan (v1.4.3) DIM algorithm [57] was applied to generate three different dense point clouds (PCs). Preliminary tests [58] showed that for images with the technical characteristics used in this research, the best results in HL estimation were obtained with the level 1 image pyramid ("High" parameter selection). This means that images were firstly resampled to half of the original resolution on both axes, and thus GSD of 16 cm was used in the DIM procedure. Downsampling the images reduced the number of pixels by a factor of 4, which significantly increased the processing speed of the DIM algorithm. The resulting density of the generated PCs was 72.32 points·m−2, on average, due to the flight plan, flying height, and image downsampling. This is in agreement with several studies [14,17,41,43,44], which showed that lower point density could be successfully used to estimate forest variables at plot levels.

3.3. Extraction and Calculation of Point Cloud (PC) Metrics

The PCs were normalized with the DTM combined from DTMLiD and the improved DTMPHM. The normalized PCs were then clipped to the extent of the plot areas, and various metrics were extracted for each plot using the FUSION/LDV v3.80 open-source software [59]. In accordance with other similar studies [3,5,7,43], a minimum height threshold of 2 m was applied to remove ground and understory vegetation (shrubs and trees with dbh < 10 cm). Furthermore, height thresholds of 5, 10, 15, 20, and 25 m were applied to calculate additional density (canopy cover) metrics. A total of 47 metrics were extracted and grouped into height metrics, height variability metrics, height percentiles, and canopy cover metrics (Table 4). For more details on the extracted metrics, please refer to McGaughey [59].
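As an illustration of the kind of plot-level metrics involved, the sketch below computes a few height, percentile, and canopy cover metrics from normalized point heights. The metric names and exact definitions are simplified stand-ins, not FUSION's own implementations:

```python
import numpy as np

def plot_metrics(z, ht_min=2.0, cover_thresholds=(2, 5, 10, 15, 20, 25)):
    """Illustrative plot-level point-cloud metrics.

    z: normalized (above-ground) point heights for one plot, in metres.
    """
    z = np.asarray(z, dtype=float)
    n_all = z.size
    veg = z[z >= ht_min]  # drop ground and understory returns below 2 m
    metrics = {
        "h_mean": veg.mean(),
        "h_max": veg.max(),
        "h_sd": veg.std(ddof=1),
    }
    # Height percentiles from the vegetation returns (e.g. p50, p95)
    for p in (25, 50, 75, 90, 95, 99):
        metrics[f"p{p}"] = np.percentile(veg, p)
    # Canopy cover: share of all returns at or above each height threshold
    for t in cover_thresholds:
        metrics[f"cover_{t}m"] = (z >= t).sum() / n_all
    return metrics

m = plot_metrics([1.0, 3.0, 6.0, 12.0, 24.0])  # tiny hypothetical plot
```

In a real workflow these metrics would be computed per plot over the clipped, normalized PC and assembled into the candidate predictor table of Table 4.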

3.4. Generation and Validation of Lorey’s Mean Height (HL) Models

Out of a total of 99 circular sample plots, 66 plots were selected for model generation and 33 plots for model validation (the HV validation approach). The plots were listed by age, from youngest to oldest, and every third plot was assigned to the validation dataset.
All extracted plot-level PC metrics were considered in statistical modelling (multivariate linear regression) as potential explanatory (independent) variables, while HL calculated from field ground-truth data (Equation (1)) was used as the response variable. Prior to the modelling, the large number of potential explanatory variables was reduced by applying a two-step pre-selection approach [43]. Firstly, the number of potential variables was reduced based on the Pearson correlation coefficient (r): all variables with a low correlation (|r| < 0.5) with the observed HL were excluded. The remaining variables were then included in the second step, the in-group collinearity analysis. Within each group separately, the correlation (r) between variables was calculated. Of each pair of variables with high within-group collinearity (|r| ≥ 0.7), the one with the lower correlation with HL was excluded, one by one, while the remaining variables (at least one per group) entered the multivariate linear regression. By applying a backward stepwise approach, the best-fit HL model was selected for each PC, i.e., for the PCs produced from UAS images whose orientation was generated using the three proposed methods (InSO, GNSS-SO1, GNSS-SO2).
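A minimal sketch of the two-step pre-selection described above, using the |r| < 0.5 and |r| ≥ 0.7 thresholds from the text; the data structures and metric grouping are hypothetical:

```python
import numpy as np

def preselect(X, y, groups, r_min=0.5, r_collin=0.7):
    """Two-step predictor pre-selection sketch.

    X: dict of {metric_name: 1-D array of plot values}; y: observed H_L;
    groups: dict of {metric_name: group_label}.
    """
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    # Step 1: keep only metrics with |r| >= 0.5 against observed H_L.
    kept = {k: v for k, v in X.items() if abs(corr(v, y)) >= r_min}
    # Step 2: walking from the strongest to the weakest predictor, drop any
    # metric that is collinear (|r| >= 0.7) with an already-selected metric
    # from the same group.
    selected = []
    for name in sorted(kept, key=lambda k: -abs(corr(kept[k], y))):
        same_group = [s for s in selected if groups[s] == groups[name]]
        if all(abs(corr(kept[name], kept[s])) < r_collin for s in same_group):
            selected.append(name)
    return selected
```

The surviving variables (at least one per group) would then feed the backward stepwise regression.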
The models were validated to evaluate the accuracy of each HL model by performing:
  • The validation over the independent validation dataset of 33 plots (HV), which was not used to derive the models. The HL estimates from the developed models were compared with corresponding field data and evaluated by means of adjusted coefficients of determination (R2adj), mean error (ME) (Equation (2)), relative mean error (ME%) (Equation (3)), root mean square error (RMSE) (Equation (4)), and relative root mean square error (RMSE%) (Equation (5)):
    ME = Σ_{i=1}^{n} (Ĥ_L,i − H_L,i) / n,  (2)
    ME% = ME / H̄_L × 100,  (3)
    RMSE = √[ Σ_{i=1}^{n} (Ĥ_L,i − H_L,i)² / n ],  (4)
    RMSE% = RMSE / H̄_L × 100,  (5)
    where Ĥ_L,i is the predicted (UAS-estimated) Lorey's mean height of plot i, H_L,i is the observed (field-measured) Lorey's mean height of plot i, n is the number of plots, and H̄_L is the mean of the observed values.
  • The LOOCV statistical method [46,47], applied to the 66 sample plots used for model development. LOOCV is an iterative procedure of n iterations, where n is the number of all measurements (field plots). In each of the n iterations (n = 66), one measurement was removed from the dataset and the selected model was fitted using the remaining n-1 measurements; the model was then validated using the removed measurement. After all n iterations, the model accuracy was estimated by averaging the validation results (residuals) from all iterations.
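The validation statistics of Equations (2)-(5) and the LOOCV loop can be sketched as follows; the fit/predict callables are placeholders for a generic regression model, not the software actually used in the study:

```python
import numpy as np

def validation_stats(y_pred, y_obs):
    """ME, ME%, RMSE and RMSE%, as in Equations (2)-(5)."""
    e = np.asarray(y_pred, dtype=float) - np.asarray(y_obs, dtype=float)
    me = e.mean()
    rmse = np.sqrt((e ** 2).mean())
    ybar = np.mean(y_obs)  # mean of the observed values
    return me, 100 * me / ybar, rmse, 100 * rmse / ybar

def loocv_residuals(X, y, fit, predict):
    """Generic LOOCV sketch: refit on n-1 plots, predict the held-out one."""
    y = np.asarray(y, dtype=float)
    res = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i          # drop plot i from the fit
        model = fit(X[keep], y[keep])
        res.append(predict(model, X[i:i + 1])[0] - y[i])
    return np.array(res)
```

Averaging the LOOCV residuals (and their squares) yields the same ME and RMSE summaries as for the hold-out dataset, but every plot contributes once as a validation case.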
A two-step pre-selection procedure, the backward stepwise regression, and the independent dataset validation were performed using STATISTICA 11 software [60], while MATLAB R2018a and freeware MATLAB RSR toolbox [61] were used to perform LOOCV.

4. Results

After generating the UAS image orientations with the three methods (InSO, GNSS-SO1, GNSS-SO2), dense PCs were generated for each image orientation outcome. Vertical differences between the InSO and GNSS-SO2 PCs, as well as between the GNSS-SO1 and GNSS-SO2 PCs, for two selected (exemplary) plots are presented in Figure 4. The GNSS-SO2 PC was used as the reference since it was expected to have the strongest image orientation geometry in comparison to the InSO- and GNSS-SO1-based PCs. Of the two exemplary plots, one (the marginal plot) was located far from the GCPs (Figure 4a,b), while the second (the central plot) was located in the vicinity of the GCPs (Figure 4c,d). As expected, considerably larger vertical differences were observed for the marginal plot, due to its greater distance from the GCPs and to the irregular distribution of the GCPs (Figure 1), which leads to vertical distortion after the image block orientation methods are applied, especially in areas without GCPs (Figure 4a,b). Furthermore, for both the marginal and central plots, larger vertical differences can be observed between the InSO and GNSS-SO2 PCs (Figure 4a,c) than between the GNSS-SO1 and GNSS-SO2 PCs (Figure 4b,d). This confirms that, despite their lower accuracy, the single-frequency GNSS measurements contribute to the block stability and accuracy of image orientation.
The plot-level metrics were extracted and calculated, as shown in Table 4, and the same statistical procedure was applied to generate HL estimation models (Table 5). For each PC, the best-fit plot-level HL linear models were developed based on the backward stepwise regression of pre-selected variables (plot-level PC metrics; independent variables) and field HL (dependent variable) in the selected 66 sample plots (Table 5). All models and their parameters show high statistical significance (p < 0.01).
The modeling results obtained from the three models proposed in Table 5 show small differences in their performance (Table 6, Figure 5). The differences between the models are less than 0.02 and 0.6% for R2adj and RMSE%, respectively, whereas the ME% differences between all models are less than 0.001%. Furthermore, the high R2adj (>0.943), low RMSE% (<4.488%), and low ME% (<0.001%) values indicate a good model fit for all three models. However, the models differ in the number and type of included independent variables (predictors) (Table 5). The GNSS-SO2-based model, which includes two predictors, exhibits the best fit to the data when compared to the InSO- and GNSS-SO1-based models, which include three and four predictors, respectively.
Both validation approaches (HV and LOOCV) show the consistent trend of models’ performance (Table 6, Figure 5). The best performing is the GNSS-SO2-based model, followed by the GNSS-SO1- and InSO-based models.
For HV (33 independent validation plots), the models exhibit just slightly lower R2adj values and slightly higher RMSE% and ME% values than for the modeling dataset (66 plots). The R2adj values between the modeling dataset and HV differ by 0.022, 0.028, and 0.021 for InSO-, GNSS-SO1-, and GNSS-SO2-based models, respectively. The differences between the RMSE% values for two datasets range from 1.063% to 1.295%, whereas differences between the ME% values range from 0.259% to 0.643%. A good agreement, i.e., a small difference between the validation results for the modeling and HV datasets, indicates that the derived models provide reliable HL estimates in the study area of lowland pedunculate oak forests.
As expected, LOOCV produces better validation results than HV, and these are very similar to the results for the modeling dataset. The R2adj values between the modeling dataset and LOOCV differ by only 0.006, 0.004, and 0.002 for the InSO-, GNSS-SO1-, and GNSS-SO2-based models, respectively. The differences between the RMSE% values range from 0.169% to 0.356%, whereas the differences between the ME% values range from 0.004% to 0.056%.
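The LOOCV procedure and the relative statistics reported above (RMSE% and ME%, i.e., RMSE and ME expressed as a percentage of the observed mean) can be outlined as below. This is a generic numpy sketch on hypothetical data, not the study's software pipeline.

```python
import numpy as np

def fit(X, y):
    # ordinary least squares with intercept
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def loocv_stats(X, y):
    """Leave-one-out cross-validation: refit the model n times,
    each time predicting the single held-out plot."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta = fit(X[mask], y[mask])
        preds[i] = predict(beta, X[i:i + 1])[0]
    err = preds - y
    rmse = np.sqrt(np.mean(err ** 2))
    me = np.mean(err)
    # relative statistics, expressed as a percentage of the observed mean
    return {"RMSE%": 100 * rmse / y.mean(), "ME%": 100 * me / y.mean()}
```

The same error formulas applied to an independent hold-out set correspond to the HV statistics.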

5. Discussion

A number of recent studies have confirmed the great potential of UAS photogrammetry in forest inventory, at both the individual-tree [16,18] and plot levels [14,40,41,42,43,44,45]. In general, the quality of the obtained forest inventory data depends on the quality and accuracy of the UAS products (e.g., PCs, DSMs) and the DTM, while the quality of the UAS products depends on the image block orientation method applied to the UAS data, among other factors such as flight characteristics and target properties [62].
The innovative component of the present study is its focus on the impact of widely-used image block orientation methods not just on UAS products but on the accuracy of forest attribute estimates. By adding this step and examining the performance of these methods for plot-level mean tree height (HL), we assess their impact on the final remote sensing product used directly in forest inventory.
Overall, the best performing model is the GNSS-SO2-based model (RMSE%: HV = 5.18%, LOOCV = 4.06%), followed by the GNSS-SO1-based model (RMSE%: HV = 5.34%, LOOCV = 4.37%), while the worst performing is the InSO-based model (RMSE%: HV = 5.55%, LOOCV = 4.84%). In other words, the GNSS-SO2 orientation method introduces the smallest errors into the estimation of plot-level mean tree height (HL), and the GNSS-SO1 method performs better than the InSO method. The order of performance is as expected; however, the negligible differences in the performances of the regression models suggest, somewhat surprisingly, that the image block orientation methods used in this study have no considerable effect on the estimation of HL. This indicates that, despite the low number of GCPs and their irregular distribution throughout the study area, the GCPs, together with the high overlap (90%, 80%) of UAS images, contribute considerably to the block stability and accuracy of image orientation in the InSO method, and consequently have a positive impact on the geometric accuracy of the PCs. Despite their lower accuracy, the addition of single-frequency GNSS measurements to tie-points and GCPs slightly improves the accuracy of image orientation (Figure 4) and of the HL estimates (Table 6, Figure 5). Additional slight improvements in image orientation and HL estimation are achieved by replacing single-frequency with dual-frequency GNSS measurements and by incorporating PPK into the GNSS-SO2 method (Table 6, Figure 5). It can be assumed that the homogeneous forest structure and flat terrain in the study area also contribute to the small differences in the regression models' performances. In areas with more heterogeneous forest structure and more complex terrain, greater differences among the results obtained with different image orientation methods could be expected.
To achieve greater block stability and geometric accuracy of the PCs, especially when applying the InSO method, additional cross-flights at different flying heights should be considered, particularly in more complex forest areas. When using GNSS-SO (PPK), however, the observed EO positions already contribute greatly to block stability and parameter decorrelation, so the cost of additional cross-flights should be weighed against the accuracy improvements actually gained in forest variable estimation.
In general, the results of this study agree with the findings of recent studies [14,40,41,42,43,44,45], which reported that plot-level tree heights (e.g., dominant height, Lorey's mean height) can be estimated with high accuracy; the reported RMSE% values for Lorey's mean height ranged from 4.24% to 17.87% [14,40,41,42,43,44,45]. However, the estimates of Lorey's mean height in this study attained higher accuracy than in most previous studies [14,40,41,42,43,44]. Slightly higher accuracy (RMSE% = 4.24%) than the InSO and GNSS-SO1 results of this study was reported only by Shen et al. [45], while the GNSS-SO2 results of this study (RMSE% = 4.06% for LOOCV) exceed even that accuracy. When applying the InSO orientation method, Shen et al. [45] used 30 GCPs for 45 plots, whereas this study used only five GCPs to cover an area with 99 plots. Furthermore, the research of Shen et al. [45] was conducted in a ginkgo plantation, a considerably simpler forest type in terms of species composition and tree height variation than the mixed pedunculate oak forest. In addition to PC metrics derived from RGB images, Shen et al. [45] used spectral metrics derived from multispectral images to fit regression models and estimate HL. It should be emphasized that almost all previous studies [14,40,41,42,44,45] applied the InSO approach. The only exception is the study of Balenović et al. [43], which preceded this research and applied the GNSS-SO2 method to a similar dataset, but with a different research aim. Any direct comparison with other studies is difficult, as the reported accuracies may depend on many other factors, such as forest structure and site characteristics, UAS and camera specifications, flight conditions, photogrammetric software, type of predictors used, and modeling and validation strategies.
The present study considers the performance of two validation methods, HV and LOOCV. Ground sampling design, as part of every field-based validation process, plays an important role in remote sensing studies; it must include a careful selection of points, a sufficient number of points, and a proper sampling strategy [63]. The consistent results between the two validation approaches (HV and LOOCV) and the modeling dataset suggest a minimal potential negative effect of the sampling process, which is common for homogeneous forests. By directly comparing the estimated HL with field measurements over the independent dataset (HV) not used in the modeling process, the reliability of the models has been demonstrated. To avoid potential uncertainties due to the number of sampling points, the LOOCV statistical approach additionally confirms the strong performance of the regression models, and ultimately reveals the performance of the image block orientation methods and their impact on the estimated HL. The consistent results among the validation methods support the conclusion regarding the structural homogeneity of the study site, with no extreme values. Although the results are similar, the HV approach appears to provide more realistic information than the LOOCV statistical approach. There are pros and cons for both methods, but the HV approach is preferable if there are enough measurements to be divided into modeling and validation subsets [64]. It is difficult to provide a general rule on the amount of data required for modeling and validation, as it depends greatly on the structural complexity of the observed object [64,65]. However, as the collection of large amounts of field reference data to support remote sensing-based forest inventory is labor- and time-consuming, the LOOCV approach is a valid alternative.
Thus, it is not surprising that, of the seven recently published studies exploring the estimation of HL using UAS photogrammetry, six employ the LOOCV approach [14,40,41,42,43,44] and one uses the 10-fold CV method [45]. No direct field-based validation on an independent dataset (the HV approach) was used in those studies, which makes the present study more systematic.

6. Conclusions

In this study, the effect of different orientation methods (InSO, GNSS-SO1, GNSS-SO2) applied to UAS images and PCs on the accuracy of plot-level HL estimates was tested in mixed, even-aged pedunculate oak forests characterized by flat terrain. The results confirmed the great potential of UAS photogrammetry in forest inventory, even when low-cost (consumer-grade) UASs are used. Such consumer-grade UASs are usually equipped with amateur cameras and low-quality GNSS sensors that record single-frequency GNSS data of lower accuracy. Thus, image orientation is usually performed with tie-points obtained from SfM and field-measured GCPs alone (InSO method) or with the addition of single-frequency GNSS measurements (GNSS-SO1 method). Despite their lower accuracy, this study showed that adding single-frequency GNSS measurements to tie-points and GCPs was useful, because it improved the accuracy of the HL estimates (GNSS-SO1 method). The highest accuracy of HL estimates was achieved by the GNSS-SO2 method, i.e., by adding PPK dual-frequency GNSS measurements to tie-points and GCPs. Unlike single-frequency GNSS measurements, which consumer-grade UASs can provide, such high-accuracy GNSS measurements are obtainable only with professional and more expensive UASs, which certainly limits their availability. The small differences among the results obtained by the different image block orientation methods indicated that, despite their low number and non-optimal spatial distribution, the GCPs contributed considerably to the block stability and accuracy of image orientation. This is especially important for the InSO method, which relies solely on GCPs for georeferencing, as well as for the GNSS-SO method when using low-accuracy GNSS data. It can be assumed that the relatively homogeneous forest structure and flat terrain in the study area also contributed to the small differences among the results obtained by the different orientation methods.
Consequently, greater differences could be expected in more complex forest areas in terms of forest structure and terrain characteristics. The findings of this research should serve as a basis for further studies that should include different forest types and other important forest variables such as diameter at breast height, basal area, volume, and biomass.

Author Contributions

Conceptualization, methodology and investigation, L.J. and I.B.; software, validation, formal analysis, L.J., M.G., A.S.M. and I.B.; writing—original draft preparation, L.J. and I.B.; supervision, writing—review and editing L.J., M.G., A.S.M. and I.B.; visualization, L.J., M.G.; project administration, L.J. and I.B.; funding acquisition, I.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been fully supported by the Croatian Science Foundation under the project IP-2016-06-7686 “Retrieval of Information from Different Optical 3D Remote Sensing Sources for Use in Forest Inventory (3D-FORINVENT)”. The work of doctoral student Luka Jurjević has been supported in part by the “Young researchers’ career development project—training of doctoral students” of the Croatian Science Foundation funded by the European Union from the European Social Fund.

Acknowledgments

We would like to express our gratitude to the technicians of the Croatian Forest Research Institute who helped set up and measure the sample plots. Special thanks to Hrvatske vode, Zagreb, Croatia, for providing the LiDAR DTM.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, D. The potential and challenge of remote sensing-based biomass estimation. Int. J. Remote Sens. 2006, 27, 1297–1328. [Google Scholar] [CrossRef]
  2. White, J.C.; Coops, N.C.; Wulder, M.A.; Vastaranta, M.; Hilker, T.; Tompalski, P. Remote Sensing Technologies for Enhancing Forest Inventories: A Review. Can. J. Remote Sens. 2016, 42, 619–641. [Google Scholar] [CrossRef] [Green Version]
  3. Næsset, E. Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sens. Environ. 2002, 80, 88–99. [Google Scholar] [CrossRef]
  4. Coops, N.C.; Hilker, T.; Wulder, M.; St-Onge, B.; Newnham, G.; Siggins, A.; Trofymow, J.A. Estimating canopy structure of Douglas-fir forest stands from discrete-return LiDAR. Trees 2007, 21, 295–310. [Google Scholar] [CrossRef] [Green Version]
  5. Rahlf, J.; Breidenbach, J.; Solberg, S.; Næsset, E.; Astrup, R. Comparison of four types of 3D data for timber volume estimation. Remote Sens. Environ. 2014, 155, 325–333. [Google Scholar] [CrossRef]
  6. Smreček, R.C.; Michnová, Z.V.; Sačkov, I.; Danihelová, Z.; Levická, M.; Tuček, J. Determining basic forest stand characteristics using airborne laser scanning in mixed forest stands of Central Europe. iForest 2018, 11, 181–188. [Google Scholar] [CrossRef] [Green Version]
  7. Ørka, H.O.; Bollandsås, O.M.; Hansen, E.H.; Næsset, E.; Gobakken, T. Effects of terrain slope and aspect on the error of ALS-based predictions of forest attributes. Forestry 2018, 91, 225–237. [Google Scholar] [CrossRef] [Green Version]
  8. Balenović, I.; Simic Milas, A.; Marjanović, H. A Comparison of Stand-Level Volume Estimates from Image-Based Canopy Height Models of Different Spatial Resolutions. Remote Sens. 2017, 9, 205. [Google Scholar] [CrossRef] [Green Version]
  9. Rahlf, J.; Breidenbach, J.; Solberg, S.; Næsset, E.; Astrup, R. Digital aerial photogrammetry can efficiently support large-area forest inventories in Norway. Forestry 2017, 90, 710–718. [Google Scholar] [CrossRef]
  10. Kangas, A.; Gobakken, T.; Puliti, S.; Hauglin, M.; Næsset, E. Value of airborne laser scanning and digital aerial photogrammetry data in forest decision making. Silva Fenn. 2018, 52, 19. [Google Scholar] [CrossRef] [Green Version]
  11. Goodbody, T.R.; Coops, N.C.; White, J.C. Digital Aerial Photogrammetry for Updating Area-Based Forest Inventories: A Review of Opportunities, Challenges, and Future Directions. Curr. Forestry Rep. 2019, 5, 55–75. [Google Scholar] [CrossRef] [Green Version]
  12. Noordermeer, L.; Bollandsås, O.M.; Ørka, H.O.; Næsset, E.; Gobakken, T. Comparing the accuracies of forest attributes predicted from airborne laser scanning and digital aerial photogrammetry in operational forest inventories. Remote Sens. Environ. 2019, 226, 26–37. [Google Scholar] [CrossRef]
  13. Strunk, J.; Packalen, P.; Gould, P.; Gatziolis, D.; Maki, C.; Andersen, H.E.; McGaughey, R.J. Large Area Forest Yield Estimation with Pushbroom Digital Aerial Photogrammetry. Forests 2019, 10, 397. [Google Scholar] [CrossRef] [Green Version]
  14. Puliti, S.; Ørka, H.O.; Gobakken, T.; Næsset, E. Inventory of Small Forest Areas Using an Unmanned Aerial System. Remote Sens. 2015, 7, 9632–9654. [Google Scholar] [CrossRef] [Green Version]
  15. Wallace, L.; Lucieer, A.; Malenovský, Z.; Turner, D.; Vopěnka, P. Assessment of Forest Structure Using Two UAV Techniques: A Comparison of Airborne Laser Scanning and Structure from Motion (SfM) Point Clouds. Forests 2016, 7, 62. [Google Scholar] [CrossRef] [Green Version]
  16. Panagiotidis, D.; Abdollahnejad, A.; Surový, P.; Chiteculo, V. Determining tree height and crown diameter from high-resolution UAV imagery. Int. J. Remote Sens. 2017, 38, 2392–2410. [Google Scholar] [CrossRef]
  17. Giannetti, F.; Chirici, G.; Gobakken, T.; Næsset, E.; Travaglini, D.; Puliti, S. A new approach with DTM-independent metrics for forest growing stock prediction using UAV photogrammetric data. Remote Sens. Environ. 2018, 213, 195–205. [Google Scholar] [CrossRef]
  18. Krause, S.; Sanders, T.G.; Mund, J.-P.; Greve, K. UAV-Based Photogrammetric Tree Height Measurement for Intensive Forest Monitoring. Remote Sens. 2019, 11, 758. [Google Scholar] [CrossRef] [Green Version]
  19. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. 2014, 92, 79–97. [Google Scholar] [CrossRef] [Green Version]
  20. Torresan, C.; Berton, A.; Carotenuto, F.; Di Gennaro, S.F.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L. Forestry applications of UAVs in Europe: A review. Int. J. Remote Sens. 2017, 38, 2427–2447. [Google Scholar] [CrossRef]
  21. Whitehead, K.; Hugenholtz, C.H.; Myshak, S.; Brown, O.; LeClair, A.; Tamminga, A.; Barchyn, T.E.; Moorman, B.; Eaton, B. Remote sensing of the environment with small unmanned aircraft systems (UASs), part 2: Scientific and commercial applications. J. Unmanned Veh. Syst. 2014, 2, 86–102. [Google Scholar] [CrossRef] [Green Version]
  22. Tang, L.; Shao, G. Drone remote sensing for forestry research and practices. J. For. Res. 2015, 26, 791–797. [Google Scholar] [CrossRef]
  23. Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the World from Internet Photo Collections. Int. J. Comput. Vis. 2008, 80, 189–210. [Google Scholar] [CrossRef] [Green Version]
  24. Fonstad, M.A.; Dietrich, J.T.; Courville, B.C.; Jensen, J.L.; Carbonneau, P.E. Topographic Structure from Motion: A New Development in Photogrammetric Measurement. Earth Surf. Process. Landf. 2013, 38, 421–430. [Google Scholar] [CrossRef] [Green Version]
  25. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogramm. Rec. 2014, 29, 144–166. [Google Scholar] [CrossRef] [Green Version]
  26. Mlambo, R.; Woodhouse, I.H.; Gerard, F.; Anderson, K. Structure from Motion (SfM) Photogrammetry with Drone Data: A Low Cost Method for Monitoring Greenhouse Gas Emissions from Forests in Developing Countries. Forests 2017, 8, 68. [Google Scholar] [CrossRef] [Green Version]
  27. Goodbody, T.R.; Coops, N.C.; Hermosilla, T.; Tompalski, P.; Crawford, P. Assessing the status of forest regeneration using digital aerial photogrammetry and unmanned aerial systems. Int. J. Remote Sens. 2018, 39, 5246–5264. [Google Scholar] [CrossRef]
  28. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. Forestry Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef] [Green Version]
  29. Dall’Asta, E.; Thoeni, K.; Santise, M.; Forlani, G.; Giacomini, A.; Roncella, R. Network Design and Quality Checks in Automatic Orientation of Close-Range Photogrammetric Blocks. Sensors 2015, 15, 7985–8008. [Google Scholar] [CrossRef]
  30. Micheletti, N.; Chandler, J.H.; Lane, S.N. Structure from motion (SFM) photogrammetry. In Geomorphological Techniques; Clarke, L.E., Nield, J.M., Eds.; British Society for Geomorphology: London, UK, 2015; pp. 1–12, Online edition, Chapter 2, Section 2.2. [Google Scholar]
  31. Carrivick, J.L.; Smith, M.W.; Quincey, D.J. Structure from Motion in the Geosciences; John Wiley & Sons: Hoboken, NJ, USA, 2016; p. 197. [Google Scholar] [CrossRef]
  32. Benassi, F.; Dall’Asta, E.; Diotri, F.; Forlani, G.; Morra di Cella, U.; Roncella, R.; Santise, M. Testing Accuracy and Repeatability of UAV Blocks Oriented with GNSS-Supported Aerial Triangulation. Remote Sens. 2017, 9, 172. [Google Scholar] [CrossRef] [Green Version]
  33. Grayson, B.; Penna, N.T.; Mills, J.P.; Grant, D.S. GPS precise point positioning for UAV photogrammetry. Photogramm. Rec. 2018, 33, 427–447. [Google Scholar] [CrossRef] [Green Version]
  34. Zhang, H.; Aldana-Jague, E.; Clapuyt, F.; Wilken, F.; Vanacker, V.; Van Oost, K. Evaluating the potential of post-processing kinematic (PPK) georeferencing for UAV-based structure-from-motion (SfM) photogrammetry and surface change detection. Earth Surf. Dynam. 2019, 7, 807–827. [Google Scholar] [CrossRef] [Green Version]
  35. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of Unmanned Aerial Vehicle (UAV) and SfM Photogrammetry Survey as a Function of the Number and Location of Ground Control Points Used. Remote Sens. 2018, 10, 1606. [Google Scholar] [CrossRef] [Green Version]
  36. Hugenholtz, C.; Brown, O.; Walker, J.; Barchyn, T.; Nesbit, P.; Kucharczyk, M.; Myshak, S. Spatial accuracy of UAV-derived orthoimagery and topography: Comparing photogrammetric models processed with direct geo-referencing and ground control points. Geomatica 2016, 70, 21–30. [Google Scholar] [CrossRef]
  37. Padró, J.C.; Muñoz, F.J.; Planas, J.; Pons, X. Comparison of four UAV georeferencing methods for environmental monitoring purposes focusing on the combined use with airborne and satellite remote sensing platforms. Int. J. Appl. Earth Obs. 2019, 75, 130–140. [Google Scholar] [CrossRef]
  38. Gašparović, M.; Seletković, A.; Berta, A.; Balenović, I. The Evaluation of Photogrammetry-Based DSM from Low-Cost UAV by LiDAR-Based DSM. South-East Eur. For. 2017, 8, 117–125. [Google Scholar] [CrossRef] [Green Version]
  39. Tomaštík, J.; Mokroš, M.; Saloň, Š.; Chudý, F.; Tunák, D. Accuracy of Photogrammetric UAV-Based Point Clouds under Conditions of Partially-Open Forest Canopy. Forests 2017, 8, 151. [Google Scholar] [CrossRef]
  40. Tuominen, S.; Balazs, A.; Saari, H.; Pölönen, I.; Sarkeala, J.; Viitala, R. Unmanned aerial system imagery and photogrammetric canopy height data in area-based estimation of forest variables. Silva Fenn. 2015, 49, 1348. [Google Scholar] [CrossRef] [Green Version]
  41. Ota, T.; Ogawa, M.; Mizoue, N.; Fukumoto, K.; Yoshida, S. Forest Structure Estimation from a UAV-Based Photogrammetric Point Cloud in Managed Temperate Coniferous Forests. Forests 2017, 8, 343. [Google Scholar] [CrossRef]
  42. Tuominen, S.; Balazs, A.; Honkavaara, E.; Pölönen, I.; Saari, H.; Hakala, T.; Viljanen, N. Hyperspectral UAV-imagery and photogrammetric canopy height model in estimating forest stand variables. Silva Fenn. 2017, 51, 7721. [Google Scholar] [CrossRef] [Green Version]
  43. Balenović, I.; Jurjević, L.; Simic Milas, A.; Gašparović, M.; Ivanković, D.; Seletković, A. Testing the Applicability of the Official Croatian DTM for Normalization of UAV-based DSMs and Plot-level Tree Height Estimations in Lowland Forests. Croat. J. For. Eng. 2019, 40, 163–174. [Google Scholar]
  44. Cao, L.; Liu, H.; Fu, X.; Zhang, Z.; Shen, X.; Ruan, H. Comparison of UAV LiDAR and Digital Aerial Photogrammetry Point Clouds for Estimating Forest Structural Attributes in Subtropical Planted Forests. Forests 2019, 10, 145. [Google Scholar] [CrossRef] [Green Version]
  45. Shen, X.; Cao, L.; Yang, B.; Xu, Z.; Wang, G. Estimation of Forest Structural Attributes Using Spectral Indices and Point Clouds from UAS-Based Multispectral and RGB Imageries. Remote Sens. 2019, 11, 800. [Google Scholar] [CrossRef] [Green Version]
  46. Picard, R.R.; Cook, R.D. Cross-validation of regression models. J. Am. Stat. Assoc. 1984, 79, 575–583. [Google Scholar] [CrossRef]
  47. Varma, S.; Simon, R. Bias in error estimation when using cross-validation for model selection. BMC Bioinform. 2006, 7, 91. [Google Scholar] [CrossRef] [Green Version]
  48. Dragčević, D.; Pavasović, M.; Bašić, T. Accuracy validation of official Croatian geoid solutions over the area of City of Zagreb. Geofizika 2016, 33, 183–206. [Google Scholar] [CrossRef]
  49. Stereńczak, K.; Mielcarek, M.; Wertz, B.; Bronisz, K.; Zajączkowski, G.; Jagodziński, A.M.; Ochał, W.; Skorupski, M. Factors influencing the accuracy of ground-based tree-height measurements for major European tree species. J. Environ. Manage. 2019, 231, 1284–1292. [Google Scholar] [CrossRef]
  50. Michailoff, I. Zahlenmässiges Verfahren für die Ausführung der Bestandeshöhenkurven [Numerical estimation of stand height curves]. Cbl. und Thar. Forstl. Jahrbuch 1943, 6, 273–279. [Google Scholar]
  51. Pepe, M.; Fregonese, L.; Scaioni, M. Planning airborne photogrammetry and remote-sensing missions with modern platforms and sensors. Eur. J. Remote Sens. 2018, 51, 412–436. [Google Scholar] [CrossRef]
  52. Terrasolid Ltd. 2012: Terrascan. Available online: http://www.terrasolid.fi/en/products/terrascan (accessed on 19 December 2019).
  53. Gašparović, M.; Simic Milas, A.; Seletković, A.; Balenović, I. A novel automated method for the improvement of photogrammetric DTM accuracy in forests. Šumar. List 2018, 142, 567–576. [Google Scholar] [CrossRef]
  54. Balenović, I.; Gašparović, M.; Simic Milas, A.; Berta, A.; Seletković, A. Accuracy Assessment of Digital Terrain Models of Lowland Pedunculate Oak Forests Derived from Airborne Laser Scanning and Photogrammetry. Croat. J. For. Eng. 2018, 39, 117–128. [Google Scholar]
  55. Rehak, M.; Skaloud, J. Time synchronization of consumer cameras on Micro Aerial Vehicles. ISPRS J. Photogramm. 2017, 123, 114–123. [Google Scholar] [CrossRef]
  56. Vautherin, J.; Rutishauser, S.; Schneider-Zapp, K.; Choi, H.F.; Chovancova, V.; Glass, A.; Strecha, C. Photogrammetric accuracy and modeling of rolling shutter cameras. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 139–146. [Google Scholar]
  57. AgiSoft LLC. 2018: AgiSoft PhotoScan Professional (Version 1.4.3). Available online: http://www.agisoft.com/downloads/installer/ (accessed on 19 December 2019).
  58. Jurjević, L.; Balenović, I.; Gašparović, M.; Simić Milas, A.; Marjanović, H. Testing the UAV-based Point Clouds of Different Densities for Tree- and Plot-level Forest Measurements. In Proceedings of the 6th Conference for Unmanned Aerial Systems for Environmental Research, FESB, Split, Croatia, 27–29 June 2018; p. 56. [Google Scholar] [CrossRef]
  59. McGaughey, R.J. FUSION/LDV: Software for LiDAR Data Analysis and Visualization; Version 3.80; USDA Forest Service Pacific Northwest Research Station: Seattle, WA, USA, 2018; p. 209. [Google Scholar]
  60. Hill, T.; Lewicki, P. STATISTICS: Methods and Applications; StatSoft, Inc.: Tulsa, OK, USA, 2007; p. 800. [Google Scholar]
  61. Cassotti, M.; Grisoni, F.; Todeschini, R. Reshaped Sequential Replacement algorithm: An efficient approach to variable selection. Chemometr. Intell. Lab. 2014, 133, 136–148. [Google Scholar] [CrossRef]
  62. Seifert, E.; Seifert, S.; Vogt, H.; Drew, D.; van Aardt, J.; Kunneke, A.; Seifert, T. Influence of Drone Altitude, Image Overlap, and Optical Sensor Resolution on Multi-View Reconstruction of Forest Images. Remote Sens. 2019, 11, 1252. [Google Scholar] [CrossRef] [Green Version]
  63. Wulder, M.A.; White, J.C.; Nelson, R.F.; Næsset, E.; Ørka, H.O.; Coops, N.C.; Hilker, T.; Bater, C.W.; Gobakken, T. Lidar sampling for large-area forest characterization: A review. Remote Sens. Environ. 2012, 121, 196–209. [Google Scholar] [CrossRef] [Green Version]
  64. Wang, M.; Beelen, R.; Eeftens, M.; Meliefste, K.; Hoek, G.; Brunekreef, B. Systematic evaluation of land use regression models for NO2. Environ. Sci. Technol. 2012, 46, 4481–4489. [Google Scholar] [CrossRef]
  65. Hoek, G.; Beelen, R.; De Hoogh, K.; Vienneau, D.; Gulliver, J.; Fischer, P.; Briggs, D. A review of land-use regression models to assess spatial variation of outdoor air pollution. Atmos. Environ. 2008, 42, 7561–7578. [Google Scholar] [CrossRef]
Figure 1. The small map (a) displays the location of the study area in Croatia. The main map (b) presents the study area in more detail with spatial distribution of sample plots (yellow circles), Ground Control Points (GCPs) (white triangles), and flight paths of the first (blue line) and second (red line) flights (background: orthophoto of aerial images from summer 2015).
Figure 2. The simplified methodological workflow of the study.
Figure 3. Differences (∆) between the single-frequency and dual-frequency exposure (camera) position estimates for: (a) the first flight, and (b) the second flight (∆E—easting, ∆N—northing, ∆h—altitude). Differences (∆) are expressed in HTRS96/TM (EPSG: 3765).
Figure 4. Vertical differences (∆h) between: (a) InSO and GNSS-SO2 PCs for marginal plot; (b) GNSS-SO1 and GNSS-SO2 PCs for marginal plot; (c) InSO and GNSS-SO2 PCs for central plot; (d) GNSS-SO1 and GNSS-SO2 PCs for central plot.
Figure 5. Observed vs. predicted plot-level Lorey’s mean height (HL) for selected models (Table 5): (a) modeling dataset; (b) hold-out validation (HV), i.e., field validation across independent sample plots; (c) the leave-one-out cross-validation (LOOCV).
Table 1. Summary of the main forest attributes for the 99 measured sample plots.
| Forest Attribute | Minimum | Maximum | Mean | Standard Deviation |
|---|---|---|---|---|
| Age (years) | 43 | 148 | 86 | 41 |
| Mean dbh (cm) | 17.0 | 69.4 | 32.8 | 14.0 |
| Lorey's mean height (m) | 18.2 | 37.9 | 26.3 | 5.2 |
| Stem density (trees·ha−1) | 56 | 1840 | 534 | 375 |
| Basal area (m2·ha−1) | 13.7 | 56.4 | 29.9 | 7.6 |
| Volume (m3·ha−1) | 158.0 | 963.5 | 398.8 | 149.3 |
Table 2. Technical characteristics of Trimble UX5 HP Unmanned Aerial System (UAS) used in this study.
| UAS | Trimble UX5 HP |
|---|---|
| Type | Fixed wing |
| Weight | 2.4 kg |
| Wingspan | 1 m |
| Battery | 14.8 V, 6600 mAh |
| Endurance | 40 min |
| Camera | Sony ILCE-7R |
| Sensor size | Full Frame (35.9 × 24 mm) |
| Field of view | W 55°, H 37° |
| Image size | 7360 × 4912 |
| Focal length | 35 mm |
| GNSS receiver | Dual-frequency L1/L2 (GPS, Glonass, Beidou, Galileo ready) |
Table 3. Basic statistics of differences between the single-frequency and dual-frequency exposure (camera) position estimates for the first and second flights (E—easting, N—northing, h—altitude).
| Flight | ME E (m) | ME N (m) | ME h (m) | MAE E (m) | MAE N (m) | MAE h (m) | MAD E (m) | MAD N (m) | MAD h (m) |
|---|---|---|---|---|---|---|---|---|---|
| First | −0.45 | −1.35 | 0.80 | 3.28 | 1.35 | 0.87 | 2.93 | 0.13 | 0.40 |
| Second | −1.06 | 0.23 | 3.04 | 2.94 | 0.26 | 3.04 | 2.58 | 0.12 | 0.47 |

ME—mean error; MAE—mean absolute error; MAD—median absolute deviation.
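The statistics in Table 3 can be computed per coordinate from the per-exposure position differences, for example as below. This is a generic numpy sketch; it assumes the median absolute deviation is taken about the median of the differences.

```python
import numpy as np

def error_stats(diff):
    """Summary statistics for coordinate differences between two
    GNSS solutions (one value per image exposure)."""
    diff = np.asarray(diff, dtype=float)
    me = diff.mean()                                  # mean error (bias)
    mae = np.abs(diff).mean()                         # mean absolute error
    mad = np.median(np.abs(diff - np.median(diff)))   # median absolute deviation
    return me, mae, mad
```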
Table 4. Metrics extracted from UAS point clouds (PCs), calculated for each plot and used as potential independent variables in statistical analysis (generation and validation of Lorey’s mean height models).
Variable GroupVariable (Abbreviation and Description)
Height metricshmin—minimum height; hmax—maximum height; hmean—mean height; hmode—mode height
Height variability metricsSD—standard deviation; VAR—variance; CV—coefficient of variation; IQ—interquartile distance; Skew—skewness; Kurt—kurtosis; AAD—average absolute deviation; MADmed—Median of the absolute deviations from the overall median; MADmode—Median of the absolute deviations from the overall mode; CRR—Canopy relief ratio ((mean − min)/(max − min)); SQRTmeanSQ—Generalized mean for the 2nd power (Elevation quadratic mean); CURTmeanCUBE—Generalized mean for the 3nd power (Elevation cubic mean); L1, L2, L3, L4—L moments; LCV—moment coefficient of variation; Lskew—moment skewness; Lkurt—moment kurtosis
Height percentilesPh (h = 1st, 5th, 10th, 20th, 25th, 30th, 40th, 50th, 60th, 70th, 75th, 80th, 90th, 95th, 99th percentiles)
Canopy cover metricsCCh—percentage of points above h (h = 2 m, 5 m, 10 m, 15 m, 20 m, 25 m, 30 m, mean, mode)
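Most of the Table 4 metrics are one-line reductions over the normalized point heights of a plot. A minimal NumPy sketch of a few of them follows (a sketch, not the software the authors used; SD is taken here as the sample standard deviation):

```python
import numpy as np

def plot_metrics(heights):
    """A few of the Table 4 plot-level metrics computed from the
    normalized point heights of one plot (1-D array, metres)."""
    h = np.asarray(heights, dtype=float)
    return {
        "hmean": h.mean(),
        "SD": h.std(ddof=1),                                 # sample standard deviation
        "CRR": (h.mean() - h.min()) / (h.max() - h.min()),   # canopy relief ratio
        "SQRTmeanSQ": np.sqrt(np.mean(h ** 2)),              # elevation quadratic mean
        "CURTmeanCUBE": np.cbrt(np.mean(h ** 3)),            # elevation cubic mean
        "P95": np.percentile(h, 95),                         # 95th height percentile
        "CC2": 100.0 * np.mean(h > 2.0),                     # % of points above 2 m
    }
```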
Table 5. Plot-level HL models (equations with parameters and included predictors) developed using multivariate linear regression (backward stepwise) for each point cloud (orientation method).

| Orientation method | Model |
|---|---|
| InSO | HL = 3.085 + 0.485·hmax + 0.641·AAD + 0.291·SQRTmeanSQ |
| GNSS-SO1 | HL = 2.365 + 0.529·hmax + 0.511·CURTmeanCUBE − 6.202·Lkurt − 0.180·P5 |
| GNSS-SO2 | HL = 0.864 + 0.298·SD + 0.880·P95 |
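Once the Table 4 metrics are computed for a plot, the fitted equations can be applied directly. Below is a short Python sketch with the Table 5 coefficients copied verbatim; the dictionary keys are illustrative names for the required metrics, not an interface from the paper:

```python
def predict_hl(metrics, method="GNSS-SO2"):
    """Plot-level Lorey's mean height (m) from the Table 5 equations.
    `metrics` maps metric abbreviations (as in Table 4) to values."""
    if method == "InSO":
        return (3.085 + 0.485 * metrics["hmax"]
                + 0.641 * metrics["AAD"]
                + 0.291 * metrics["SQRTmeanSQ"])
    if method == "GNSS-SO1":
        return (2.365 + 0.529 * metrics["hmax"]
                + 0.511 * metrics["CURTmeanCUBE"]
                - 6.202 * metrics["Lkurt"]
                - 0.180 * metrics["P5"])
    if method == "GNSS-SO2":
        return 0.864 + 0.298 * metrics["SD"] + 0.880 * metrics["P95"]
    raise ValueError(f"unknown orientation method: {method}")
```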
Table 6. Results of multivariate linear regression (modeling dataset) and validation results (HV and LOOCV) for selected plot-level HL models described in Table 5.

| Dataset | Orientation method | R²adj | RMSE (m) | RMSE% (%) | ME (m) | ME% (%) |
|---|---|---|---|---|---|---|
| Modeling dataset | InSO | 0.943 | 1.182 | 4.488 | <0.001 | <0.001 |
| | GNSS-SO1 | 0.953 | 1.067 | 4.050 | <0.001 | <0.001 |
| | GNSS-SO2 | 0.958 | 1.024 | 3.888 | <0.001 | <0.001 |
| Validation (HV) | InSO | 0.921 | 1.458 | 5.551 | 0.169 | 0.643 |
| | GNSS-SO1 | 0.925 | 1.403 | 5.344 | 0.167 | 0.637 |
| | GNSS-SO2 | 0.935 | 1.361 | 5.183 | 0.068 | 0.259 |
| Validation (LOOCV) | InSO | 0.937 | 1.276 | 4.843 | −0.015 | −0.056 |
| | GNSS-SO1 | 0.948 | 1.150 | 4.365 | −0.007 | −0.026 |
| | GNSS-SO2 | 0.955 | 1.069 | 4.057 | 0.001 | 0.004 |
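The accuracy measures in Table 6 follow the usual definitions. A minimal NumPy sketch is given below; the residual sign convention (observed minus predicted) is an assumption, and the relative values are expressed against the mean observed HL:

```python
import numpy as np

def validation_stats(observed, predicted):
    """RMSE, RMSE%, ME, and ME% as reported in Table 6, with relative
    values expressed as a percentage of the mean observed value."""
    obs = np.asarray(observed, float)
    pred = np.asarray(predicted, float)
    resid = obs - pred                      # assumed sign convention
    rmse = np.sqrt(np.mean(resid ** 2))
    me = resid.mean()
    return rmse, 100.0 * rmse / obs.mean(), me, 100.0 * me / obs.mean()
```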

Share and Cite

Jurjević, L.; Gašparović, M.; Milas, A.S.; Balenović, I. Impact of UAS Image Orientation on Accuracy of Forest Inventory Attributes. Remote Sens. 2020, 12, 404. https://doi.org/10.3390/rs12030404
