Article

Combining Interior Orientation Variables to Predict the Accuracy of RPAS–SfM 3D Models

by Alessandra Capolupo 1,*, Mirko Saponaro 1, Enrico Borgogno Mondino 2 and Eufemia Tarantino 1

1 Department of Civil, Environmental, Land, Construction and Chemistry (DICATECh), Politecnico di Bari, Via Orabona 4, 70125 Bari, Italy
2 Department of Agriculture, Forest and Food Sciences (DISAFA), Università degli Studi di Torino, Largo Braccini 2, 10095 Grugliasco (TO), Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(17), 2674; https://doi.org/10.3390/rs12172674
Submission received: 2 July 2020 / Revised: 6 August 2020 / Accepted: 18 August 2020 / Published: 19 August 2020

Abstract:
Remotely piloted aerial systems (RPAS) have been recognized as an effective low-cost tool to acquire photogrammetric data of poorly accessible areas, reducing collection and processing time. Data processing techniques like structure from motion (SfM) and multiview stereo (MVS) can nowadays provide detailed 3D models with an accuracy comparable to that of other conventional approaches. The accuracy of RPAS-based measures is strongly dependent on the type of sensor adopted. Nevertheless, up to now, no investigation has addressed the relationship between camera calibration parameters and the final accuracy of measures. In this work, the authors tried to fill this gap by exploring those dependencies, with the aim of proposing a prediction function able to quantify the potential final error as a function of camera parameters. Predictive functions were estimated by combining multivariate and linear statistical techniques. Four photogrammetric RPAS acquisitions, supported by ground surveys, were considered to calibrate the predictive model, while a further acquisition was used to test and validate it. Results are preliminary, but promising. The calibrated predictive functions relating camera internal orientation (I.O.) parameters to the final accuracy of measures (root mean squared error) showed high reliability and accuracy.

Graphical Abstract

1. Introduction

Descriptions of the earth’s morphology and environmental processes involve the adoption of a large amount of data from a wide range of branches of knowledge, such as geology [1], hydrology [2] and geomorphology [3]. All of them require a digital surface model (DSM) as the basis to describe surface morphometry. DSM resolution and accuracy strongly affect landform mapping: in fact, geomorphic elements located at the same position can be interpreted differently depending on DSM resolution [4]. Consequently, DSM resolution must be consistent with the size of the investigated element to optimize results [5]. Over the years, several approaches were proposed to improve the accuracy of 3D models and the detail level of surveys. Among these, the terrestrial laser scanner (TLS) was widely considered the gold standard to meet such requirements [6], even if its application entails high costs in terms of both technological equipment and acquisition/processing time. Researchers therefore looked for a valid low-cost alternative to overcome these operational limits. Rieke-Zapp et al. [7] demonstrated that photogrammetry is a convincing tool to generate 3D models with accuracy and resolution comparable to those from TLS.
RPAS (remotely piloted aerial systems) represent the most significant innovation affecting the photogrammetric field: they allow flights over poorly accessible areas, are timely flexible and permit detailed acquisitions at low altitude, minimizing cloud-cover-related issues [8]. Moreover, they have deeply penetrated the market, stimulating the development of new sensors and determining a significant reduction of related costs [9]. Recent technological advances have permitted weight and size reductions of sensors, making them suitable for small flying platforms [9]. Data processing techniques have also improved: in particular, structure from motion (SfM) and computer vision (CV) approaches have enormously empowered the photogrammetric workflow in terms of accuracy, spatial resolution and ease of use. Consequently, the integration of RPAS, SfM and CV nowadays makes it possible to generate highly detailed textured models comparable to those generated through more traditional methods.
For these reasons, RPAS have been widely proposed to face various environmental issues [10,11,12,13,14]. Many scholars have focused on factors influencing the final accuracy of RPAS measurements. Flight height and sensor technical features (pixel size, focal length, sensor size, etc.) together with sensor interior orientation modeling can strongly affect final results [15,16]. A validated methodology for maximizing reliability of the final products was designed by [17,18], taking into account several aspects like: (a) influence of flight plan geometry [11,19]; (b) impact of ground control points (GCPs) georeferencing and distribution [20,21,22]; (c) optimization of parameterization during the bundle adjustment phase [23,24].
In this context, a key element is the estimation of camera calibration coefficients. In fact, RPAS are commonly equipped with low-cost, nonmetric cameras whose internal orientation (I.O.) is natively unknown and must be, somehow, estimated [25]. Focal length (f), principal point offset (xp, yp), and the radial (K1, K2, K3, K4) and decentering (P1, P2, P3, P4) coefficients of the lens distortion functions are the main I.O. parameters. Knowledge of these parameters is essential to generate accurate measurements but, when using SfM, they show a strong geometric instability depending on the low photogrammetric quality of the camera [26]. Differently, for CV-based applications the focal length is the only parameter considered in the camera calibration procedure [27]. Moreover, as demonstrated in [28], I.O. parameter estimation for a low-cost, nonmetric camera is extremely complex and, sometimes, impossible. To face this issue, numerous algorithms have been implemented in the most widespread photogrammetric software, e.g., Agisoft PhotoScan Professional (Agisoft LLC, St. Petersburg, Russia) [29], Pix4D [30], APERO [31], Graphos [32], VisualSFM [33] and MicMac [34]. Nevertheless, results do not seem to be stable and vary according to the processed set of images, even when referred to the same area [35]. Much research has explored the impact of camera calibration parameters on the final results, but without trying to model the existing relationship [36].
This study is intended to fill this gap by exploring the relationships among I.O. parameter estimates and modeling predictive functions. These goals were achieved with reference to several sets of data collected at different times over the same area: the shoreline of Torre a Mare (Bari, Southern Italy), well known for the beauty of its landscape. Finally, a new approach, based on a combination of uni- and multivariate statistics, suitable for predicting the error components affecting final 3D models, is proposed.

2. Materials and Methods

2.1. Study Area

A shoreline stretch of about 400 m located in Torre a Mare (Apulia region), a district 12 km away from Bari, was selected as the pilot site. The area presents a rugged coastline, characterized by an irregular elevation ranging between 1 and 5 m, with a large number of small sandy bays and coves (Figure 1) [20]. Over the years, reefs collapsed, and their shapes were modeled by anthropic and natural phenomena, including soil erosion by sea tides, determining a typical stepped profile. Slopes are pointed, while overlying surfaces, dating back to the Upper Pleistocene age, are more shaped and smoothed. Geologists confirmed the wide morphogenetic variability of the coastline structure in this area [20].
Subhorizontal surfaces, close to the reef, are mainly composed of calcarenitic lithotypes showing a peculiar resistance to mechanical erosion processes. At the global scale, stratified layers and fracturing systems, subparallel to the coastline (ONO–ESE), determine favorable conditions for land instability due to hydric erosion, accelerating the natural withdrawal of coast fractions (Figure 2). Moreover, the local lack of vegetation increases imperviousness and, consequently, facilitates erosion dynamics [32]. The area is extremely complex from the archaeological perspective as well. It hosts archeological evidence of a Neolithic community that was brought to light by excavation campaigns carried out in the past; currently, the area is in an evident state of abandonment [37,38].

2.2. Field Data Campaigns and Operative Workflow

An extensive flight campaign was planned between 2018 and 2019. The flight area was selected far away from urban sites, to comply with Italian national regulations on RPAS operations [39,40]. Five flights were programmed and performed in December 2018, January 2019, February 2019, March 2019 and October 2019, respectively (Table 1). Surveys were interrupted between April and September 2019 to respect operational restrictions related to the tourist season [39].
Flights were operated by a commercial quadcopter, the DJI Inspire 1, mounting a consumer DJI Zenmuse X3 camera (focal length 3.61 mm, pixel size 1.56 μm, effective pixels 12.4 M) equipped with a 3-axis gimbal (to compensate for accidental movements of the drone). The DJI Ground Station Pro app, proposed by the Chinese company DJI (Dà-Jiāng Innovations) [41], was also used to automatize the flight plan. It ensured that all the flights were executed along the same path and under the same conditions, e.g., cruising speed (4.0 m/s) and altitude (100 m AGL, above ground level) (Figure 3). Missions were planned to obtain an average ground sampling distance (GSD) of 0.043 m/pix and forward and side overlaps of 85% and 75%, respectively, as suggested by [42,43]. A total of 77 images per flight were acquired. The camera was set to a nadiral view and the stop&go mode was applied to reduce the collection of blurry images due to forward motion [42]. The RPAS position was recorded by a low-cost GNSS/INS positioning receiver and saved into a metadata file.
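The planned GSD follows from the standard photogrammetric relation GSD = pixel size × flight height / focal length. The minimal sketch below applies that generic formula (it is not the authors' planning tool) to the sensor values reported above:

```python
# Ground sampling distance (GSD) from flight parameters: a standard
# photogrammetric relation, not the authors' specific planning software.
# GSD = pixel_size * flight_height / focal_length (all lengths in metres).

def gsd(pixel_size_um: float, focal_mm: float, height_m: float) -> float:
    """Return the ground sampling distance in metres per pixel."""
    return (pixel_size_um * 1e-6) * height_m / (focal_mm * 1e-3)

# DJI Zenmuse X3 values reported in the text: 1.56 um pixel, 3.61 mm
# focal length, 100 m AGL flight altitude.
gsd_x3 = gsd(pixel_size_um=1.56, focal_mm=3.61, height_m=100.0)
print(f"GSD ~ {gsd_x3:.3f} m/pix")  # matches the planned 0.043 m/pix
```

The same relation can be inverted to choose the flight altitude required for a target GSD.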
To operate with the same ground control points throughout the tests, a GNSS survey campaign was carried out to position permanent natural elements having adequate size and color with respect to RPAS image features. Thirty homogeneously distributed points were consequently surveyed by a Leica Viva CS10/GS10 GNSS receiver and successively used as ground control points (GCPs) or check points (CPs). The survey was operated in network real-time kinematic (NRTK) mode based on the Leica SmartNet Italpos network and achieved a final 3D accuracy of 0.02 m. The reference system was RDN2008/UTM zone 33N (NE) (EPSG: 6708).
Image datasets were separately processed according to the workflow of Figure 4. The image blocks of December, January, February and October were used to calibrate the predictive model (see below), while the March dataset was used for validation.

2.3. Photogrammetric Products Generation

This study relies on the workflow proposed by [11,44,45,46]. The work flowchart is shown in Figure 5; steps are detailed in the next sections. Agisoft PhotoScan software (v.1.4.1, Agisoft LLC, St. Petersburg, Russia), currently known as Metashape, was used to photogrammetrically process the data.

2.3.1. First Step: Setting-Up Workspace and Dataset

This first step was aimed at properly setting workspace and removing blurry images, possibly compromising final outcomes. Five chunks (image blocks) of the same scenario were created and, for each of them, the same processing parameters were set to guarantee results comparability (Table 2).
Camera positioning accuracy was set equal to 3 m to make it consistent with the average 3D positioning accuracy of the RPAS GNSS receiver (approximately 2.54 m). This value is known to depend on the GNSS epoch recording frequency (Hz) and, consequently, on RPAS speed [46]. Image attitude accuracy was set equal to the software default value (10 degrees), since no information was available about attitude measurements from the RPAS-integrated IMU (inertial measurement unit). This value was considered precautionary. Agisoft PhotoScan allows parameters to be weighted during bundle block adjustment (BBA): tie point estimates are generally weighted three times less than GCP accuracy during the reconstruction phase [17]. Thus, a value equal to 3 was set in its parameterization. All of these parameters are crucial, being directly involved in the collinearity equations [47,48,49], and, consequently, they heavily affect the final accuracy of the solution. Software default values were instead accepted as weights for the attitude parameters (yaw, tilt, roll).
A quality assessment was performed to detect and remove “bad” images. This step is mandatory to improve final accuracy. A large number of issues conditioning image quality depend on the acquisition mode. For instance, the adoption of RPAS to image a complex scenario, like the study area, involves several problems due to the interaction between environmental conditions and equipment; in particular, heat and magnetic sources can impact the inertial measurement unit (IMU) and GNSS [8]. The “estimate image quality” procedure available within Agisoft PhotoScan was used to assess image characteristics, providing information about sharpness and detecting blurring and distortion. This procedure returns a score ranging between 0 and 1: the higher the value, the better the quality [50,51]. Moreover, to balance the colors of the final products, all images were homogenized in terms of brightness and contrast. Radiometric adjustments are known not to affect the efficiency of the SIFT algorithm during tie point detection.
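As a minimal illustration of this screening step, images scoring below the threshold can be discarded before processing. The scores below are invented for the example; in the actual workflow they come from PhotoScan's quality tool:

```python
# Sketch of the image screening step: keep only frames whose quality
# score (0-1, as returned by tools such as PhotoScan's "estimate image
# quality") reaches a threshold. Scores here are illustrative.

QUALITY_THRESHOLD = 0.5  # threshold adopted in the study

def filter_images(scores: dict) -> list:
    """Return the names of images passing the quality threshold."""
    return [name for name, q in scores.items() if q >= QUALITY_THRESHOLD]

scores = {"IMG_001": 0.82, "IMG_002": 0.47, "IMG_003": 0.80}
kept = filter_images(scores)
print(kept)  # IMG_002 is discarded as too blurry
```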
A scale-invariant feature transform (SIFT) approach was used to minimize projection errors [52,53] and facilitate the extraction of homologous points independently of brightness conditions, which, in these datasets, were strongly influenced by the presence of the sea.

2.3.2. Second Step: Image Block Orientation

The second step was aimed at automatically collecting tie points (sparse point cloud) and solving image orientation [54,55]. It was separately run for each chunk using the ‘High’ accuracy mode and setting a threshold of 0 for the “limits of key points and tie points” parameter. This choice was preferred to avoid uncontrolled filtering of measured points.
Bundle block adjustment (BBA) was performed including the I.O. parameters among the model unknowns to be estimated, using the camera optimization panel available in Agisoft PhotoScan. BBA outputs correspond to the estimates of tie-point coordinates in the object space, I.O. and external orientation (E.O.) parameters, and the GNSS lever arm offset. Moreover, BBA scales the entire photogrammetric block proportionally with respect to GCPs. Camera I.O. parameter estimates from all the chunks were recorded and organized into a table to be successively analyzed.

2.3.3. Third Step: Filtering and Georeferencing

Sparse point clouds (tie-point positions in the object space) were generated and systematic errors, mainly caused by nonlinear lens distortions [54], were estimated; measured points were manually filtered to optimize results and minimize image block distortions. Three criteria implemented in the “gradual selection” tool were considered: (a) photogrammetric restitution uncertainty; (b) projection accuracy; (c) reprojection error.
Criterion (a) is intended to remove points with low base–height ratios [49], i.e., all those points located at the edges of images, generally characterized by a higher degree of restitution uncertainty, which mainly depends on too small an overlap among pictures. Although this option does not affect final accuracy, it is useful for thinning clouds [56]. Conversely, (b) is aimed at detecting and cleaning out less reliable tie points [49]: a threshold value equal to 3 was used to exclude tie points with an uncertainty 3 times higher than the minimum one. Criterion (c) was intended to remove all points with a large residual value in order to drastically decrease restitution errors, improving the orientation parameter estimates [49]. It is worth remembering that residuals directly impact the representativeness of the root mean square error (RMSE, [40]) of GCPs and CPs, possibly making it unsuitable to define the actual final accuracy of measurements [51].
The three above-mentioned criteria permitted the removal of most inaccurate points from the clouds, thus improving consistency between model and reality. Threshold values adopted for each criterion were defined according to previous works [42,43]. After filtering, about 20% of the originally measured points were removed. To further improve the geometric modeling of the area, 30 GCPs were used [5], ensuring a marker reprojection error of less than 0.5 pixels. Once the alignment step was accomplished, a “progressive” cross-validation analysis was performed to test the actual accuracy of final measurements, as explained in Section 2.3.4.
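Criteria (b) and (c) can be sketched as simple threshold filters on the tie-point statistics. The record fields and sample values below are hypothetical, and criterion (a) is omitted for brevity; the thresholds echo those quoted in the text (3× the minimum projection uncertainty, 0.5 px reprojection error):

```python
# Hedged sketch of the "gradual selection" filters on the sparse cloud.
# Each point carries its projection-accuracy and reprojection-error
# statistics; field names and values are hypothetical.

def gradual_selection(points, acc_factor=3.0, reproj_max=0.5):
    """Keep tie points passing the projection-accuracy (b) and
    reprojection-error (c) criteria."""
    best_acc = min(p["proj_accuracy"] for p in points)
    return [
        p for p in points
        if p["proj_accuracy"] <= acc_factor * best_acc  # criterion (b)
        and p["reproj_error"] <= reproj_max             # criterion (c)
    ]

points = [
    {"proj_accuracy": 1.0, "reproj_error": 0.2},  # kept
    {"proj_accuracy": 4.0, "reproj_error": 0.2},  # fails (b): > 3x minimum
    {"proj_accuracy": 1.5, "reproj_error": 0.9},  # fails (c): > 0.5 px
]
kept = gradual_selection(points)
print(len(kept), "of", len(points), "tie points retained")
```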

2.3.4. Fourth Step: Progressive Cross-Validation

With respect to the 30 surveyed points, a cross-validation was run in a progressive mode, i.e., progressively migrating one point at a time from the training (GCPs) to the validation (CPs) set. The choice of the points to be progressively moved from GCPs to CPs was accomplished in a balanced way; peripheral and central points were alternately migrated, taking care to maintain a proper spatial distribution of the remaining GCPs [22,57]. Thirty-one configurations of CPs were finally obtained, varying from 0 (all surveyed points used as GCPs) to 30 points (all surveyed points used as CPs, with image orientation completely relying on a direct georeferencing approach) [50,58]. We refer to the 0 CPs situation as complete indirect georeferencing (CIG) and to the 30 CPs situation as direct georeferencing (DG) [21,50]. To make the procedure repeatable, points were imported in the same order for all analyzed subdatasets. Table 3 summarizes the order that was followed while migrating a point from GCP to CP.
Measurement accuracy was estimated with reference to the RMSE computed for both GCPs and CPs. RMSE was used to quantify error components (definitions are given further on), under the hypothesis that both random and systematic errors were Gaussian distributed [59]. It is worth remembering that the RMSE of GCPs only provides information about the goodness of fit of the calibrated equations, with no concern for the model’s capability of generalization.
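The progressive splitting and the RMSE evaluation can be sketched as follows. This is a generic illustration with a synthetic four-point set; in the study the migration order followed Table 3 and produced 31 configurations from 30 surveyed points:

```python
# Sketch of the progressive cross-validation of Section 2.3.4: points
# migrate one at a time from the GCP (training) to the CP (validation)
# set, and RMSE is computed for each split. Point IDs are synthetic.
import math

def rmse(residuals):
    """Root mean squared error of a list of residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def progressive_splits(point_ids, order):
    """Yield (gcp_set, cp_set) pairs as points migrate from GCPs to CPs."""
    gcps, cps = list(point_ids), []
    yield list(gcps), list(cps)      # 0 CPs: complete indirect georeferencing
    for pid in order:
        gcps.remove(pid)
        cps.append(pid)
        yield list(gcps), list(cps)  # ... up to all-CP: direct georeferencing

points = ["P1", "P2", "P3", "P4"]
splits = list(progressive_splits(points, order=["P2", "P4", "P1", "P3"]))
print(len(splits), "configurations")  # n_points + 1 (31 in the study)
```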

2.4. Analyses of I.O. Parameters Estimates

Camera calibration by SfM requires that camera I.O. parameters (including lens distortion) be precisely known to minimize errors possibly affecting the restitution step [60]. Such parameters are image independent and, consequently, do not depend on the position and attitude of the camera [54]. Several approaches have been proposed over the years to obtain appropriate estimates/measures of these parameters; camera self-calibration guarantees several benefits when working with low-cost cameras. It relies on the numeric estimation of I.O. parameters, which are included among the unknowns in the equation system solved during BBA. The solution is consistent with the data [61], with the assumption that I.O. parameters continuously vary in time and, consequently, need to be estimated for each specific image block.
Many algorithms can be used when facing this problem by SfM. Agisoft PhotoScan makes available the 10 parameters of Brown’s model [60,62,63,64,65] (Equations (1) and (2)) [49]:
$$\Delta x = \frac{\bar{x}}{c}\,\Delta f + \bar{x}r^{2}K_{1} + \bar{x}r^{4}K_{2} + \bar{x}r^{6}K_{3} + \bar{x}r^{8}K_{4} + \left[\left(2\bar{x}^{2} + r^{2}\right)P_{1} + 2P_{2}\bar{x}\bar{y}\right]\left(1 + P_{3}r^{2} + P_{4}r^{4}\right) + B_{1}\bar{x} + B_{2}\bar{y} \qquad (1)$$

$$\Delta y = \frac{\bar{y}}{c}\,\Delta f + \bar{y}r^{2}K_{1} + \bar{y}r^{4}K_{2} + \bar{y}r^{6}K_{3} + \bar{y}r^{8}K_{4} + \left[2P_{1}\bar{x}\bar{y} + \left(2\bar{y}^{2} + r^{2}\right)P_{2}\right]\left(1 + P_{3}r^{2} + P_{4}r^{4}\right) \qquad (2)$$

where c is the principal distance (focal length f), Δx and Δy are the image corrections, Δf is the correction to the initial principal distance value, x̄ and ȳ are the coordinates of a generic point referred to the principal point, Ki are the radial distortion coefficients, Pi are the tangential distortion coefficients, Bi are the in-plane correction parameters for differential scaling between the horizontal and vertical pixel spacing and non-orthogonality (axial skew) between the x and y axes, and r is the image radial distance, estimated using Equation (3):

$$r^{2} = \bar{x}^{2} + \bar{y}^{2} = \left(x - x_{p}\right)^{2} + \left(y - y_{p}\right)^{2} \qquad (3)$$

where xp and yp represent the principal point coordinates.
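Equations (1)–(3) can be coded directly. The sketch below follows the formulation given in the text; the demonstration values at the bottom are illustrative, not the calibrated estimates from the study:

```python
# Brown's 10-parameter distortion model of Equations (1)-(3), in the
# convention written in the text. Parameter values in the demo call are
# illustrative only.

def brown_correction(x, y, xp, yp, c, df,
                     K=(0.0, 0.0, 0.0, 0.0),
                     P=(0.0, 0.0, 0.0, 0.0),
                     B=(0.0, 0.0)):
    """Return the image corrections (dx, dy) for an image point (x, y)."""
    xb, yb = x - xp, y - yp            # coordinates w.r.t. principal point
    r2 = xb * xb + yb * yb             # Equation (3)
    K1, K2, K3, K4 = K
    P1, P2, P3, P4 = P
    B1, B2 = B
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3 + K4 * r2**4
    tang = 1.0 + P3 * r2 + P4 * r2**2
    dx = (xb / c) * df + xb * radial \
        + ((2 * xb * xb + r2) * P1 + 2 * P2 * xb * yb) * tang \
        + B1 * xb + B2 * yb            # Equation (1)
    dy = (yb / c) * df + yb * radial \
        + (2 * P1 * xb * yb + (2 * yb * yb + r2) * P2) * tang  # Equation (2)
    return dx, dy

# Hypothetical parameters: a point 100 px right, 50 px above the centre.
dx, dy = brown_correction(100.0, 50.0, xp=2.0, yp=-1.5, c=2314.0, df=0.6,
                          K=(1e-9, 0.0, 0.0, 0.0))
print(dx, dy)
```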
The full 10-parameter model, Equations (1)–(3), is adopted as the default when using the fully automatic camera calibration procedure. Although, practically, it could appear to be the best choice from an accuracy point of view, in many cases it does not represent the optimal solution, because of the different role and significance of those parameters within the BBA numeric solution [55]. For example, a high correlation was found between the Pi coefficients and the principal point coordinates. Consequently, when removing Pi from the unknowns, xp and yp can somehow absorb the associated variation. In other words, users could leave Pi out of the calibration phase and obtain similar results. With these premises, a preliminary analysis was carried out to detect any correlation existing among I.O. parameters and to minimize it.

2.5. Accuracy Assessment

Accuracy assessment was aimed at: (a) testing the quality of the original dataset; (b) testing the accuracy of the photogrammetric measurements.
Triggs et al. [54] highlighted that accuracy of photogrammetric products depends on many factors: quality of the processed images, GSD and camera type. Consequently, all these factors were considered. The first factor was taken into account (trying to minimize its effects) using the image quality tool and selecting a scale-invariant feature transform (SIFT) approach while running BBA by SfM (see Section 2.3.1).
Errors affecting the final measures from oriented image blocks were evaluated with reference to the RMSE, computed separately for each error component and for both GCPs and CPs: RMSEE for the East coordinate, RMSEN for the North coordinate, RMSEH for the height coordinate, RMSET for the total 3D error and, finally, RMSEI for the (total) positioning error in the image space.
Agisoft PhotoScan Professional software can automatically and iteratively compute this parameter [48]. This software feature is important, permitting reiteration of trials that can be run selectively tuning all the involved parameters, including GCPs collection refinement.

2.6. PCA and Synthetic Index Generation

I.O. estimates and errors computed during BBA were compared to test their reciprocal relationship. I.O. estimates were preliminarily preprocessed by a self-developed R routine [66] aimed at extracting the most significant information by principal component analysis (PCA), based on the variance maximization principle [67]. PCA, probably the most popular multivariate statistical method, is intended for dimensionality reduction, obtained by removing redundant information from a multivariate dataset whose variables can be intercorrelated [67]. After detecting the most relevant components, it converts the original dataset into a new one consisting of independent and orthogonal vectors, called principal components (PCs) [67]. The first component provides the most information, describing the largest part of the input data inertia and absorbing (explaining) most of the data variance; the second component is orthogonal to the first and absorbs (explains) most of the remaining variance [68]. The same principle is used to obtain all the other components which, necessarily, represent a decreasing level of information as their position within the transformation increases [69]. Consequently, the first components compress most of the native information, making it possible to drastically reduce the dimensionality of the data by removing redundant content [62].
The singular value decomposition (SVD) approach [69] was applied to compute the principal components. The number of explanatory PCs was selected by Kaiser’s criterion [70], which suggests setting an eigenvalue threshold of 1.0. The selected PCs were weighted and linearly combined into a “synthetic index” (hereinafter called SI). SI was obtained as the weighted average of all significant components (Equation (4)), with weights directly extracted from the PCA procedure:
$$SI = \frac{\sum_{i} w_{i}\,Dim_{i}}{\sum_{i} w_{i}} \qquad (4)$$
where wi and Dimi are the weights and the principal components, respectively.
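The PCA/SI pipeline can be sketched as below. This is a generic reimplementation (the study used a self-developed R routine), and the choice of the component eigenvalues as the weights wi is an assumption, since the text only states that weights were extracted from the PCA procedure; the input data are synthetic:

```python
# Sketch of the PCA / synthetic-index step of Section 2.6: standardize
# the I.O. parameter estimates, extract principal components via SVD,
# keep those with eigenvalue > 1 (Kaiser's criterion) and combine them
# into the weighted average SI of Equation (4). Using eigenvalues as the
# weights w_i is an assumption; input data below are synthetic.
import numpy as np

def synthetic_index(X):
    """Return per-observation SI values from the significant PCs of X."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize columns
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # PCA via SVD
    eig = s**2 / (Z.shape[0] - 1)                     # component eigenvalues
    keep = eig > 1.0                                  # Kaiser's criterion
    scores = Z @ Vt.T[:, keep]                        # PC scores (Dim_i)
    w = eig[keep]                                     # assumed weights w_i
    return (scores * w).sum(axis=1) / w.sum()         # Equation (4)

# Synthetic dataset: five strongly intercorrelated "I.O. parameters"
# observed over 20 trials, mimicking the redundancy found in Table 5-9.
rng = np.random.default_rng(0)
base = rng.normal(size=(20, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(20, 1)) for _ in range(5)])
si = synthetic_index(X)
print(si.shape)  # one SI value per observation
```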

2.7. Predicting Accuracy of Measurements: Model Definition

Pearson’s coefficient (R) was computed between SI and the error components affecting final measurements from oriented blocks. The expectation was that SI could be a predictor of the error components (generically called RMSEj). According to the obtained R values (only high or moderate correlations were considered), an interpolation function was calibrated to predict the following errors: RMSEE, RMSEN, RMSEH, RMSET, RMSEI. SI was assumed as the independent variable (x, predictor) of the calibrated functions. A 2nd-order polynomial (Equation (5)) was found to fit all significant relationships well. Model parameters (a, b and c) were estimated by ordinary least squares for each investigated error (y). Goodness of fit was tested with reference to the coefficient of determination (R2):
$$y = ax^{2} + bx + c \qquad (5)$$
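The calibration of Equation (5) amounts to an ordinary least squares fit of a quadratic, followed by the R2 computation. A minimal sketch on synthetic (SI, RMSE) pairs, not the study's data:

```python
# OLS calibration of the 2nd-order polynomial of Equation (5), with the
# coefficient of determination R^2 as the goodness-of-fit measure.
# The (x, y) pairs below are synthetic, not the study's data.
import numpy as np

def fit_quadratic(x, y):
    """Fit y = a x^2 + b x + c by OLS; return (a, b, c) and R^2."""
    a, b, c = np.polyfit(x, y, deg=2)
    y_hat = a * x**2 + b * x + c
    ss_res = ((y - y_hat) ** 2).sum()        # residual sum of squares
    ss_tot = ((y - y.mean()) ** 2).sum()     # total sum of squares
    return (a, b, c), 1.0 - ss_res / ss_tot

x = np.array([0.2, 0.5, 0.9, 1.4, 2.0])     # hypothetical SI values
y = 0.3 * x**2 - 0.1 * x + 0.05             # exact quadratic, so R^2 = 1
coefs, r2 = fit_quadratic(x, y)
print(coefs, r2)
```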

3. Results

3.1. Accuracy of Photogrammetric Measurements

The suitability of the available images for generating a reliable photogrammetric product was assessed through the application of the image quality tool in Agisoft PhotoScan. All images showed a satisfactory value, presenting an average image quality index of 0.8 (higher than the threshold value set to 0.5); consequently, no image was removed.
The accuracy of photogrammetric measurements was assessed with reference to RMSET for both GCPs and CPs, tested against the number of GCPs used during BBA (Figure 6 and Figure 7). The highest RMSE values (for both GCPs and CPs) correspond to the solutions obtained using 1 or 2 GCPs. This confirms that a minimum of 3 points is needed to properly orient a 3D model in 3D space, recovering translation, rotation and scaling values [8]. Nevertheless, the SfM/SIFT-based approach can provide reasonable BBA solutions even when the number of GCPs is lower than the theoretical minimum of 3, just by exploiting position and attitude data from the low-quality RPAS GNSS/IMU system. In spite of this interesting feature of the new digital photogrammetry, it is evident that accuracy and processing time increase with the number of GCPs.
Figure 7 shows RMSET (for CPs) as computed while varying the number of GCPs during BBA. In this case, the CIG method is not reported. As before, the highest RMSE values were obtained when the number of GCPs was lower than 3, demonstrating that GNSS and IMU measures from the RPAS system, in spite of their low quality, can lead to a reasonable solution (0.5–1 m) that could be accepted for many applications. This appeared to be similar for both GCPs and CPs, making those RMSE values sufficiently representative of the actual accuracy when working in DG mode. For the goals of this work, this fact is secondary and must certainly be overcome by focusing attention on the best expected solutions. In general, one can say that a good BBA solution is one in which RMSEGCP and RMSECP are consistent with each other. With respect to Figure 6 and Figure 7, this situation occurs with around 14 well-distributed GCPs; additional GCPs appear to bring no further benefit.

3.2. Testing Relationship between Errors and I.O Parameters

Table 4 reports the main statistics computed for all the estimated I.O. parameters with respect to all processed datasets. It can be noted that the I.O. parameters can be grouped into two clusters: one, including xp, yp, B1, B2, P3 and P4, is characterized by a great variability; the other, including the remaining parameters, shows pretty similar values for all the computed metrics. Since all trials generated similar estimates of the radial and tangential distortion parameters, these can be considered strictly dependent on the camera, with no conditioning by other factors. Moreover, the Pi coefficients are known to be less significant than the radial ones (one or two orders of magnitude smaller [59]). The boxplots of Figures 9–13 confirm this fact.
A further synthetic analysis can come from the coefficient of variation (CV = standard deviation/mean × 100.0, %) of the I.O. parameter estimates. Results are reported in Figure 8. It can be noted that the focal length f and the Ki coefficients are the most stable parameters. Conversely, the xp, yp, Bi and Pi coefficients present a wide variability depending on operational conditions.
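The CV used here is the standard dimensionless stability measure; the sketch below computes it for a set of hypothetical focal length estimates (not the study's values):

```python
# Coefficient of variation used to rank the stability of the I.O.
# parameter estimates across flights: CV = sd / mean * 100 (%).
# The focal length estimates below are hypothetical.
import statistics

def coeff_of_variation(values):
    """Return the coefficient of variation, in percent."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

focal_estimates = [3.610, 3.612, 3.609, 3.611]  # hypothetical f estimates (mm)
cv_f = coeff_of_variation(focal_estimates)
print(f"CV(f) = {cv_f:.3f} %")  # a small CV indicates a stable parameter
```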
According to the boxplots of Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, a similar trend can be recognized for all I.O. parameters. Specifically, the October dataset (Figure 13) appears to contain no outliers for most parameters (f, K1–K2–K3–K4–P3–P4). In the other datasets, outliers were absent only for specific parameters: B1, P1 and P2 in the December dataset (Figure 9); yp and xp in the January and February datasets (Figure 10 and Figure 11); f, P4 and K2 in the March dataset (Figure 12).
The relationships among I.O. parameters were explored by computing the correlation matrices for all the datasets. Results are reported in Table 5, Table 6, Table 7, Table 8 and Table 9. The correlation matrix contains positive and negative values in the range [−1, +1]: positive values mean that a direct relationship between variables is present; negative values indicate an inverse linkage. The higher the absolute values, the stronger the correlation. All the datasets showed pretty similar values, which can be summarized as follows: focal length was highly correlated with the radial aberration coefficients; xp was moderately correlated with yp and, in most cases, with the Pi coefficients; yp was poorly/moderately correlated with all the other parameters (March dataset excluded); the skew parameters (B1, B2) were poorly/moderately correlated with the other parameters (March dataset excluded); the Ki coefficients were mutually correlated and, in addition, correlated with the focal length; a moderate correlation was found between yp and P2. The January and February datasets showed a low correlation of Ki with yp and P2. In spite of all specific situations, these results suggest that I.O. parameters contain redundant information. Consequently, a PCA was performed, showing that most of the decorrelated information could be explained by a few PCs, whose identification was operated by Kaiser’s criterion.
The errors affecting final measures from oriented image blocks were evaluated with reference to the above-mentioned RMSEj. The analysis was separately performed for both GCPs and CPs and for all the datasets (Table 10). It showed similar values for all the datasets. RMSEI defines the reprojection error, computed as the mean distance (in the image space) between the position expected for a tie point that participated in solving the block orientation and the one resulting from the reprojection of the corresponding 3D object point after image resection. To minimize alignment issues, the maximum error value should be <1 pixel, according to [44]. This condition was respected for all the blocks: the maximum detected values were 0.31 and 0.36 for GCPs and CPs (Table 11), respectively. Conversely, RMSEN, RMSEH and RMSET define the distance (in the object space) between the expected position of a GCP (or CP) and the one determined by photogrammetric measurement. The smaller this value, the higher the final accuracy. These parameters are known to be strongly influenced by the user’s experience in recognizing the proper markers (shape, color, stability, etc.) within the images and performing the corresponding collimation. The highest RMSET values were detected in the December (0.98 m) and March (1.01 m) datasets.
The correlation matrices were computed to analyze possible relationships among the error components for both GCPs and CPs (Figure 10 and Figure 11). Results showed a high direct correlation among all the variables, with the exclusion of the reprojection error (RMSEI).

3.3. Calibrating Predictive Models

Camera I.O. parameter estimates were analyzed by PCA to detect and remove redundant information. As before, the December, January, February and October blocks were processed separately to identify those PCs that, for each dataset, could synthesize most of the original information. This was done with reference to the correlation plots relating the I.O. parameters to the PCs (Figure 14). Application of Kaiser’s criterion showed that the first three PCs were sufficient to describe most of the I.O. parameters variance for the December (Figure 14a), January (Figure 14b) and October (Figure 14d) datasets. Conversely, five PCs were needed to explain most of the information resident in the I.O. parameter estimates from the February dataset (Figure 14c). These results confirmed what the basic statistics of Table 5, Table 6, Table 7, Table 8 and Table 9 had already shown: the I.O. parameters are highly intracorrelated.
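The Kaiser-based selection described above can be sketched as follows; the input matrix is a random placeholder standing in for the 31 I.O. parameter estimates of one dataset.

```python
import numpy as np

# Hypothetical I.O. estimates: 31 BBA solutions (chunks) x 13 parameters;
# random placeholders for the values estimated by the software.
rng = np.random.default_rng(1)
X = rng.normal(size=(31, 13))

# PCA on standardized variables is equivalent to eigen-decomposing the
# correlation matrix. Kaiser's criterion keeps the components whose
# eigenvalue exceeds 1, i.e., those explaining more variance than any
# single standardized input parameter.
corr = np.corrcoef(X, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # descending order
kept = int(np.sum(eigenvalues > 1.0))
explained = eigenvalues[:kept].sum() / eigenvalues.sum()
```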
The most significant PCs, as resulting from the previous analysis, were aggregated by Equation (4) to compute SI, which was used as the predictor within the predictive functions of RMSE. SI values are reported in Table 12. Before calibrating the predictive functions relating SI to RMSE, Pearson’s coefficient was computed between SI and all the available RMSE values for both GCPs and CPs (Table 13).
Only the situations showing a moderate (0.5 < R < 0.7) or a high (R > 0.7) Pearson’s coefficient were modeled in the following step [71]. Consequently, the relationships of SI with the GCPs RMSEE (moderate correlation) and RMSEN (high correlation) were modeled. Conversely, since the CPs RMSEE showed a low correlation with SI, it was excluded from modeling.
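A minimal sketch of this screening rule, with hypothetical SI and RMSE values:

```python
import numpy as np

# Hypothetical SI values (one per calibration dataset) and the
# corresponding RMSE component (metres).
si = np.array([1.2, 2.8, 3.5, 4.1])
rmse_n = np.array([0.04, 0.07, 0.09, 0.12])

# Pearson's coefficient between predictor and error component.
r = np.corrcoef(si, rmse_n)[0, 1]

# Screening rule: only moderate (0.5 <= |r| < 0.7) or high (|r| >= 0.7)
# correlations are passed to the modeling step.
selected = abs(r) >= 0.5
```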
A 2nd-order polynomial was found to best fit the data. Results concerning GCPs and CPs are shown in Figure 15 and Figure 16, respectively. The graphs also report the coefficient of determination (R2). For GCPs, the lowest R2 value was 0.68 (RMSEE), while the CPs RMSEN showed the highest R2 (0.99).
Table 14 reports the coefficients of the significant 2nd order polynomial predictive functions for both CPs and GCPs.
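The calibration of such a 2nd-order predictive function, together with its R2, can be reproduced as follows; the SI/RMSE pairs are hypothetical, not the values of Table 12.

```python
import numpy as np

# Hypothetical calibration pairs: SI predictor vs. observed RMSE (m).
si = np.array([1.2, 2.8, 3.5, 4.1])
rmse = np.array([0.050, 0.038, 0.047, 0.071])

# Least-squares fit of RMSE = a*SI**2 + b*SI + c.
a, b, c = np.polyfit(si, rmse, deg=2)

# Coefficient of determination (R2) of the fitted polynomial.
pred = np.polyval([a, b, c], si)
ss_res = float(np.sum((rmse - pred) ** 2))
ss_tot = float(np.sum((rmse - rmse.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
```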

3.4. Validating Predictive Models

The reliability and accuracy of the proposed predictive method were tested by applying the calibrated models to all the available datasets, March included. With specific reference to the March dataset, PCA was applied to identify the significant PCs able to explain most of the I.O. parameters variance. Two PCs were found to satisfy Kaiser’s criterion and, consequently, they were used to compute the March SI (3.060). This SI value was then used to predict RMSEj according to Equation (5), applied with the coefficients from Table 14. To summarize model performances, all the RMSEj estimates were compared, by differencing, with the corresponding values from the BBA solutions (assumed as reference). The RMSE of these differences was then computed for each estimated RMSEj. Results are reported in Table 15.
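The validation step can be sketched as follows; the polynomial coefficients and the BBA reference value are hypothetical placeholders, while the SI value (3.060) is the one reported above for March.

```python
import numpy as np

# Hypothetical calibrated coefficients (a, b, c) of Equation (5) for one
# error component; the SI value is the March one reported in the text.
a, b, c = 0.004, -0.010, 0.045
si_march = 3.060

# Predicted RMSE for the validation dataset.
rmse_pred = np.polyval([a, b, c], si_march)

# Hypothetical reference value from the BBA solution; performance is
# summarized by the difference between prediction and reference.
rmse_bba = 0.050
diff = rmse_pred - rmse_bba
```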

4. Discussion

This research was intended to explore the dependency of the accuracy of measures from RPAS-based SfM 3D models on the camera I.O. parameter estimates. Several researchers have investigated this issue, detecting a strong influence of camera parameters on the final accuracy of obtainable measures. Nevertheless, no operative procedure has been proposed to predict the potentially achievable accuracy once the camera parameters are known. In this work, the authors tried to fill this gap by developing a simple method that, in preliminary tests, provided promising results. The proposed approach integrates uni- and multivariate statistics to investigate and remove the correlated information resident in the camera I.O. parameters as estimated during BBA.
The study was based on five photogrammetric RPAS surveys operated in December 2018 and in January, February, March and October 2019 (Table 1). A ground survey campaign was also carried out to position 30 well-distributed GCPs. All flight missions were performed with a DJI Inspire 1 drone equipped with a DJI ZenMuse X3 camera. To ensure the comparability of the datasets, all missions were operated according to the same flight plan (path, speed and height) and BBA was performed setting the same parameters within Agisoft PhotoScan. The same skilled user was in charge of processing the data and optimizing the photogrammetric solution.
A first step was aimed at testing image block quality and removing “bad” images. A second step was aimed at solving BBA while iteratively changing the number of GCPs. A total of 155 chunks (31 chunks for each mission) were analyzed. It is worth recalling that this step was not aimed at selecting the best spatial strategy for positioning GCPs; rather, it was devoted to evaluating the achievable accuracy of the final measures and its dependence on the GCPs number. As shown in Figure 6 and Figure 7, the highest RMSET values correspond to those solutions where direct georeferencing plays the leading role, the number of GCPs being lower than the theoretical minimum value (3, [8]). By comparing the GCPs and CPs RMSET trends against the number of GCPs, the optimal minimum number of GCPs was found to be around 14 (Figure 6 and Figure 7). All the datasets showed the same GCPs and CPs RMSET trend. Only a few dissimilarities were found, probably due to different environmental conditions (e.g., lighting, weather, vegetation phenology). After this preliminary investigation, which confirmed the comparability of the processed datasets, a deeper investigation addressed the error components separately (RMSEj). Some basic statistics (e.g., maximum, minimum, mean and standard deviation) of the I.O. parameter estimates were also computed for each processed dataset (Table 10). All of them showed similar statistics. Results confirmed what was reported in previous works: Agisoft PhotoScan, like other photogrammetric software in general, cannot estimate stable I.O. parameters when initial conditions (e.g., GCPs number) change [29,59]. In fact, the statistics showed a high variability of the solutions. Nevertheless, the order of magnitude remained the same in all datasets (Table 4), with a similar trend, as shown in the boxplots of Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13.
Moreover, the obtained orders of magnitude are coherent with those reported by other researchers [59], who showed that the Pi parameters are one or two orders of magnitude smaller than the radial ones and that the Ki present the most significant deviations.
Correlations among the I.O. parameters were then investigated, revealing a high degree of intracorrelation. The correlation values proved consistent with those reported in the literature (Table 5, Table 6, Table 7, Table 8 and Table 9; [59]). The correlated information was aggregated by PCA [60]; it was found that two to five PCs are generally enough to explain most of the variance of the I.O. parameter estimates.
With these premises, an index (SI) was defined to summarize the decorrelated information aggregated by the first PCs. SI was assumed as the predictor of RMSEj and the corresponding predictive functions were calibrated (Figure 15 and Figure 16). Model calibration was performed with reference to the December, January, February and October datasets, while the March dataset was used to validate the predictive models.
In spite of the few observations used for calibration, the proposed predictive functions showed satisfying results when applied to the validation set, generating RMSEj estimates very close to the actual values computed during BBA by Agisoft PhotoScan. Results must be regarded as preliminary, but they are certainly encouraging.

5. Conclusions

The present study was aimed at exploring the impact of the camera I.O. parameters on the accuracy of the final photogrammetric 3D models. The possibility of calibrating predictive models for error estimates, given the I.O. parameter estimates, was the focal point of this research. The proposed procedure was tested on the study area of Torre a Mare (Apulia Region) through the acquisition of five photogrammetric datasets on different dates throughout the year.
After investigating image quality and the achieved BBA, the resulting camera I.O. parameters were explored to test their possible intracorrelation and their potential relationships with the accuracy of the final photogrammetric measures. A high correlation among most of the parameters was found, and a high level of information reduction could be achieved by PCA. Tests proved that two to five PCs are enough to explain most of the I.O. parameters variance; PC selection was operated according to Kaiser’s criterion. The strongest PCs, properly aggregated into SI, proved able to predict the final errors in photogrammetric measures. SI was, in fact, found to be a good predictor of errors when included, as the independent variable, within a 2nd-order polynomial function. The predictive functions dependent on SI were applied to all datasets, including the March validation one, obtaining satisfying predictions, as shown in Table 15.
Although the proposed method seems promising and the predictive function estimates satisfying, further investigations must be carried out, with the main aim of improving the generalization capability of the models and testing their dependence on other operational conditions, such as different areas, oblique acquisitions and GCP spatial distribution.

Author Contributions

Conceptualization: E.B.M. and E.T.; methodology, A.C. and E.T.; software, A.C. and M.S.; validation, E.B.M. and A.C.; formal analysis, A.C. and E.T.; investigation, A.C. and E.B.M.; resources, M.S.; data curation, M.S.; writing—original draft preparation, A.C.; supervision, E.B.M. and E.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive and valuable suggestions on the earlier drafts of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gold, D.P.; Parizek, R.R.; Alexander, S.A. Analysis and application of ERTS-1 data for regional geological mapping. In Proceedings of the First Symposium of Significant Results obtained from the Earth Resources Technology Satellite-1, NASA, New Carrollton, MD, USA, 5–9 March 1973; Volume 1, pp. 231–246. [Google Scholar]
  2. Hooke, J.M.; Horton, B.P.; Moore, J.; Taylor, M.P. Upper river Severn (Caersws) channel study. In Report to the Countryside Council for Wales; University of Portsmouth: Portsmouth, UK, 1994. [Google Scholar]
  3. Evans, I.S. World-wide variations in the direction and concentration of cirque and glacier aspects. Geogr. Ann. Ser. A Phys. Geogr. 1977, 59, 151–175. [Google Scholar] [CrossRef]
  4. Drăguț, L.; Eisank, C. Object representations at multiple scales from digital elevation models. Geomorphology 2011, 129, 183–189. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Capolupo, A.; Pindozzi, S.; Okello, C.; Boccia, L. Indirect field technology for detecting areas object of illegal spills harmful to human health: Application of drones, photogrammetry and hydrological models. Geospat. Heal. 2014, 8, 699. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Medjkane, M.; Maquaire, O.; Costa, S.; Roulland, T.; Letortu, P.; Fauchard, C.; Antoine, R.; Davidson, R. High-resolution monitoring of complex coastal morphology changes: Cross-efficiency of SfM and TLS-based survey (Vaches-Noires cliffs, Normandy, France). Landslides 2018, 15, 1097–1108. [Google Scholar] [CrossRef]
  7. Rieke-Zapp, D.H.; Wegmann, H.; Santel, F.; Nearing, M.A. Digital photogrammetry for measuring soil surface roughness. In Proceedings of the American Society of Photogrammetry & Remote Sensing 2001 Conference–Gateway to the New Millennium’, St Louis, MO, USA, 23–27 April 2001; American Society of Photogrammetry & Remote Sensing: Bethesda, MD, USA, 2001. [Google Scholar]
  8. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomatics 2013, 6, 1–15. [Google Scholar] [CrossRef]
  9. Salamí, E.; Barrado, C.; Pastor, E. UAV Flight Experiments Applied to the Remote Sensing of Vegetated Areas. Remote Sens. 2014, 6, 11051–11081. [Google Scholar] [CrossRef] [Green Version]
  10. Capolupo, A.; Pindozzi, S.; Okello, C.; Fiorentino, N.; Boccia, L. Photogrammetry for environmental monitoring: The use of drones and hydrological models for detection of soil contaminated by copper. Sci. Total. Environ. 2015, 514, 298–306. [Google Scholar] [CrossRef]
  11. Manfreda, S.; McCabe, M.F.; Miller, P.E.; Lucas, R.M.; Madrigal, V.P.; Mallinis, G.; Ben Dor, E.; Helman, D.; Estes, L.; Ciraolo, G.; et al. On the Use of Unmanned Aerial Systems for Environmental Monitoring. Remote Sens. 2018, 10, 641. [Google Scholar] [CrossRef] [Green Version]
  12. Capolupo, A.; Nasta, P.; Palladino, M.; Cervelli, E.; Boccia, L.; Romano, N. Assessing the ability of hybrid poplar for in-situ phytoextraction of cadmium by using UAV-photogrammetry and 3D flow simulator. Int. J. Remote. Sens. 2018, 39, 5175–5194. [Google Scholar] [CrossRef]
  13. Palladino, M.; Nasta, P.; Capolupo, A.; Romano, N. Monitoring and modelling the role of phytoremediation to mitigate non-point source cadmium pollution and groundwater contamination at field scale. Ital. J. Agron. 2018, 13, 59–68. [Google Scholar]
  14. Capolupo, A.; Kooistra, L.; Boccia, L. A novel approach for detecting agricultural terraced landscapes from historical and contemporaneous photogrammetric aerial photos. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 800–810. [Google Scholar] [CrossRef]
  15. Cramer, M.; Przybilla, H.-J.; Zurhorst, A. UAV cameras: Overview and geometric calibration benchmark. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 85–92. [Google Scholar] [CrossRef] [Green Version]
  16. Kraft, T.; Geßner, M.; Meißner, H.; Przybilla, H.J.; Gerke, M. Introduction of a photogrammetric camera system for rpas with highly accurate gnss/imu information for standardized workflows. In Proceedings of the EuroCOW 2016, the European Calibration and Orientation Workshop (Volume XL-3/W4), Lausanne, Switzerland, 10–12 February 2016; pp. 71–75. [Google Scholar]
  17. Caroti, G.; Piemonte, A.; Nespoli, R. UAV-Borne photogrammetry: A low cost 3D surveying methodology for cartographic update. In Proceedings of the MATEC Web of Conferences, Seoul, South Korea, 22–25 August 2017; EDP Sciences: Les Ulis, France, 2017; Volume 120, p. 9005. [Google Scholar]
  18. Saponaro, M.; Capolupo, A.; Tarantino, E.; Fratino, U. Comparative Analysis of Different UAV-Based Photogrammetric Processes to Improve Product Accuracies; Lecture Notes in Computer Science; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2019; pp. 225–238. [Google Scholar]
  19. Mesas-Carrascosa, F.-J.; Torres-Sánchez, J.; Rumbao, I.C.; García-Ferrer, A.; Peña-Barragan, J.M.; Borra-Serrano, I.; López-Granados, F. Assessing Optimal Flight Parameters for Generating Accurate Multispectral Orthomosaicks by UAV to Support Site-Specific Crop Management. Remote Sens. 2015, 7, 12793–12814. [Google Scholar] [CrossRef] [Green Version]
  20. Saponaro, M.; Tarantino, E.; Reina, A.; Furfaro, G.; Fratino, U. Assessing the Impact of the Number of GCPS on the Accuracy of Photogrammetric Mapping from UAV Imagery. Baltic Surv. 2019, 10, 43–51. [Google Scholar]
  21. Padró, J.-C.; Muñoz, F.-J.; Planas, J.; Pons, X. Comparison of four UAV georeferencing methods for environmental monitoring purposes focusing on the combined use with airborne and satellite remote sensing platforms. Int. J. Appl. Earth Obs. Geoinf. 2019, 75, 130–140. [Google Scholar] [CrossRef]
  22. Rangel, J.M.G.; Gonçalves, G.; Pérez, J.A. The impact of number and spatial distribution of GCPs on the positional accuracy of geospatial products derived from low-cost UASs. Int. J. Remote Sens. 2018, 39, 7154–7171. [Google Scholar] [CrossRef]
  23. Lumban-Gaol, Y.A.; Murtiyoso, A.; Nugroho, B.H. Investigations on The Bundle Adjustment Results From Sfm-Based Software For Mapping Purposes. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 623–628. [Google Scholar] [CrossRef] [Green Version]
  24. Benassi, F.; Dall’Asta, E.; Diotri, F.; Forlani, G.; Di Cella, U.M.; Roncella, R.; Santise, M. Testing Accuracy and Repeatability of UAV Blocks Oriented with GNSS-Supported Aerial Triangulation. Remote Sens. 2017, 9, 172. [Google Scholar] [CrossRef] [Green Version]
  25. Salvi, J.; Armangué, X.; Batlle, J. A comparative review of camera calibrating methods with accuracy evaluation. Pattern Recognit. 2002, 35, 1617–1635. [Google Scholar] [CrossRef] [Green Version]
  26. Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. Int. Arch. Photogramm. Remote Sens. 2006, 36, 266–272. [Google Scholar]
  27. Warner, W.S.; Carson, W.W. Improving Interior Orientation For Asmall Standard Camera. Photogramm. Rec. 2006, 13, 909–916. [Google Scholar] [CrossRef]
  28. Perez, M.; Agüera, F.; Carvajal, F. Digital camera calibration using images taken from an unmanned aerial vehicle. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2012, 1, 167–171. [Google Scholar] [CrossRef] [Green Version]
  29. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  30. Pix4D—Drone Mapping Software. Swiss Federal Institute of Technology Lausanne, Route Cantonale, Switzerland. Available online: https://pix4d.com/ (accessed on 14 November 2014).
  31. Pierrot, D.M.; Clery, I. Apero, An open source bundle adjustment software for automatic calibration and orientation of set of images. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXVIII–5/W16, Proceedings of ISPRS Workshop, Trento, Italy, 2–4 March 2011. [Google Scholar]
  32. González-Aguilera, D.; López-Fernández, L.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; Guerrero, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; et al. GRAPHOS—Open-source software for photogrammetric applications. Photogramm. Rec. 2018, 33, 11–29. [Google Scholar] [CrossRef] [Green Version]
  33. VisualSFM. Available online: http://www.cs.washington.edu/homes/ccwu/vsfm/ (accessed on 18 May 2013).
  34. Rupnik, E.; Daakir, M.; Deseilligny, M.P. MicMac—A free, open-source solution for photogrammetry. Open Geospat. Data Softw. Stand. 2017, 2, 1–9. [Google Scholar] [CrossRef]
  35. Ballarin, M. Fotogrammetria Aerea Low Cost in Archeologia. Ph.D. Thesis, Politecnico di Milano, Milano, Italy, December 2014. [Google Scholar]
  36. Oniga, V.-E.; Pfeifer, N.; Loghin, A.-M. 3D Calibration Test-Field for Digital Cameras Mounted on Unmanned Aerial Systems (UAS). Remote. Sens. 2018, 10, 2017. [Google Scholar] [CrossRef] [Green Version]
  37. De Lucia, A. La Comunità neolitica di Cala Colombo presso Torre a mare, Bari contributo del Gruppo interdisciplinare di storia delle civiltà antiche dell’Università degli studi di Bari; Società di storia patria per la Puglia: Bari BA, Italy, 1977. [Google Scholar]
  38. Geniola, A. La Comunità Neolitica di cala Colombo Presso Torre a Mare (Bari). Archeologia e Cultura. Rivista di Antropologia; Istituto Italiano di Antropologia: Roma, Italy, 1974; Volume 59, pp. 189–275. [Google Scholar]
  39. ENAC. Regolamento Mezzi Aerei a Pilotaggio Remoto, 3rd ed.; ENAC, Ed.; Italian Civil Aviation Authority (ENAC): Rome, Italy, 2019.
  40. Stöcker, C.; Bennett, R.; Nex, F.; Gerke, M.; Zevenbergen, J. Review of the Current State of UAV Regulations. Remote. Sens. 2017, 9, 459. [Google Scholar] [CrossRef] [Green Version]
  41. DJI. Dà-Jiāng Innovations. 2020. Available online: https://www.dji.com (accessed on 11 December 2019).
  42. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure. Remote. Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef] [Green Version]
  43. Gagliolo, S.; Fagandini, R.; Passoni, D.; Federici, B.; Ferrando, I.; Pagliari, D.; Pinto, L.; Sguerso, D. Parameter optimization for creating reliable photogrammetric models in emergency scenarios. Appl. Geomat. 2018, 10, 501–514. [Google Scholar] [CrossRef]
  44. Beretta, F.; Shibata, H.; Cordova, R.; Peroni, R.L.; Azambuja, J.; Costa, J.F.C.L. Topographic modelling using UAVs compared with traditional survey methods in mining. REM Int. Eng. J. 2018, 71, 463–470. [Google Scholar] [CrossRef]
  45. Barazzetti, L.; Scaioni, M.; Remondino, F. Orientation and 3D modelling from markerless terrestrial images: Combining accuracy with automation. Photogramm. Rec. 2010, 25, 356–381. [Google Scholar] [CrossRef]
  46. Yao, H.; Qin, R.; Chen, X. Unmanned Aerial Vehicle for Remote Sensing Applications—A Review. Remote Sens. 2019, 11, 1443. [Google Scholar] [CrossRef] [Green Version]
  47. Saponaro, M.; Tarantino, E.; Fratino, U. Generation of 3D surface models from UAV imagery varying flight patterns and processing parameters. CENTRAL Eur. Symp. Thermophys. 2019 (CEST) 2019, 2116, 280009. [Google Scholar] [CrossRef]
  48. Mayer, C.; Pereira, L.M.G.; Kersten, T.P. A Comprehensive Workflow to Process UAV Images for the Efficient Production of Accurate Geo-information. In Proceedings of the IX National Conference on Cartography and Geodesy, Amadora, Portugal, 25–26 October 2018. [Google Scholar]
  49. Agisoft, L.L.C. Agisoft Photoscan User Manual, Professional ed.; Agisoft LLC: St Petersburg, Russia, 2014. [Google Scholar]
  50. James, M.R.; Robson, S.; D’Oleire-Oltmanns, S.; Niethammer, U. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment. Geomorphology 2017, 280, 51–66. [Google Scholar] [CrossRef] [Green Version]
  51. Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y. Study of lever-arm effect using embedded photogrammetry and on-board gps receiver on uav for metrological mapping purpose and proposal of a free ground measurements calibration procedure. In ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences; International Society for Photogrammetry and Remote Sensing: Lausanne, Switzerland, 2016. [Google Scholar]
  52. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  53. Lowe, G. SIFT-The Scale Invariant Feature Transform. Int. J. 2004, 2, 91–110. [Google Scholar]
  54. Gruen, A.; Beyer, H.A. System calibration through self-calibration. In Calibration and Orientation of Cameras in Computer Vision; Springer: Berlin/Heidelberg, Germany, 2001; pp. 163–193. [Google Scholar]
  55. Triggs, B.; Mclauchlan, P.F.; Hartley, R.; FitzGibbon, A.W. Bundle Adjustment—A Modern Synthesis. In Lecture Notes in Computer Science; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2000; Volume 1883, pp. 298–372. [Google Scholar]
  56. Saponaro, M.; Tarantino, E.; Fratino, U. Geometric Accuracy Evaluation of Geospatial Data Using Low-Cost Sensors on Small UAVs. In Proceedings of the Lecture Notes in Computer Science; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2018; pp. 364–374. [Google Scholar]
  57. Villanueva, J.K.S.; Blanco, A.C. Optimization of ground control point (gcp) configuration for unmanned aerial vehicle (uav) survey using structure from motion (SFM). ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2019, 167–174. [Google Scholar] [CrossRef] [Green Version]
  58. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle. Measurement 2017, 98, 221–227. [Google Scholar] [CrossRef]
  59. Smith, D.; Heidemann, H.K. New standard for new era: Overview of the 2015 ASPRS positional accuracy standards for digital geospatial data. Photogramm. Eng. Remote Sens. 2015, 81, 173–176. [Google Scholar]
  60. Fraser, C.S. Automatic Camera Calibration in Close Range Photogrammetry. Photogramm. Eng. Remote Sens. 2013, 79, 381–388. [Google Scholar] [CrossRef] [Green Version]
  61. Griffiths, D.; Burningham, H. Comparison of pre- and self-calibrated camera calibration models for UAS-derived nadir imagery for a SfM application. Prog. Phys. Geogr. Earth Environ. 2018, 43, 215–235. [Google Scholar] [CrossRef] [Green Version]
  62. Duane, C.B. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  63. Fryer, J.G.; Brown, D.C. Lens distortion for close-range photogrammetry. Photogrammetric Engineering and Remote Sensing; Geodetic Services, Inc.: Melbourne, FL, USA, 1986; Volume 52, pp. 51–58. [Google Scholar]
  64. Chandler, J.H.; Fryer, J.G.; Jack, A. Metric capabilities of low-cost digital cameras for close range surface measurement. Photogramm. Rec. 2005, 20, 12–26. [Google Scholar] [CrossRef]
  65. Fryer, J.G. Camera calibration. In Close Range Photogrammetry and Machine Vision; Atkinson, K.B., Ed.; Wittles Publishing: Caithness, Scotland, 1996; pp. 156–179. [Google Scholar]
  66. Chambers, J. Software for Data Analysis; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  67. Abdi, H.; Williams, L.J. Principal component analysis. In Wiley Interdisciplinary Reviews: Computational Statistics; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2010; Volume 2, pp. 433–459. [Google Scholar] [CrossRef]
  68. Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52. [Google Scholar] [CrossRef]
  69. Abdi, H. Singular value decomposition (SVD) and generalized singular value decomposition. In Encyclopedia of Measurement and Statistics; Salkind, N., Ed.; Sage: Thousand Oaks, CA, USA, 2007; pp. 907–912. [Google Scholar]
  70. Kaiser, H.F. The Application of Electronic Computers to Factor Analysis. Educ. Psychol. Meas. 1960, 20, 141–151. [Google Scholar] [CrossRef]
  71. Mutanga, O.; Skidmore, A.K.; Kumar, L.; Ferwerda, J. Estimating tropical pasture quality at canopy level using band depth analysis with continuum removal in the visible domain. Int. J. Remote. Sens. 2005, 26, 1093–1108. [Google Scholar] [CrossRef]
Figure 1. Study area. Colored orthophoto was generated during this work from one of the available datasets. In red, ground control points (GCPs) locations. They were surveyed by network real time kinematic (NRTK) Global Navigation Satellite System (GNSS) and georeferenced in the RDN2008/UTM Zone 33N (NE) reference frame (EPSG: 6708). In the background (grayscale) an orthophoto dated 2016 is shown as provided by the WMS Service of SIT Puglia.
Figure 2. Details of the coast stretch. (a) Example of a collapsed area mainly due to an undermined foot of the rock complex; (b) subhorizontal rocky surface characterized by visible fractures.
Figure 3. Flight mission details: covered area (blue), RPAS flight path (yellow). “S” and “H” indicate the “starting point” of the mission and the “home station point”, respectively.
Figure 4. Operational workflow. RPAS—remotely piloted aerial systems; progressive cross-validation; PCA—principal component analysis.
Figure 5. Operational workflow. GCPs—ground control points; DEC—December; JAN—January; FEB—February; MAR—March; OCT—October.
Figure 6. RMSET (ground control points (GCPs)) vs. GCPs number. Red line: January; yellow line: March; dark blue line: December; light blue line: October; green line: February.
Figure 7. RMSET (check points (CPs)) vs. GCPs number. Red line: January; yellow line: March; dark blue line: December; light blue line: October; green line: February.
Figure 8. Coefficients of variation (CV, %) computed for all I.O. parameters for all the processed datasets.
Figure 9. Boxplot of I.O. parameters from the December dataset. f—focal length; xp and yp coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions.
Figure 10. Boxplot of I.O. parameters from the January dataset. f—focal length; xp and yp coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions.
Figure 11. Boxplot of I.O. parameters from the February dataset. f—focal length; xp and yp coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions.
Figure 12. Boxplot of I.O. parameters from the March dataset. f—focal length; xp and yp coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions.
Figure 13. Boxplot of I.O. parameters from the October dataset (f—focal length; xp, yp—coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions).
Figure 14. Correlation plot between I.O. parameters and the principal components (PC, Dim_i). (a) December; (b) January; (c) February; (d) October. (f—focal length; xp, yp—coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions).
Figure 15. Significant predictive functions of RMSE_E and RMSE_N calculated on GCPs. R² is the coefficient of determination.
Figure 16. Significant predictive functions of RMSE_N, RMSE_H and RMSE_T calculated on CPs. R² is the coefficient of determination.
Table 1. Photogrammetric datasets acquired during the five remotely piloted aerial systems (RPAS) campaigns and related ground sample distance (GSD) values.

| Acquisition Date | # Images | GSD (m/pix) |
|---|---|---|
| December 12, 2018 | 77 | 0.041 |
| January 8, 2019 | 77 | 0.047 |
| February 19, 2019 | 77 | 0.048 |
| March 16, 2019 | 77 | 0.041 |
| October 16, 2019 | 77 | 0.042 |
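As a sanity check on the GSD values in Table 1, ground sample distance follows the standard relation GSD = (pixel pitch × flying height) / focal length. The sketch below illustrates this in Python; the pixel pitch, altitude and focal length are illustrative placeholders, not values reported in the paper.

```python
# Ground sample distance from camera geometry.
# All numeric inputs below are illustrative assumptions, not values from the paper.
def gsd(pixel_pitch_m: float, altitude_m: float, focal_length_m: float) -> float:
    """GSD (m/pix) = pixel pitch * flying height / focal length."""
    return pixel_pitch_m * altitude_m / focal_length_m

# e.g. a 2.4 um pixel pitch, 60 m flying height, 3.6 mm focal length
print(round(gsd(2.4e-6, 60.0, 3.6e-3), 3))  # prints 0.04
```

At these (hypothetical) settings the result lands in the same 0.04 m/pix range as the campaigns in Table 1.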
Table 2. Agisoft PhotoScan software parameters used during processing of the photogrammetric datasets.

| Agisoft PhotoScan Parameter | Value |
|---|---|
| Coordinate system | RDN2008/UTM Zone 33N (NE) (EPSG: 6708) |
| Initial principal point position (Xp, Yp) | (0, 0) |
| Camera positioning accuracy | 3 m |
| Camera accuracy (attitude) | 10 deg |
| 3D marker accuracy (object space) | 0.02 m |
| Marker accuracy (image space) | 0.5 pixel |
| GPS/INS offset vector | Δx = 0.005 ± 0.002 m; Δy = 0.100 ± 0.01 m; Δz = 0.250 ± 0.01 m |
Table 3. List of markers chosen and added as control points (CPs) in each implementation.

| # GCPs | Label | # GCPs | Label |
|---|---|---|---|
| 29 GCPs | 3R0024 | 14 GCPs | 3R0019 |
| 28 GCPs | 3R0026 | 13 GCPs | 3R0031 |
| 27 GCPs | 3R0030 | 12 GCPs | 3R0017 |
| 26 GCPs | 3R0018 | 11 GCPs | 3R0027 |
| 25 GCPs | 3R0004 | 10 GCPs | 3R0025 |
| 24 GCPs | 3R0013 | 9 GCPs | 3R0022 |
| 23 GCPs | 3R0010 | 8 GCPs | 3R0014 |
| 22 GCPs | 3R0005 | 7 GCPs | 3R0007 |
| 21 GCPs | 3R0020 | 6 GCPs | 3R0023 |
| 20 GCPs | 3R0021 | 5 GCPs | 3R0015 |
| 19 GCPs | 3R0029 | 4 GCPs | 3R0003 |
| 18 GCPs | 3R0011 | 3 GCPs | 3R0012 |
| 17 GCPs | 3R0001 | 2 GCPs | 3R0016 |
| 16 GCPs | 3R0008 | 1 GCP | 3R0028 |
| 15 GCPs | 3R0002 | 0 GCPs | 3R0009 |
Table 4. Statistics of the I.O. parameter estimates computed over the 31 repetitions for the 5 processed datasets (December 2018, January 2019, February 2019, March 2019, October 2019); Stat.—statistic; Max—maximum; Min—minimum; SD—standard deviation; f—focal length; xp, yp—coordinates of the principal point; B1, B2—skew coefficients; K1, K2, K3, K4—radial distortion coefficients; P1, P2, P3, P4—decentering distortion coefficients.

| Survey | Stat. | f (pix) | xp (pix) | yp (pix) | B1 | B2 | K1 | K2 | K3 | K4 | P1 | P2 | P3 | P4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Dec. | Max | 2366.210 | −0.190 | 6.850 | 2.720 | 0.160 | −0.1300 | 0.1400 | −0.0300 | 0.0140 | 0.0004 | −0.0002 | −0.0900 | 0.3100 |
| | Min | 2221.830 | −3.200 | 3.480 | −0.730 | −0.960 | −0.1400 | 0.1100 | −0.0500 | 0.0084 | −0.0004 | −0.0008 | −0.4900 | 0.1900 |
| | Mean | 2285.310 | −2.310 | 4.880 | 0.800 | −0.440 | −0.1400 | 0.1200 | −0.0400 | 0.0106 | 0.0003 | −0.0005 | −0.3700 | 0.2700 |
| | SD | 31.570 | 0.820 | 0.770 | 0.990 | 0.310 | 0.0000 | 0.0100 | 0.0030 | 0.0012 | 0.0002 | 0.0001 | 0.0800 | 0.0400 |
| Jan. | Max | 2358.480 | −1.470 | 5.820 | 2.690 | 1.210 | −0.1300 | 0.1400 | −0.0372 | 0.0143 | 0.0007 | −0.0003 | −0.0018 | 0.3300 |
| | Min | 2262.760 | −4.660 | 4.350 | −0.030 | −1.270 | −0.1400 | 0.1200 | −0.0486 | 0.0100 | −0.0001 | −0.0006 | −0.5856 | −0.0300 |
| | Mean | 2319.960 | −3.680 | 5.180 | 0.930 | −0.540 | −0.1400 | 0.1300 | −0.0433 | 0.0122 | 0.0005 | −0.0005 | −0.4703 | 0.2700 |
| | SD | 18.790 | 0.930 | 0.340 | 0.850 | 0.530 | 0.0000 | 0.0000 | 0.0023 | 0.0009 | 0.0002 | 0.0001 | 0.1548 | 0.0900 |
| Feb. | Max | 2366.490 | −1.590 | 4.450 | 3.250 | 0.010 | −0.1400 | 0.1400 | −0.0400 | 0.0100 | 0.0004 | −0.0002 | 0.1900 | 0.3300 |
| | Min | 2310.470 | −2.830 | 3.460 | 0.130 | −1.160 | −0.1500 | 0.1200 | −0.0500 | 0.0100 | −0.0001 | −0.0004 | −0.3900 | −0.0400 |
| | Mean | 2339.330 | −2.190 | 3.780 | 1.150 | −0.480 | −0.1400 | 0.1300 | −0.0400 | 0.0100 | 0.0003 | −0.0003 | −0.2500 | 0.2300 |
| | SD | 11.711 | 0.300 | 0.200 | 0.870 | 0.280 | 0.0000 | 0.0030 | 0.0010 | 0.0005 | 0.0001 | 0.0001 | 0.1550 | 0.0940 |
| Mar. | Max | 2320.640 | 2.730 | 7.040 | 2.040 | 1.300 | −0.1300 | 0.1300 | −0.0400 | 0.0100 | 0.0002 | −0.0001 | 0.3600 | 0.5100 |
| | Min | 2267.750 | −1.550 | 1.960 | −2.690 | −1.420 | −0.1400 | 0.1200 | −0.0400 | 0.0100 | −0.0007 | −0.0006 | −0.4800 | 0.0100 |
| | Mean | 2305.520 | −0.070 | 3.290 | 0.450 | −0.590 | −0.1400 | 0.1300 | −0.0400 | 0.0100 | 0.0000 | −0.0002 | 0.0300 | 0.3300 |
| | SD | 16.800 | 1.070 | 1.660 | 1.380 | 0.790 | 0.0000 | 0.0000 | 0.0016 | 0.0006 | 0.0003 | 0.0001 | 0.2300 | 0.1700 |
| Oct. | Max | 2363.460 | −0.490 | 4.760 | 1.680 | 0.180 | −0.1400 | 0.1400 | −0.0400 | 0.0150 | 0.0003 | −0.0003 | −0.1860 | 0.2910 |
| | Min | 2275.080 | −2.450 | 2.940 | −1.220 | −2.040 | −0.1600 | 0.1200 | −0.0500 | 0.0110 | −0.0003 | −0.0007 | −0.4720 | 0.1190 |
| | Mean | 2338.246 | −1.650 | 3.790 | 0.220 | −0.990 | −0.1400 | 0.1340 | −0.0470 | 0.0140 | 0.0002 | −0.0004 | −0.3010 | 0.2260 |
| | SD | 26.293 | 0.530 | 0.520 | 0.750 | 0.570 | 0.0000 | 0.0060 | 0.0030 | 0.0010 | 0.0002 | 0.0001 | 0.0830 | 0.0630 |
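The statistics in Table 4 are simple descriptive measures over the 31 bundle block adjustment repetitions of each dataset. A minimal pure-Python sketch, using hypothetical stand-in values for one parameter (the real inputs would be the calibration estimates exported from each repetition):

```python
import statistics

# Hypothetical stand-in for the per-repetition focal-length estimates (pix) of
# one dataset; real values would come from the 31 exported camera calibrations.
f_estimates = [2285.3, 2301.2, 2266.8, 2310.4, 2278.9, 2295.0]

summary = {
    "Max": max(f_estimates),
    "Min": min(f_estimates),
    "Mean": statistics.mean(f_estimates),
    "SD": statistics.stdev(f_estimates),  # sample standard deviation
}
print({k: round(v, 3) for k, v in summary.items()})
```

Repeating the same four reductions per parameter column yields the Max/Min/Mean/SD layout of Table 4.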
Table 5. Correlation matrix of camera I.O. parameters as estimated from the December 2018 dataset (f—focal length; xp, yp—coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions).

| | f (pix) | xp (pix) | yp (pix) | B1 | B2 | K1 | K2 | K3 | K4 | P1 | P2 | P3 | P4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f (pix) | 1.00 | −0.15 | −0.56 | 0.10 | 0.03 | −1.00 | 1.00 | −1.00 | 0.99 | 0.27 | 0.51 | 0.37 | 0.44 |
| xp (pix) | −0.15 | 1.00 | 0.51 | −0.26 | −0.04 | 0.12 | −0.12 | 0.08 | −0.07 | −0.96 | −0.58 | 0.43 | −0.73 |
| yp (pix) | −0.56 | 0.51 | 1.00 | −0.49 | 0.29 | 0.54 | −0.55 | 0.53 | −0.53 | −0.69 | −0.91 | −0.48 | −0.43 |
| B1 | 0.10 | −0.26 | −0.49 | 1.00 | −0.59 | −0.06 | 0.12 | −0.08 | 0.10 | 0.35 | 0.20 | 0.44 | −0.17 |
| B2 | 0.03 | −0.04 | 0.29 | −0.59 | 1.00 | −0.04 | 0.02 | −0.04 | 0.03 | −0.12 | −0.02 | −0.37 | 0.12 |
| K1 | −1.00 | 0.12 | 0.54 | −0.06 | −0.04 | 1.00 | −1.00 | 1.00 | −1.00 | −0.23 | −0.51 | −0.38 | −0.42 |
| K2 | 1.00 | −0.12 | −0.55 | 0.12 | 0.02 | −1.00 | 1.00 | −1.00 | 1.00 | 0.24 | 0.50 | 0.40 | 0.41 |
| K3 | −1.00 | 0.08 | 0.53 | −0.08 | −0.04 | 1.00 | −1.00 | 1.00 | −1.00 | −0.20 | −0.49 | −0.42 | −0.38 |
| K4 | 0.99 | −0.07 | −0.53 | 0.10 | 0.03 | −1.00 | 1.00 | −1.00 | 1.00 | 0.19 | 0.48 | 0.43 | 0.37 |
| P1 | 0.27 | −0.96 | −0.69 | 0.35 | −0.12 | −0.23 | 0.24 | −0.20 | 0.19 | 1.00 | 0.68 | −0.26 | 0.78 |
| P2 | 0.51 | −0.58 | −0.91 | 0.20 | −0.02 | −0.51 | 0.50 | −0.49 | 0.48 | 0.68 | 1.00 | 0.31 | 0.49 |
| P3 | 0.37 | 0.43 | −0.48 | 0.44 | −0.37 | −0.38 | 0.40 | −0.42 | 0.43 | −0.26 | 0.31 | 1.00 | −0.46 |
| P4 | 0.44 | −0.73 | −0.43 | −0.17 | 0.12 | −0.42 | 0.41 | −0.38 | 0.37 | 0.78 | 0.49 | −0.46 | 1.00 |
Table 6. Correlation matrix of camera I.O. parameters as estimated from the January 2019 dataset (f—focal length; xp, yp—coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions).

| | f (pix) | xp (pix) | yp (pix) | B1 | B2 | K1 | K2 | K3 | K4 | P1 | P2 | P3 | P4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f (pix) | 1.00 | 0.51 | −0.30 | −0.27 | 0.59 | −1.00 | 1.00 | −0.99 | 0.98 | −0.52 | −0.17 | 0.55 | −0.52 |
| xp (pix) | 0.51 | 1.00 | −0.46 | −0.02 | 0.72 | −0.52 | 0.57 | −0.63 | 0.65 | −0.98 | −0.36 | 0.95 | −0.92 |
| yp (pix) | −0.30 | −0.46 | 1.00 | 0.43 | −0.25 | 0.32 | −0.30 | 0.32 | −0.32 | 0.35 | −0.37 | −0.55 | 0.44 |
| B1 | −0.27 | −0.02 | 0.43 | 1.00 | −0.39 | 0.27 | −0.25 | 0.29 | −0.29 | 0.08 | 0.04 | −0.24 | 0.18 |
| B2 | 0.59 | 0.72 | −0.25 | −0.39 | 1.00 | −0.60 | 0.64 | −0.70 | 0.72 | −0.81 | −0.21 | 0.83 | −0.87 |
| K1 | −1.00 | −0.52 | 0.32 | 0.27 | −0.60 | 1.00 | −1.00 | 0.99 | −0.98 | 0.54 | 0.15 | −0.57 | 0.54 |
| K2 | 1.00 | 0.57 | −0.30 | −0.25 | 0.64 | −1.00 | 1.00 | −0.99 | 0.99 | −0.58 | −0.19 | 0.60 | −0.58 |
| K3 | −0.99 | −0.63 | 0.32 | 0.29 | −0.70 | 0.99 | −0.99 | 1.00 | −1.00 | 0.65 | 0.23 | −0.67 | 0.65 |
| K4 | 0.98 | 0.65 | −0.32 | −0.29 | 0.72 | −0.98 | 0.99 | −1.00 | 1.00 | −0.67 | −0.25 | 0.69 | −0.67 |
| P1 | −0.52 | −0.98 | 0.35 | 0.08 | −0.81 | 0.54 | −0.58 | 0.65 | −0.67 | 1.00 | 0.43 | −0.95 | 0.95 |
| P2 | −0.17 | −0.36 | −0.37 | 0.04 | −0.21 | 0.15 | −0.19 | 0.23 | −0.25 | 0.43 | 1.00 | −0.24 | 0.22 |
| P3 | 0.55 | 0.95 | −0.55 | −0.24 | 0.83 | −0.57 | 0.60 | −0.67 | 0.69 | −0.95 | −0.24 | 1.00 | −0.98 |
| P4 | −0.52 | −0.92 | 0.44 | 0.18 | −0.87 | 0.54 | −0.58 | 0.65 | −0.67 | 0.95 | 0.22 | −0.98 | 1.00 |
Table 7. Correlation matrix of camera I.O. parameters as estimated from the February 2019 dataset (f—focal length; xp, yp—coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions).

| | f (pix) | xp (pix) | yp (pix) | B1 | B2 | K1 | K2 | K3 | K4 | P1 | P2 | P3 | P4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f (pix) | 1.00 | 0.008 | 0.18 | 0.16 | −0.16 | −0.88 | 0.99 | −0.93 | 0.97 | 0.14 | −0.15 | −0.12 | 0.19 |
| xp (pix) | 0.01 | 1.00 | −0.51 | 0.18 | −0.34 | −0.35 | 0.12 | −0.27 | 0.20 | −0.77 | 0.26 | 0.74 | −0.56 |
| yp (pix) | 0.18 | −0.51 | 1.00 | 0.48 | 0.08 | −0.09 | 0.18 | −0.14 | 0.18 | 0.18 | −0.40 | −0.33 | 0.19 |
| B1 | 0.16 | 0.18 | 0.48 | 1.00 | −0.32 | −0.14 | 0.20 | −0.16 | 0.19 | −0.04 | 0.16 | −0.17 | 0.22 |
| B2 | −0.16 | −0.34 | 0.08 | −0.32 | 1.00 | 0.15 | −0.18 | 0.18 | −0.20 | 0.05 | −0.01 | −0.01 | −0.03 |
| K1 | −0.88 | −0.35 | −0.09 | −0.14 | 0.15 | 1.00 | −0.93 | 0.99 | −0.95 | 0.27 | 0.09 | −0.27 | 0.17 |
| K2 | 0.99 | 0.12 | 0.18 | 0.20 | −0.18 | −0.93 | 1.00 | −0.97 | 0.99 | 0.01 | −0.14 | −0.01 | 0.08 |
| K3 | −0.93 | −0.27 | −0.14 | −0.16 | 0.18 | 0.99 | −0.97 | 1.00 | −0.99 | 0.19 | 0.14 | −0.18 | 0.10 |
| K4 | 0.97 | 0.20 | 0.18 | 0.19 | −0.20 | −0.95 | 0.99 | −0.99 | 1.00 | −0.09 | −0.16 | 0.08 | −0.01 |
| P1 | 0.14 | −0.77 | 0.18 | −0.04 | 0.05 | 0.27 | 0.01 | 0.19 | −0.09 | 1.00 | 0.11 | −0.94 | 0.90 |
| P2 | −0.15 | 0.26 | −0.40 | 0.16 | −0.01 | 0.09 | −0.14 | 0.14 | −0.16 | 0.11 | 1.00 | −0.27 | 0.43 |
| P3 | −0.12 | 0.74 | −0.33 | −0.17 | −0.01 | −0.27 | −0.01 | −0.18 | 0.08 | −0.94 | −0.27 | 1.00 | −0.97 |
| P4 | 0.19 | −0.56 | 0.19 | 0.22 | −0.03 | 0.17 | 0.08 | 0.10 | −0.01 | 0.90 | 0.43 | −0.97 | 1.00 |
Table 8. Correlation matrix of camera I.O. parameters as estimated from the March 2019 dataset (f—focal length; xp, yp—coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions).

| | f (pix) | xp (pix) | yp (pix) | B1 | B2 | K1 | K2 | K3 | K4 | P1 | P2 | P3 | P4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f (pix) | 1.00 | 0.04 | −0.70 | 0.13 | −0.40 | −0.95 | 1.00 | −0.94 | 0.96 | 0.53 | 0.77 | 0.66 | 0.78 |
| xp (pix) | 0.04 | 1.00 | 0.49 | −0.94 | 0.76 | −0.31 | 0.10 | −0.32 | 0.28 | −0.73 | −0.30 | −0.38 | 0.06 |
| yp (pix) | −0.70 | 0.49 | 1.00 | −0.66 | 0.85 | 0.46 | −0.64 | 0.43 | −0.47 | −0.94 | −0.93 | −0.93 | −0.81 |
| B1 | 0.13 | −0.94 | −0.66 | 1.00 | −0.90 | 0.16 | 0.06 | 0.18 | −0.13 | 0.85 | 0.50 | 0.58 | 0.12 |
| B2 | −0.40 | 0.76 | 0.85 | −0.90 | 1.00 | 0.14 | −0.34 | 0.10 | −0.14 | −0.91 | −0.70 | −0.78 | −0.45 |
| K1 | −0.95 | −0.31 | 0.46 | 0.16 | 0.14 | 1.00 | −0.97 | 1.00 | −0.99 | −0.25 | −0.60 | −0.45 | −0.69 |
| K2 | 1.00 | 0.10 | −0.64 | 0.06 | −0.34 | −0.97 | 1.00 | −0.97 | 0.98 | 0.46 | 0.73 | 0.61 | 0.75 |
| K3 | −0.94 | −0.32 | 0.43 | 0.18 | 0.10 | 1.00 | −0.97 | 1.00 | −1.00 | −0.22 | −0.57 | −0.42 | −0.65 |
| K4 | 0.96 | 0.28 | −0.47 | −0.13 | −0.14 | −0.99 | 0.98 | −1.00 | 1.00 | 0.27 | 0.60 | 0.46 | 0.67 |
| P1 | 0.53 | −0.73 | −0.94 | 0.85 | −0.91 | −0.25 | 0.46 | −0.22 | 0.27 | 1.00 | 0.85 | 0.84 | 0.61 |
| P2 | 0.77 | −0.30 | −0.93 | 0.50 | −0.70 | −0.60 | 0.73 | −0.57 | 0.60 | 0.85 | 1.00 | 0.85 | 0.81 |
| P3 | 0.66 | −0.38 | −0.93 | 0.58 | −0.78 | −0.45 | 0.61 | −0.42 | 0.46 | 0.84 | 0.85 | 1.00 | 0.78 |
| P4 | 0.78 | 0.06 | −0.81 | 0.12 | −0.45 | −0.69 | 0.75 | −0.65 | 0.67 | 0.61 | 0.81 | 0.78 | 1.00 |
Table 9. Correlation matrix of camera I.O. parameters as estimated from the October 2019 dataset (f—focal length; xp, yp—coordinates of the principal point offset; B1, B2—skew parameters; K1, K2, K3, K4—radial distortions; P1, P2, P3, P4—components of the decentering distortions).

| | f (pix) | xp (pix) | yp (pix) | B1 | B2 | K1 | K2 | K3 | K4 | P1 | P2 | P3 | P4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f (pix) | 1.00 | −0.63 | −0.50 | −0.26 | −0.26 | −1.00 | 1.00 | −1.00 | 1.00 | 0.70 | 0.51 | −0.54 | 0.83 |
| xp (pix) | −0.63 | 1.00 | 0.28 | −0.28 | 0.30 | 0.58 | −0.63 | 0.60 | −0.62 | −0.91 | −0.66 | 0.73 | −0.65 |
| yp (pix) | −0.50 | 0.28 | 1.00 | 0.01 | 0.13 | 0.46 | −0.49 | 0.47 | −0.48 | −0.63 | −0.77 | −0.28 | −0.43 |
| B1 | −0.26 | −0.28 | 0.01 | 1.00 | −0.64 | 0.31 | −0.26 | 0.28 | −0.27 | 0.27 | 0.42 | 0.12 | −0.41 |
| B2 | −0.26 | 0.30 | 0.13 | −0.64 | 1.00 | 0.23 | −0.27 | 0.25 | −0.26 | −0.35 | −0.38 | 0.10 | 0.04 |
| K1 | −1.00 | 0.58 | 0.46 | 0.31 | 0.23 | 1.00 | −1.00 | 1.00 | −1.00 | −0.65 | −0.46 | 0.53 | −0.81 |
| K2 | 1.00 | −0.63 | −0.49 | −0.26 | −0.27 | −1.00 | 1.00 | −1.00 | 1.00 | 0.70 | 0.50 | −0.54 | 0.82 |
| K3 | −1.00 | 0.60 | 0.47 | 0.28 | 0.25 | 1.00 | −1.00 | 1.00 | −1.00 | −0.67 | −0.48 | 0.54 | −0.82 |
| K4 | 1.00 | −0.62 | −0.48 | −0.27 | −0.26 | −1.00 | 1.00 | −1.00 | 1.00 | 0.69 | 0.49 | −0.55 | 0.82 |
| P1 | 0.70 | −0.91 | −0.63 | 0.27 | −0.35 | −0.65 | 0.70 | −0.67 | 0.69 | 1.00 | 0.83 | −0.47 | 0.71 |
| P2 | 0.51 | −0.66 | −0.77 | 0.42 | −0.38 | −0.46 | 0.50 | −0.48 | 0.49 | 0.83 | 1.00 | −0.04 | 0.32 |
| P3 | −0.54 | 0.73 | −0.28 | 0.12 | 0.10 | 0.53 | −0.54 | 0.54 | −0.55 | −0.47 | −0.04 | 1.00 | −0.65 |
| P4 | 0.83 | −0.65 | −0.43 | −0.41 | 0.04 | −0.81 | 0.82 | −0.82 | 0.82 | 0.71 | 0.32 | −0.65 | 1.00 |
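Tables 5–9 report Pearson correlations between the per-repetition estimates of each pair of I.O. parameters. A self-contained sketch of the computation, using hypothetical series rather than the paper's data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-repetition estimates of two I.O. parameters:
f_vals = [2285.3, 2301.2, 2266.8, 2310.4, 2278.9]   # focal length (pix)
k1_vals = [-0.141, -0.143, -0.139, -0.144, -0.140]  # K1
print(round(pearson(f_vals, k1_vals), 2))  # -> -1.0
```

Applying this to every parameter pair fills one matrix per dataset; the near-perfect f–K1 anticorrelation in the toy example mirrors the −1.00 entries seen throughout Tables 5–9.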
Table 10. Statistics of the error components RMSE_j computed with reference to GCPs and CPs for all tested datasets (Dec.—December; Jan.—January; Feb.—February; Mar.—March; Oct.—October; E—East coordinate; N—North coordinate; H—height coordinate; T—3D error; I—positioning error in the image space; Max—maximum; Min—minimum; SD—standard deviation).

| Survey | Statistic | RMSE_E (m) GCPs | RMSE_N (m) GCPs | RMSE_H (m) GCPs | RMSE_T (m) GCPs | RMSE_I (pix) GCPs | RMSE_E (m) CPs | RMSE_N (m) CPs | RMSE_H (m) CPs | RMSE_T (m) CPs | RMSE_I (pix) CPs |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Dec. | Max | 0.20 | 0.09 | 0.96 | 0.98 | 0.27 | 0.21 | 0.09 | 0.93 | 0.96 | 0.27 |
| | Min | 0.00 | 0.00 | 0.00 | 0.00 | 0.18 | 0.01 | 0.02 | 0.00 | 0.03 | 0.19 |
| | Mean | 0.04 | 0.02 | 0.09 | 0.11 | 0.25 | 0.07 | 0.04 | 0.14 | 0.17 | 0.25 |
| | SD | 0.04 | 0.02 | 0.23 | 0.23 | 0.03 | 0.05 | 0.02 | 0.27 | 0.27 | 0.03 |
| Jan. | Max | 0.11 | 0.090 | 0.45 | 0.47 | 0.27 | 0.130 | 0.109 | 0.421 | 0.453 | 0.311 |
| | Min | 0.002 | 0.0012 | 0.00018 | 0.0023 | 0.20 | 0.011 | 0.020 | 0.010 | 0.026 | 0.220 |
| | Mean | 0.035 | 0.025 | 0.044 | 0.068 | 0.24 | 0.052 | 0.043 | 0.070 | 0.104 | 0.237 |
| | SD | 0.019 | 0.016 | 0.10 | 0.10 | 0.016 | 0.035 | 0.026 | 0.119 | 0.121 | 0.019 |
| Feb. | Max | 0.20 | 0.22 | 0.31 | 0.42 | 0.31 | 0.21 | 0.21 | 0.37 | 0.47 | 0.22 |
| | Min | 0.0017 | 0.0011 | 0.0006 | 0.0022 | 0.21 | 0.02 | 0.03 | 0.02 | 0.04 | 0.17 |
| | Mean | 0.033 | 0.045 | 0.044 | 0.072 | 0.23 | 0.06 | 0.07 | 0.10 | 0.14 | 0.21 |
| | SD | 0.033 | 0.035 | 0.051 | 0.069 | 0.023 | 0.054 | 0.051 | 0.103 | 0.124 | 0.010 |
| Mar. | Max | 0.461 | 0.259 | 0.863 | 1.012 | 0.273 | 0.451 | 0.221 | 0.778 | 0.926 | 0.360 |
| | Min | 0.0005 | 0.001 | 0.000 | 0.001 | 0.110 | 0.028 | 0.027 | 0.018 | 0.044 | 0.243 |
| | Mean | 0.057 | 0.030 | 0.048 | 0.085 | 0.236 | 0.094 | 0.056 | 0.117 | 0.166 | 0.266 |
| | SD | 0.083 | 0.044 | 0.154 | 0.178 | 0.032 | 0.122 | 0.057 | 0.225 | 0.259 | 0.023 |
| Oct. | Max | 0.306 | 0.129 | 0.467 | 0.573 | 0.234 | 0.304 | 0.114 | 0.426 | 0.536 | 0.278 |
| | Min | 0.001 | 0.001 | 0.001 | 0.002 | 0.182 | 0.005 | 0.002 | 0.020 | 0.023 | 0.221 |
| | Mean | 0.055 | 0.030 | 0.055 | 0.085 | 0.222 | 0.089 | 0.034 | 0.101 | 0.140 | 0.235 |
| | SD | 0.051 | 0.024 | 0.080 | 0.097 | 0.015 | 0.081 | 0.030 | 0.113 | 0.141 | 0.011 |
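The error components in Table 10 follow the usual definitions: a per-axis RMSE over the marker residuals, with the 3D (total) error combining the three axes in quadrature. A minimal sketch with hypothetical residuals (E, N, H, in metres):

```python
import math

# Hypothetical per-marker residuals (m) between estimated and surveyed coordinates.
residuals = [
    (0.03, -0.02, 0.08),   # (dE, dN, dH) for marker 1
    (-0.05, 0.01, -0.10),  # marker 2
    (0.02, 0.03, 0.06),    # marker 3
]

def rmse(values):
    """Root mean squared error of a list of residuals."""
    return math.sqrt(sum(v * v for v in values) / len(values))

rmse_e = rmse([r[0] for r in residuals])
rmse_n = rmse([r[1] for r in residuals])
rmse_h = rmse([r[2] for r in residuals])
rmse_t = math.sqrt(rmse_e**2 + rmse_n**2 + rmse_h**2)  # 3D (total) error
print(rmse_e, rmse_n, rmse_h, rmse_t)
```

Computed over the GCP residuals and, separately, over the CP residuals (plus the reprojection residuals in pixels for RMSE_I), this yields the ten columns of Table 10.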
Table 11. Correlation matrix of the error components computed from the RMSE on GCPs and on CPs for each acquired dataset (Dec.—December; Jan.—January; Feb.—February; Mar.—March; Oct.—October; E—East coordinate; N—North coordinate; H—height coordinate; T—3D error; I—positioning error).

| Survey | Error Component | RMSE_E (GCPs) | RMSE_N (GCPs) | RMSE_H (GCPs) | RMSE_T (GCPs) | RMSE_I (GCPs) | RMSE_E (CPs) | RMSE_N (CPs) | RMSE_H (CPs) | RMSE_T (CPs) | RMSE_I (CPs) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Dec. | RMSE_E | 1.000 | 0.813 | 0.894 | 0.906 | −0.033 | 1.000 | 0.929 | 0.918 | 0.932 | 0.530 |
| | RMSE_N | 0.813 | 1.000 | 0.884 | 0.894 | 0.072 | 0.929 | 1.000 | 0.952 | 0.957 | 0.292 |
| | RMSE_H | 0.894 | 0.884 | 1.000 | 0.999 | −0.283 | 0.918 | 0.952 | 1.000 | 0.999 | 0.227 |
| | RMSE_T | 0.906 | 0.894 | 0.999 | 1.000 | −0.258 | 0.932 | 0.957 | 0.999 | 1.000 | 0.254 |
| | RMSE_I | −0.033 | 0.072 | −0.283 | −0.258 | 1.000 | 0.530 | 0.292 | 0.227 | 0.254 | 1.000 |
| Jan. | RMSE_E | 1.00 | 0.88 | 0.82 | 0.85 | −0.06 | 1.00 | 0.95 | 0.80 | 0.87 | −0.13 |
| | RMSE_N | 0.88 | 1.00 | 0.96 | 0.97 | −0.27 | 0.95 | 1.00 | 0.89 | 0.95 | −0.02 |
| | RMSE_H | 0.82 | 0.96 | 1.00 | 1.00 | −0.33 | 0.80 | 0.89 | 1.00 | 0.99 | 0.03 |
| | RMSE_T | 0.85 | 0.97 | 1.00 | 1.00 | −0.33 | 0.87 | 0.95 | 0.99 | 1.00 | −0.002 |
| | RMSE_I | −0.06 | −0.27 | −0.33 | −0.33 | 1.00 | −0.13 | −0.02 | 0.03 | −0.002 | 1.00 |
| Feb. | RMSE_E | 1.00 | 0.96 | 0.96 | 0.98 | −0.08 | 1.00 | 0.97 | 0.91 | 0.95 | 0.31 |
| | RMSE_N | 0.96 | 1.00 | 0.99 | 1.00 | 0.07 | 0.97 | 1.00 | 0.94 | 0.98 | 0.27 |
| | RMSE_H | 0.96 | 0.99 | 1.00 | 1.00 | 0.12 | 0.91 | 0.94 | 1.00 | 0.99 | 0.27 |
| | RMSE_T | 0.98 | 1.00 | 1.00 | 1.00 | 0.07 | 0.95 | 0.98 | 0.99 | 1.00 | 0.28 |
| | RMSE_I | −0.08 | 0.07 | 0.12 | 0.07 | 1.00 | 0.31 | 0.27 | 0.27 | 0.28 | 1.00 |
| Mar. | RMSE_E | 1.00 | 0.94 | 0.94 | 0.98 | −0.64 | 1.00 | 0.97 | 0.99 | 0.99 | −0.34 |
| | RMSE_N | 0.94 | 1.00 | 1.00 | 0.99 | −0.44 | 0.97 | 1.00 | 1.00 | 0.99 | −0.26 |
| | RMSE_H | 0.94 | 1.00 | 1.00 | 0.99 | −0.45 | 0.99 | 1.00 | 1.00 | 1.00 | −0.27 |
| | RMSE_T | 0.98 | 0.99 | 0.99 | 1.00 | −0.53 | 0.99 | 0.99 | 1.00 | 1.00 | −0.29 |
| | RMSE_I | −0.64 | −0.44 | −0.45 | −0.53 | 1.00 | −0.34 | −0.26 | −0.27 | −0.29 | 1.00 |
| Oct. | RMSE_E | 1.00 | 0.89 | 0.99 | 0.99 | −0.07 | 1.00 | 0.98 | 0.96 | 0.99 | −0.16 |
| | RMSE_N | 0.89 | 1.00 | 0.87 | 0.90 | −0.08 | 0.98 | 1.00 | 0.97 | 0.99 | −0.19 |
| | RMSE_H | 0.99 | 0.87 | 1.00 | 1.00 | −0.20 | 0.96 | 0.97 | 1.00 | 0.99 | −0.14 |
| | RMSE_T | 0.99 | 0.90 | 1.00 | 1.00 | −0.16 | 0.99 | 0.99 | 0.99 | 1.00 | −0.15 |
| | RMSE_I | −0.07 | −0.08 | −0.20 | −0.16 | 1.00 | −0.16 | −0.19 | −0.14 | −0.15 | 1.00 |
Table 12. Synthetic index (SI) as computed by Equation (4) for the December, January, February and October datasets.

| Variable | December | January | February | October |
|---|---|---|---|---|
| SI | 1.42 | 1.85 | 1.08 | 1.79 |
Table 13. Correlation coefficient between RMSE_j and SI computed for each dataset (GCPs = ground control points; CPs = check points; E = East coordinate; N = North coordinate; H = height coordinate; T = 3D error; I = positioning error in the image space).

| | RMSE_E (GCPs) | RMSE_N (GCPs) | RMSE_H (GCPs) | RMSE_T (GCPs) | RMSE_I (GCPs) | RMSE_E (CPs) | RMSE_N (CPs) | RMSE_H (CPs) | RMSE_T (CPs) | RMSE_I (CPs) |
|---|---|---|---|---|---|---|---|---|---|---|
| Pearson’s R | 0.5 | −0.8 | −0.10 | −0.12 | −0.11 | 0.28 | −0.80 | −0.50 | −0.49 | 0.67 |
Table 14. Coefficients of the calibrated predictive functions (E—East coordinate; N—North coordinate; H—height coordinate; T—total (3D) error; I—positioning error in the image space).

| | RMSE_E (m), GCPs | RMSE_N (m), GCPs | RMSE_N (m), CPs | RMSE_H (m), CPs | RMSE_T (pix), CPs |
|---|---|---|---|---|---|
| a | −0.0070 | 0.0040 | 0.0094 | −0.0166 | −0.0166 |
| b | 0.0359 | −0.0256 | −0.0548 | 0.0696 | 0.0696 |
| c | 0.0035 | 0.0651 | 0.1122 | 0.0535 | 0.0881 |
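Table 14 lists three coefficients (a, b, c) per calibrated function. Assuming a second-order polynomial in the synthetic index SI of Table 12, i.e. RMSE = a·SI² + b·SI + c (an illustrative reading of Table 14; the exact functional form is defined in the body of the paper), a prediction can be evaluated as:

```python
# Coefficients from Table 14 for RMSE_E (m) on GCPs; the quadratic form below
# is an assumption made for illustration.
a, b, c = -0.0070, 0.0359, 0.0035

def predicted_rmse(si: float) -> float:
    """Evaluate the assumed quadratic predictive function at synthetic index SI."""
    return a * si**2 + b * si + c

# December dataset: SI = 1.42 (Table 12)
print(round(predicted_rmse(1.42), 3))  # -> 0.04
```

Under this assumed form, the December prediction (~0.040 m) is consistent with the mean RMSE_E on GCPs reported for December in Table 10 (0.04 m).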
Table 15. Differences between RMSE_j values as estimated by the calibrated predictive models and the corresponding values from the bundle block adjustment (BBA) for the March dataset.

| Errors | March Dataset Difference (m) |
|---|---|
| RMSE_E (GCPs) | 0.0024 |
| RMSE_N (GCPs) | 0.0047 |
| RMSE_N (CPs) | 0.0110 |
| RMSE_H (CPs) | 0.0039 |
| RMSE_I (CPs) | 0.0014 |

Capolupo, A.; Saponaro, M.; Borgogno Mondino, E.; Tarantino, E. Combining Interior Orientation Variables to Predict the Accuracy of Rpas–Sfm 3D Models. Remote Sens. 2020, 12, 2674. https://doi.org/10.3390/rs12172674