Article

Accuracy Improvements in the Orientation of ALOS PRISM Images Using IOP Estimation and UCL Kepler Platform Model

by
Tiago L. Rodrigues
1,*,
Edson Mitishita
1,
Luiz Ferreira
1 and
Antonio M. G. Tommaselli
2
1
Department of Geomatics, Federal University of Paraná (UFPR), Curitiba 81531-990, Brazil
2
Department of Cartography, São Paulo State University (UNESP), Presidente Prudente 19060-900, Brazil
*
Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(7), 634; https://doi.org/10.3390/rs9070634
Submission received: 10 March 2017 / Revised: 9 June 2017 / Accepted: 15 June 2017 / Published: 1 July 2017

Abstract
This paper presents a study that was conducted to determine the orientation of ALOS (Advanced Land Observing Satellite) PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) triplet images, considering the estimation of interior orientation parameters (IOP) of the cameras and using the collinearity equations with the UCL (University College of London) Kepler platform model, which was adapted to use coordinates referenced to the Terrestrial Reference System ITRF97. The results of the experiments showed that the accuracies of 3D coordinates calculated using 3D photogrammetric intersection increased when the IOP were also estimated. The vertical accuracy was significantly better than the horizontal accuracy. The usability of the estimated IOP was tested to perform the bundle block adjustments of another neighbouring PRISM image triplet. The results in terms of 3D photogrammetric intersection were satisfactory and were close to those obtained in the IOP estimation experiment.

1. Introduction

Images from sensors installed on orbital platforms have become important sources of spatial data. In this context, the images of the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor, onboard the Japanese ALOS (Advanced Land Observing Satellite) satellite, contribute to mapping, earth observation coverage, disaster monitoring and the surveying of natural resources [1]. An important feature of this sensor is that it is composed of three pushbroom cameras (backward, nadir and forward) designed to generate stereoscopic models.
An important requirement for extracting geo-spatial information from ALOS images is the knowledge of a set of parameters to connect the image space with an object space. Using a rigorous model, a set of parameters is composed of the exterior orientation parameters (EOP) and the interior orientation parameters (IOP). The EOP can be directly estimated by on-board GNSS receivers, star trackers and gyros. In contrast, the IOP are estimated from laboratory calibration processes before a satellite is launched.
However, after the launch and while the satellite is orbiting earth, there is a great possibility of changes in the values of the IOP. According to [2,3], three problems can contribute to changes in the original IOP values: accelerations; drastic environmental changes imposed during the launch of the satellite; and the thermal influence of the sun when the satellite is in orbit. Consequently, on-orbit geometric calibration has become an important procedure for extracting reliable geoinformation from orbital images. Examples of the on-orbit geometric calibration of the PRISM sensor can be seen in [4,5,6,7].
Another issue in the use of rigorous models is the platform model, which is responsible for modelling the changes in EOP during the generation of two-dimensional images. Over the years, several studies have proposed different platform models associated with the principle of collinearity [8,9,10]. Based on the platform model using second order polynomials, Michalis [11] associated the linear terms to the satellite velocity components and quadratic terms to the acceleration components. The platform model developed was called the UCL (University College of London) Kepler model, in which the accelerations are estimated from the two-body problem. An example of the UCL Kepler model used in the orientation of PRISM images with the subsequent generation of a Digital Surface Model (DSM) can be seen in [12].
In this paper, we present a study that was conducted to perform the orientation of an ALOS PRISM image triplet, considering the IOP estimation and using the collinearity model combined with the UCL Kepler platform model. Additionally, a methodology for the use of coordinates referenced to a Terrestrial Reference System (TRS) was developed, since the UCL Kepler platform model was originally formulated for coordinates referenced to the Geocentric Celestial Reference System (GCRS). The technique proposed in this work uses a different group of IOP and a different platform model from those used in [4,5,6,7]. The use of the UCL Kepler model, instead of models that use polynomials in the platform model, aims to reduce the number of unknowns and enables the use of position and velocity data extracted from the positioning sensors. In addition, in this article, significance and correlation tests between the IOP were performed, and an IOP usability test was performed through a bundle adjustment with another neighbouring triplet. It should be noted that even though this satellite is out of operation, the acquired images remain in an archive for continued use. In addition, the methodologies investigated in this article can be applied to many other linear array pushbroom sensors with similar characteristics.
The following sections contain information about the image space reference systems used; a brief description of the PRISM sensor; the mathematical modelling used for the estimation of the IOP; the platform model used; and the test field and experiments. Finally, in the last sections, the results obtained from the performed experiments are presented and discussed.

2. Image Space Reference Systems

The first reference system related to images is the Image Reference System (IRS). This system is associated with the two-dimensional array of image pixels in a column (C) and line (L) coordinate system. The system’s origin is at the centre of the top left image pixel, as shown in Figure 1a. Another system related to the linear array CCD chip is the Scanline Reference System (SRS), which is a two-dimensional system of coordinates. The origin of the system is located at the geometric centre of the linear array CCD chip. The xs axis is parallel to the L axis and is closest to the flight direction. The ys axis is perpendicular to the xs axis, as shown in Figure 1b.
The mathematical transformation between IRS coordinates and SRS coordinates is presented in Equations (1) and (2).
$x_s = [L - \mathrm{int}(L)] \cdot PS$, (1)
$y_s = PS \cdot C - \frac{(n_C - 1)}{2} PS = \left( C - \frac{(n_C - 1)}{2} \right) PS$, (2)
where PS is the pixel size of the CCD chip in mm and nC is the number of columns of the image.
The Camera Reference System (CRS) is a three-dimensional system. Its origin is at the perspective centre (PC) of each camera lens. The xc and yc axes are parallel to the xs and ys axes. The zc axis points upward and completes a right-hand coordinate system [11]. The projection of the PC in the focal plane defines the principal point (PP), and the focal length f is the distance between the PC and PP. An illustration of the CRS and SRS in a focal plane with three CCD chips is shown in Figure 2.
The transformation between SRS coordinates and CRS coordinates is presented in Equation (3).
$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \begin{bmatrix} x_s \\ y_s \\ 0 \end{bmatrix} + \begin{bmatrix} dx \\ dy \\ -f \end{bmatrix}$, (3)
where dx and dy are translations from the centre of a CCD chip to the PP in the focal plane.
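To make the chain of transformations concrete, the following minimal Python sketch evaluates Equations (1)-(3) for one image measurement. The numerical values of PS, nC, dx and dy are illustrative assumptions, not calibration data reproduced from the paper; only the nadir focal length is taken from Table 1.

```python
import numpy as np

# Sketch of Equations (1)-(3): image (L, C) -> scanline (xs, ys) -> camera (xc, yc, zc).
PS = 0.007            # assumed pixel size in mm (placeholder)
nC = 4992             # columns of one nadir CCD chip (Section 3)
dx, dy = -35.0, 0.04  # assumed chip-centre offsets from the PP in mm (placeholders)
f = 1999.8630195      # nadir focal length in mm (Table 1)

def irs_to_srs(L, C):
    """Equations (1) and (2): image coordinates to Scanline Reference System (mm)."""
    xs = (L - np.floor(L)) * PS
    ys = (C - (nC - 1) / 2.0) * PS
    return xs, ys

def srs_to_crs(xs, ys):
    """Equation (3): scanline coordinates to Camera Reference System (mm)."""
    return np.array([xs + dx, ys + dy, -f])

xs, ys = irs_to_srs(L=1234.4, C=2500.0)
print(srs_to_crs(xs, ys))
```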

3. PRISM Sensor

The PRISM sensor provides images with a ground sample distance (GSD) of 2.5 m, and a radiometric depth of 8 bits in a spectral range from 0.52 to 0.77 μm. The sensor is composed of three independent cameras in the forward, nadir and backward along-track directions, as shown in Figure 3a. The simultaneous imaging by the three cameras is called the triplet mode and covers a range of 35 × 35 km on the ground, as shown in Figure 3b. The forward and backward viewings were oriented to ±23.8 degrees with respect to nadir viewing to obtain a base/height ratio equal to one [1].
In the focal plane of the backward and forward cameras, eight CCD chips, each with 4928 columns, were placed. In the focal plane of the nadir camera, six CCD chips, each with 4992 columns, were placed. The CCD chips overlap by 32 columns. In Figure 4, the arrangement of the CCD chips in the focal planes of the forward, nadir and backward cameras are presented.
For the arrangement of the final image distributed to users, with 14,496 columns and 16,000 lines, a set of four CCD chips was selected; the remaining pixels at the left and right sides were not used. In Figure 5, an example of the nadir image is shown. An on-orbit geometric calibration, performed by JAXA in June 2007, computed the following parameters: the translation values of each CCD chip centre with respect to the PP, the focal lengths, the sizes of the pixels in the CCD chips, and the PP coordinates x0 and y0. These parameters were provided to some researchers, as can be seen in [6,7]. This dataset is currently embedded in the BARISTA software, developed by project 2.1 of the Cooperative Research Centre for Spatial Information [7]. The calibrated values of the focal lengths of the backward, nadir and forward cameras are shown in Table 1.
An example of the input step of the PRISM sub-image files and extraction of the mentioned data set are shown in Figure 6 and Figure 7, respectively.
ALOS PRISM images are made available to users at four different processing levels. The processing levels and characteristics of each level are shown in Table 2 [1]. For the application of rigorous models, processing levels 1A or 1B1 must be used.

4. Mathematical Modelling

The accuracy of orientation using the rigorous model is directly related to the accuracy of the interior orientation. The IOP can be estimated by calibration in the laboratory before the satellite launches. However, the physical conditions in this case are not the same as those found when the satellites are in orbit. According to [2], during the launch of a satellite, the environmental conditions change rapidly and drastically, causing changes to the internal sensor geometry. According to the authors, the environmental conditions after orbit stabilization are also harsh, and may cause internal geometry changes; however, this is not as crucial as the changes that occur during launch. According to [3], the large acceleration during a launch may change the exact position of the CCD-lines in the camera and the relation between the CCD-lines. Consequently, at least their geometric linearity must be verified after launch. In addition, according to the same author, the systematic lens distortion can be calibrated before launch but may be influenced by the launch. Furthermore, the thermal influence of the sun can cause changes in the internal sensor geometry. Consequently, to exploit the full geometric potential of a sensor, the IOP should be re-estimated after a satellite launches, preferably periodically. For the PRISM ALOS cameras, JAXA conducted a re-estimation of the parameters multiple times after launch, such as the June 2007 calibration mentioned above.
Considering the IOP of an orbital CCD linear array sensor in addition to the focal length (f) and the sizes of the pixels in the CCD chip (PS), Poli [13] proposed two parameter sets. The first set of parameters is the IOP related to the optical system. These parameters are the PP coordinates x0 and y0, the change in the focal length Δf, the coefficients K1 and K2 of the symmetric radial lens distortion, and the scale variations sx and sy in the xs and ys directions, respectively. When the CCD array is a linear array, the scale variation effect is only considered in the ys direction. Examples of the effects of IOP related to the optical system are shown in Figure 8.
The corrective terms dxf and dyf for the effect of the change in the focal length in the xs and ys directions, and the symmetric radial distortion terms dxr and dyr, are given by:
$dx_f = \frac{(x_c - x_0)}{f} \Delta f$,
$dy_f = \frac{(y_c - y_0)}{f} \Delta f$,
$dx_r = K_1 r^2 (x_c - x_0) + K_2 r^4 (x_c - x_0)$,
$dy_r = K_1 r^2 (y_c - y_0) + K_2 r^4 (y_c - y_0)$,
with:
$r = \sqrt{(x_c - x_0)^2 + (y_c - y_0)^2}$.
The correction to the scale variation in the ys direction is:
$dy_s = s_y \, y_s$.
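A compact sketch of these optical-system corrections is given below; the parameter values passed in the example call are arbitrary placeholders used only to exercise the functions, not estimates from the paper.

```python
def optical_corrections(xc, yc, x0=0.0, y0=0.0, f=2000.0,
                        delta_f=0.0, K1=0.0, K2=0.0, sy=0.0, ys=0.0):
    """Corrective terms for focal-length change, radial distortion and ys scale."""
    r2 = (xc - x0) ** 2 + (yc - y0) ** 2          # r squared
    dxf = (xc - x0) / f * delta_f                 # focal-length change, xs direction
    dyf = (yc - y0) / f * delta_f                 # focal-length change, ys direction
    dxr = (K1 * r2 + K2 * r2 ** 2) * (xc - x0)    # symmetric radial distortion, xs
    dyr = (K1 * r2 + K2 * r2 ** 2) * (yc - y0)    # symmetric radial distortion, ys
    dys = sy * ys                                 # scale variation, ys direction only
    return dxf, dyf, dxr, dyr, dys

# Placeholder values, for illustration only.
print(optical_corrections(xc=10.0, yc=-20.0, delta_f=2.0, K1=1e-7, sy=3e-4, ys=-20.0))
```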
The second set of IOP is related to the CCD chip. These parameters are the changes of pixel dimensions in the xs and ys directions, dpx and dpy; the two-dimensional shifts a0 and b0; the rotation θ of the CCD chip in the focal plane with respect to its nominal position; and the central angle δ of the line bending effect in the focal plane, considering that the straight CCD line is deformed into an arc. As in the case of the scale variation effect, for a linear array sensor, the parameter dpx can be disregarded. As seen in [6], the CCD chip displacements dx and dy, with respect to the PP, can also be considered IOP related to the CCD chip. Examples of the effects of the IOP related to the CCD chips are shown in Figure 9.
The two-dimensional shift parameters a0, b0 are added to the dx and dy quantities.
$\begin{bmatrix} x_c \\ y_c \end{bmatrix} = \begin{bmatrix} x_s \\ y_s \end{bmatrix} + \begin{bmatrix} dx + a_0 \\ dy + b_0 \end{bmatrix}$.
The effects of the change of pixel dimension in the ys direction (dpy), of the CCD chip rotation in the xs and ys directions (dxθ, dyθ) and of the line bending in the xs direction (dxδ) are given by:
$d_{py} = y_s \frac{\Delta p_y}{PS}$,
$dx_\theta = y_s \sin\theta$,
$dy_\theta = y_s (1 - \cos\theta)$,
$dx_\delta = \frac{y_s \, r_s}{2} \delta$,
where
$r_s^2 = x_s^2 + y_s^2$.
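The CCD-chip-related corrections can be sketched in the same way; the input values in the example call are illustrative assumptions only, and the line-bending term follows the reconstructed expression above.

```python
import numpy as np

def chip_corrections(xs, ys, PS=0.007, delta_py=0.0, theta=0.0, delta=0.0):
    """Corrective terms for pixel-size change, chip rotation and line bending."""
    d_py  = ys * delta_py / PS          # pixel-dimension change in the ys direction
    dx_th = ys * np.sin(theta)          # chip rotation effect in xs
    dy_th = ys * (1.0 - np.cos(theta))  # chip rotation effect in ys
    rs    = np.hypot(xs, ys)            # rs^2 = xs^2 + ys^2
    dx_de = 0.5 * ys * rs * delta       # line bending effect in xs
    return d_py, dx_th, dy_th, dx_de

# Placeholder values, not estimates from the paper.
print(chip_corrections(xs=0.003, ys=-30.0, theta=1e-5, delta=4e-6))
```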
In this research, all of the mentioned IOP were initially considered, except the dpy parameter, because its effect is similar to the effect caused by the scale variation in the ys direction (Figure 8 and Figure 9). The sy parameters were considered different for each CCD chip to get closer to the physical reality and avoid strong correlations with the change in the focal length, unlike that considered in [6]. The central angle δ of the bending effect was also considered unique for each CCD chip.
Since the corrective terms dyθ and dys of the rotation and scale variation in the ys direction are both functions of the coordinate ys, they were grouped into a single correction term (with coefficients a1 and b1 defined below):
$dy_\theta + dy_s = y_s (1 - \cos\theta) - y_s s_y = y_s (1 - \cos\theta - s_y)$,
$dy_\theta + dy_s = a_1 y_s$,
where
$a_1 = (1 - \cos\theta - s_y)$.
The rotation term in the xs direction was rewritten in the same way:
$dx_\theta = y_s \sin\theta = b_1 y_s$.
The IOP dx, dy, x0 and y0 were fixed at their nominal values, since the residual errors are absorbed by the parameters a0 and b0. The nominal values of the focal lengths used in the BARISTA software were also fixed, since their uncertainties were estimated through the Δf parameters. The IOP a0, b0 and a1, b1 were estimated using weighted constraints with standard errors of 1.5 pixels and 300 ppm, respectively, the latter as adopted in the BARISTA software [7]. To define the reference system and avoid singularities, as adopted in [7], one CCD chip was selected as the master chip for each of the three PRISM cameras. In this research, the second CCD chip was considered the master CCD chip. As a result, the parameters a0, b0 and a1, b1 for this CCD chip were fixed, and their systematic errors were absorbed by the EOP. The mathematical transformation of SRS coordinates to CRS coordinates, considering the set of IOP, is given by the following equations:
$x_c = x_s + dx + a_0 - x_0 + a_1 y_s + dx_r + dx_f + dx_\delta$,
$y_c = y_s + dy + b_0 - y_0 + b_1 y_s + dy_r + dy_f$.
The collinearity equations with the IOP are:
$x_s = -f \frac{\Delta X}{\Delta Z} - dx - a_0 + x_0 - a_1 y_s - dx_r - dx_f - dx_\delta$,
$y_s = -f \frac{\Delta Y}{\Delta Z} - dy - b_0 + y_0 - b_1 y_s - dy_r - dy_f$,
with:
$\Delta X = r_{11}(t)\,[X - X_S(t)] + r_{12}(t)\,[Y - Y_S(t)] + r_{13}(t)\,[Z - Z_S(t)]$,
$\Delta Y = r_{21}(t)\,[X - X_S(t)] + r_{22}(t)\,[Y - Y_S(t)] + r_{23}(t)\,[Z - Z_S(t)]$,
$\Delta Z = r_{31}(t)\,[X - X_S(t)] + r_{32}(t)\,[Y - Y_S(t)] + r_{33}(t)\,[Z - Z_S(t)]$,
where X, Y, Z and XS(t), YS(t), ZS(t) are, respectively, the object space coordinates of a point and of the sensor PC at an instant t; and r11(t), ..., r33(t) are the elements of the rotation matrix R(t), which is responsible for aligning the CRS with the TRS at a given instant t. Since the satellite attitude data were not available for the images used in this research, the rigorous model considered in this study was of the Position-Rotation type, as described in [14]. Thus, the rotation matrix R(t) was defined as a function of the nonphysical attitude angles ω, φ, and κ, which vary in time:
$R(t) = R_Z(\kappa(t)) \, R_Y(\varphi(t)) \, R_X(\omega(t))$.
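A minimal sketch of the corrected collinearity equations of this Position-Rotation model is shown below. The rotation matrix follows the RZ RY RX factorization above, the IOP corrections are grouped into two aggregate terms, and all numerical inputs are hypothetical values chosen only to make the example run.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R(t) = Rz(kappa) Ry(phi) Rx(omega), angles in radians."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_point(ground, sensor_pos, angles, f, corr_x=0.0, corr_y=0.0):
    """Projected scanline coordinates; corr_x / corr_y group the IOP correction terms."""
    R = rotation_matrix(*angles)
    d = R @ (np.asarray(ground) - np.asarray(sensor_pos))  # [DX, DY, DZ]
    xs = -f * d[0] / d[2] - corr_x
    ys = -f * d[1] / d[2] - corr_y
    return xs, ys

# Hypothetical ground point, sensor position (m) and attitude angles (rad).
print(project_point(ground=[4.0e6, -4.4e6, -2.4e6],
                    sensor_pos=[4.3e6, -4.7e6, -2.6e6],
                    angles=(0.001, -0.002, 0.4), f=2000.0))
```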
After considering all of the mentioned IOP, significance and correlation analyses were performed between the parameters to find an optimal group of IOP to be used. The IOP significance test was based on the comparison of the estimated parameter value with its standard deviation, obtained from the variance-covariance matrix. When the absolute value of an IOP was lower than its standard deviation, it was considered non-significant and, consequently, was removed from the final mathematical model. This test was performed iteratively, analysing one IOP at a time. The correlation analysis between the IOP aimed to identify dependencies and a possible loss of physical meaning in the values of the parameters. In this research, a correlation was considered strong when the value of the correlation coefficient was higher than 0.75 (75%).
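A sketch of both screening criteria is given below. For brevity it evaluates all parameters at once, whereas the procedure described above removes one non-significant IOP per adjustment run; the parameter names, values and covariance matrix are synthetic.

```python
import numpy as np

def insignificant_iop(values, cov, names):
    """IOP whose absolute value is smaller than their standard deviation."""
    std = np.sqrt(np.diag(cov))
    return [n for n, v, s in zip(names, values, std) if abs(v) < s]

def strong_correlations(cov, names, threshold=0.75):
    """Parameter pairs whose correlation coefficient exceeds the threshold."""
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    return [(names[i], names[j], round(corr[i, j], 2))
            for i in range(len(names)) for j in range(i)
            if abs(corr[i, j]) > threshold]

# Synthetic example: three parameters with a made-up covariance matrix.
names = ["a0_chip1", "b0_chip1", "df"]
values = np.array([0.020, -0.005, 1.20])
cov = np.array([[6.4e-5, 1.0e-5, 0.0],
                [1.0e-5, 8.1e-5, 0.0],
                [0.0,    0.0,    2.9]])
print(insignificant_iop(values, cov, names))   # ['b0_chip1', 'df']
print(strong_correlations(cov, names))         # no pair above 0.75 here
```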

5. Platform Model

Based on the second-order polynomial platform model, Michalis [11] indicated that the first-order coefficient represents the velocity of the satellite on the reference axis and, similarly, the second-order coefficient represents the acceleration on the same axis. This platform model is called the UCL (University College of London) Kepler model. The components of the acceleration are calculated from the components of the position, using the mathematical formulation of the two-body problem [15]. The components of the sensor PC position on the satellite are calculated using the theory of Uniformly Accelerated Motion, as shown in Equations (28)–(30).
$X_s(t) = X_0 + u_x t - \frac{GM \, X_0 \, t^2}{2\,(X_0^2 + Y_0^2 + Z_0^2)^{3/2}}$, (28)
$Y_s(t) = Y_0 + u_y t - \frac{GM \, Y_0 \, t^2}{2\,(X_0^2 + Y_0^2 + Z_0^2)^{3/2}}$, (29)
$Z_s(t) = Z_0 + u_z t - \frac{GM \, Z_0 \, t^2}{2\,(X_0^2 + Y_0^2 + Z_0^2)^{3/2}}$, (30)
where X0, Y0, Z0 and ux, uy, uz are, respectively, the position and velocity components of the sensor PC at the time at which the first line of the image was obtained; t is the acquisition time of an image line; and GM is the standard gravitational parameter.
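A minimal sketch of Equations (28)-(30) is shown below; the state vector and line acquisition time used in the example are illustrative, not the ALOS ephemeris used in the paper.

```python
import numpy as np

GM = 3.986004415e14  # standard gravitational parameter (m^3 s^-2)

def ucl_kepler_gcrs(p0, v0, t):
    """Sensor PC position at time t from the first-line state vector (Eqs. 28-30)."""
    p0 = np.asarray(p0, dtype=float)
    v0 = np.asarray(v0, dtype=float)
    acc = -GM * p0 / np.linalg.norm(p0) ** 3     # two-body acceleration evaluated at p0
    return p0 + v0 * t + 0.5 * acc * t ** 2      # uniformly accelerated motion

# Illustrative state vector of the first image line (m, m/s) and a 3 s line time.
print(ucl_kepler_gcrs([3.7e6, -4.5e6, 3.2e6], [2.0e3, 4.5e3, 5.0e3], t=3.0))
```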
The rotation angles of the sensor may be considered constant during the image acquisition time [16] or can be propagated by polynomials [8,9,10,12], depending on the satellite’s motion characteristics. In this research, the rotation angles ω and φ were considered invariant since the image acquisition occurred in approximately 6 s and the attitude control of the ALOS has sufficient accuracy [17]. However, the κ angle was considered variable, being modelled by a second-order polynomial, as shown in Equations (31)–(33). This is due to the yaw angle steering operation, which is intended to continuously modify and correct the satellite yaw attitude according to the orbit latitude argument to compensate for the effects of the Earth’s rotation on the sensor image data [17] (crab movement).
$\omega = \omega_0$, (31)
$\varphi = \varphi_0$, (32)
$\kappa = \kappa_0 + d_1 t + d_2 t^2$, (33)
where ω0, φ0, and κ0 are the orientation angles at the first line of each image, and d1, d2 are the polynomial coefficients of the κ variation.
An important issue in the use of this platform model is that the object space coordinates of the control and check points must be referenced to the GCRS, due to the mathematical formulation of the two-body problem in the calculation of the acceleration. Normally, the coordinates of the points collected in the object space are referred to a TRS. To avoid the transformation of coordinates from the TRS to the GCRS, the platform model composed of Equations (28)–(30) was adapted. In this transformation, precession, nutation and polar motion must be taken into consideration in addition to the Earth's rotation [15].
However, according to [18], over a short period of orbit propagation, the effects of precession, nutation and polar motion can be disregarded. Since the PRISM images with 16,000 lines are formed in approximately 6 s, only the influence of the Earth's rotational movement was added to the equations of the two-body problem. Thus, the UCL platform model adapted for the use of coordinates referenced to a TRS is defined as follows:
$X_s(t) = X_0 + u_x t + \frac{1}{2}\left[ -\frac{GM \, X_0}{r^3} + \omega_t^2 X_0 + 2\,\omega_t u_y \right] t^2$,
$Y_s(t) = Y_0 + u_y t + \frac{1}{2}\left[ -\frac{GM \, Y_0}{r^3} + \omega_t^2 Y_0 - 2\,\omega_t u_x \right] t^2$,
$Z_s(t) = Z_0 + u_z t - \frac{GM \, Z_0}{2\,r^3} t^2$,
with:
$r = \sqrt{X_0^2 + Y_0^2 + Z_0^2}$,
where ωt is the magnitude of the Earth’s rotational angular velocity.
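The adapted model can be sketched as follows; the centrifugal and Coriolis-type terms mirror the equations above, and the state vector used in the example is again an illustrative placeholder.

```python
import numpy as np

GM = 3.986004415e14      # m^3 s^-2
OMEGA_T = 7.292115e-5    # Earth rotation rate (rad/s)

def ucl_kepler_trs(p0, v0, t):
    """Sensor PC position at time t in Earth-fixed (TRS) coordinates."""
    X0, Y0, Z0 = p0
    ux, uy, uz = v0
    r3 = (X0**2 + Y0**2 + Z0**2) ** 1.5
    ax = -GM * X0 / r3 + OMEGA_T**2 * X0 + 2.0 * OMEGA_T * uy   # X acceleration
    ay = -GM * Y0 / r3 + OMEGA_T**2 * Y0 - 2.0 * OMEGA_T * ux   # Y acceleration
    az = -GM * Z0 / r3                                          # Z acceleration
    return (X0 + ux * t + 0.5 * ax * t**2,
            Y0 + uy * t + 0.5 * ay * t**2,
            Z0 + uz * t + 0.5 * az * t**2)

# Illustrative ITRF-like state vector, not the ALOS ephemeris of the paper.
print(ucl_kepler_trs((3.7e6, -4.5e6, 3.2e6), (2.0e3, 4.5e3, 5.0e3), t=3.0))
```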
The advantage of using the UCL Kepler model instead of models that use polynomials is the reduction in the number of EOP unknowns in each image triplet. For the ALOS satellite, the components of position and velocity from GPS, XGPS, YGPS, ZGPS, uxGPS, uyGPS, and uzGPS, are available in the .SUP files (ancillary 8), with a sampling interval of 60 s for the day on which the image was obtained [1]. The data are provided by the GPS receiver on board the satellite. To estimate the state vectors referring to the instants of acquisition of the first lines of each image, a spline interpolation was used. In this process, the worst-case error of the spline interpolation, assessed by omitting a central point and using 34 surrounding points, was 6 mm for positions and 7 µm/s for velocities. These interpolation results are in agreement with those obtained by [19,20], who used the Hermite interpolator. However, the components of the position and velocity from GPS carry residual errors XT, YT, ZT, uxT, uyT, and uzT when precise orbit determination is not applied by JAXA. Thus, the components X0, Y0, Z0, ux, uy and uz were obtained by:
$X_0 = X_{GPS} + X_T$,
$Y_0 = Y_{GPS} + Y_T$,
$Z_0 = Z_{GPS} + Z_T$,
$u_x = u_{xGPS} + u_{xT}$,
$u_y = u_{yGPS} + u_{yT}$,
$u_z = u_{zGPS} + u_{zT}$.
The values of XGPS, YGPS, ZGPS, uxGPS, uyGPS, and uzGPS are fixed in the estimation process by least squares adjustment. To avoid strong correlations with the parameters related to the systematic changes of CCD chips positions in the focal planes, the parameters XT, YT, ZT, uxT, uyT, and uzT received weighted constraints in the least squares adjustment. As mentioned above, the satellite attitude data were not available for the images used in this research. Therefore, the EOP ω0, φ0, κ0, d1 and d2 did not receive weighted constraints. To identify the strong dependencies between EOP and IOP and avoid inaccurate results or the loss of the physical meaning of parameters, correlation analyses were performed using values obtained from the covariance matrix. Similarly, the correlation was considered strong when the value of the correlation coefficient was higher than 0.75 or 75%.
Since the state vectors are referenced to ITRF97, the value of ωt used was 7,292,115 × 10⁻¹¹ rad/s. The value of GM used was the one indicated in the SUP file, 3.986004415 × 10¹⁴ m³ s⁻².
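The interpolation of the 60 s ephemeris to the epoch of the first image line can be sketched with a cubic spline, for example with SciPy; the ephemeris below is synthetic and the interpolation epoch is hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic 60 s ephemeris: epochs (s) and position/velocity components (m, m/s).
t_eph = np.arange(0.0, 601.0, 60.0)
pos_eph = np.column_stack([3.7e6 + 2.0e3 * t_eph,
                           -4.5e6 + 4.5e3 * t_eph,
                           3.2e6 + 5.0e3 * t_eph])
vel_eph = np.column_stack([np.full_like(t_eph, 2.0e3),
                           np.full_like(t_eph, 4.5e3),
                           np.full_like(t_eph, 5.0e3)])

pos_spline = CubicSpline(t_eph, pos_eph, axis=0)
vel_spline = CubicSpline(t_eph, vel_eph, axis=0)

t_first_line = 123.4                      # hypothetical acquisition epoch (s)
p_gps = pos_spline(t_first_line)          # interpolated XGPS, YGPS, ZGPS
v_gps = vel_spline(t_first_line)          # interpolated uxGPS, uyGPS, uzGPS
print(p_gps, v_gps)
```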

6. Test Field and Experiments

In this research, two neighbouring PRISM triplets at processing level 1B1 were used; both were obtained on the same path on 20 November 2008. The test field covered by the images included the city of Presidente Prudente and adjacent regions in the state of São Paulo, Brazil. Figure 10 shows the location of the state of São Paulo in Brazil, the location of the test field within the state of São Paulo, and the areas covered by the triplets used in this study, which were called triplets n.1 and n.2.
In each triplet, forty tie points were collected and distributed homogeneously. An example is shown in Figure 11, with the distribution of these points in triplet n.1. All points were manually collected in the three images of each triplet; i.e., all points have three image measurements.
The coordinates of the ground control and check points were extracted from orthophotos with 1 m GSD and from Digital Terrain Models (DTM) with 5 m GSD generated from aerial images. The positional accuracy of the orthophotos and of the altimetric information extracted from the DTM are compatible, respectively, with the 1:2000 and 1:5000 scales, class A of the Brazilian Planimetric Cartographic Accuracy Standard. This means that 90% of the points present planimetric errors smaller than 1 m and altimetric errors smaller than 2.5 m, respectively. In the area covered by triplet n.1, 22 ground control points and 20 check points were collected. In the area covered by triplet n.2, 21 ground control points and 24 check points were collected. In Figure 12a,b, the distributions of the control and check points in the areas covered by triplets n.1 and n.2 are shown, respectively.
In Table 3, the numbers of ground control points and check points on each CCD chip for the three cameras of triplets n.1 and n.2 are shown.
To facilitate the measurement of ground control and check points on the images and orthophotos, centroids of geometric entities were used, as presented in [21]. The geometric entities used were buildings, soccer fields, courts, or other anthropic structures. As an example, in Figure 13a, the image space coordinates of point 16 in the PRISM images were obtained from the coordinates of points 16_1, 16_2, 16_3 and 16_4. In Figure 13b, point 16 obtained in the same way in the object space, that is, in the orthophotos, is shown. After estimating the planimetric coordinates of the centroid points, the altimetric coordinates were extracted from the DTM of the region.
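A sketch of the centroid computation is shown below; the corner coordinates are invented for illustration and do not correspond to point 16 of Figure 13.

```python
import numpy as np

# Corner measurements (column, line) of a well-defined entity in the PRISM image.
corners_image = np.array([[5321.2, 8842.7],   # e.g. 16_1
                          [5346.8, 8843.1],   # e.g. 16_2
                          [5347.0, 8868.9],   # e.g. 16_3
                          [5321.5, 8868.4]])  # e.g. 16_4

centroid_image = corners_image.mean(axis=0)   # point used as control/check point
print(centroid_image)
```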
In this research, four experiments were carried out using triplets n.1 and n.2. In experiments 1 and 2, the bundle adjustment trials of triplet n.1 with and without the estimation of IOP were performed. To analyse the quality of the obtained IOP from experiment 2, in experiment 4 they were used to perform the bundle adjustment of triplet n.2. In experiment 3, the bundle adjustment of triplet n.2 was performed without the use of the IOP estimated in experiment 2. The results obtained from experiments 3 and 4 are compared and discussed in the next section. In Table 4, the configuration of each experiment is shown.
In experiment 2, in addition to the IOP correlation analysis, the correlations between IOP and EOP were estimated. The procedures for the analyses were previously mentioned in Section 4 and Section 5. To better demonstrate the steps of the proposed methodology in experiment 2, a workflow is shown in Figure 14.

7. Results and Discussion

7.1. Obtained Results from the Experiments of Triplet n.1

As mentioned, two experiments were performed with triplet n.1. The first one was the conventional bundle adjustment of the triplet. In this experiment, the images' EOP and the 3D coordinates of the control, tie and check points were computed. In the second one, the IOP were also estimated. As mentioned in Section 4 and indicated in Figure 14, the analysis of IOP significance was performed iteratively, analysing one IOP per run. After fourteen iterations, a reduced group of significant IOP was defined. The values of the estimated significant IOP and their respective standard deviations are shown in Table 5.
As can be seen in Table 5, for the three cameras, the parameters K1 and K2 and all parameters a1 and b1 for all CCD chips were not significant. This occurred because the values of their standard deviations were greater than their own values. Consequently, they were not considered in the bundle adjustment, and there was no change in planimetric and altimetric accuracy. With the exception of the IOP δ_chip2 in the nadir camera, all of the δ parameters of the CCD bending effect were insignificant.
To investigate the reduction of the functional mathematical models, the correlations between the IOP were analysed. In Table 6, the values of correlation coefficients are shown.
As can be verified in Table 6, in the backward camera there was one case of strong correlation, between the IOP a0_chip1 and a0_chip4. In the nadir camera, there were two cases of strong correlation: between b0_chip3 and b0_chip4, and between δ_chip2 and a0_chip1. In the forward camera, all IOP related to systematic changes in the CCD chip positions in the focal plane were strongly correlated with each other. After analysing the reduced model, it was concluded that only the IOP δ_chip2 could be ignored without significant impacts on the planialtimetric accuracy of the bundle adjustment. After this simplification, the values of the IOP and their precisions, as well as the correlation values, did not change. The strong correlations between some IOP, as previously mentioned, can indicate a possible loss of physical meaning of the parameters when estimating them by this technique with bundle adjustment.
Check points were used to verify the obtained accuracies in the performed experiments. The 3D coordinates of check points, as extracted from the orthophotos and DTM, were compared to the 3D coordinates estimated with bundle adjustment. The values of the mean and root mean square errors (RMSE) of discrepancies were calculated. All discrepancies were calculated in the Local Geodetic System (LGS). In Table 7, these values from experiments 1 and 2 are shown.
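The statistics of Table 7 follow the usual definitions of mean and RMSE per component; a short sketch with synthetic discrepancy values (not the experiment results) is given below.

```python
import numpy as np

# Synthetic check-point coordinates in the Local Geodetic System (m).
reference = np.array([[100.0, 200.0, 50.0],
                      [130.0, 240.0, 55.0],
                      [160.0, 180.0, 47.0]])
adjusted  = np.array([[101.2, 198.9, 53.4],
                      [129.1, 241.5, 51.8],
                      [158.7, 181.2, 49.9]])

disc = reference - adjusted                      # discrepancies per component
mean = disc.mean(axis=0)
rmse = np.sqrt((disc ** 2).mean(axis=0))
print("mean (m):", mean)
print("RMSE (m):", rmse)
```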
To analyse the planimetric and altimetric discrepancies in experiments 1 and 2, they were plotted and are presented in Figure 15 and Figure 16. The expected planimetric accuracy of 1 GSD was adopted as a reference, and this value is indicated with a circle in the graph.
As can be seen in Figure 15a,b, the planimetric accuracies obtained from experiment 2, were significantly improved. The number of planimetric discrepancies less than 2.5 m was three times greater than that obtained from experiment 1, although a small tendency was found in the Y discrepancies. Comparing the altimetric discrepancies in Figure 16a,b, the discrepancies from experiment 2 had a better distribution than those from experiment 1, although with a small tendency towards negative values.
To complement the graphical analyses, the Shapiro-Wilk test was applied to verify the normality of the discrepancy samples of each component in the two experiments, considering the expected planimetric accuracy of one GSD. In experiment 1, the null hypothesis of normality was not rejected only for the XL and ZL components at the 95% confidence level. In experiment 2, the null hypothesis of normality was not rejected for any of the three components at the same confidence level.
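The normality check can be reproduced with the Shapiro-Wilk test available in SciPy; the sample below is randomly generated and only illustrates the decision rule at the 95% confidence level.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
discrepancies_y = rng.normal(loc=-0.4, scale=1.2, size=20)  # synthetic YL discrepancies

stat, p_value = shapiro(discrepancies_y)
normality_not_rejected = p_value > 0.05   # decision at the 95% confidence level
print(stat, p_value, normality_not_rejected)
```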
Considering the RMSE values of the 3D discrepancies from experiments 1 and 2 in Table 7, it can be seen that the estimation of the IOP improved the accuracies by 0.24 m, 0.66 m, and 1.36 m in the XL, YL and ZL components, respectively. The improvement in planimetric accuracy was 0.65 m, as can be seen in the graph in Figure 17.
The strong dependencies between IOP and EOP were also analysed. As shown in Table 8, only in the backward camera were the IOP a0_chip1 and a0_chip4 strongly correlated with the EOP κ0.
These strong correlations between IOP and EOP indicate a probable dependency and, thus, a loss of physical meaning of the a0_chip1 and a0_chip4 parameters. This result suggests that the IOP computed in experiment 2 would, in principle, be suitable for use only with triplet n.1. Even so, this set of IOP was used to perform the bundle adjustment of triplet n.2.

7.2. Obtained Results from the Experiments of Triplet n.2

Using triplet n.2, two experiments were performed. In the first one, that is, in experiment 3, the bundle adjustment of triplet n.2 was performed without using the IOP estimated in experiment 2. In the second, that is, in experiment 4, the IOP estimated in experiment 2 were used to perform the bundle adjustment of the triplet n.2. The objective of this experiment was to investigate the usability of the IOP estimated with a triplet in the bundle adjustment of another triplet.
As performed in the previous experiments, the 3D discrepancies of check points were computed. The mean and RMSE values of the 3D discrepancies were also calculated. In Table 9, the obtained values for experiments 3 and 4 are shown.
As shown in Figure 18a,b, the planimetric discrepancies of experiment 4 show a smaller systematic trend than those of experiment 3. In experiment 3, most of the discrepancies fall in the south quadrant. Additionally, in experiment 4, the number of planimetric discrepancies smaller than 2.5 m is three times greater than in experiment 3, showing better planimetric accuracy when the IOP from experiment 2 were used, despite the IOP a0_chip1 and a0_chip4 of the backward camera being correlated with the EOP κ0.
As shown in Figure 19a,b, the altimetric discrepancies from experiment 4 are distributed without trends. In contrast, the altimetric discrepancies from experiment 3 have a strong trend towards positive values. Similar to planimetric accuracy, the altimetric accuracy was also improved when the IOP from experiment 2 were used.
Using Shapiro-Wilk tests to check the normality of the 3D discrepancy samples, the null hypothesis of normality was not rejected for any of the XL, YL and ZL components at the 95% confidence level. As shown in Table 9, the bundle adjustment using the IOP estimated in experiment 2 showed improvements in accuracy. The improvements were approximately 1.44 m and 1.75 m in the XL and ZL components, respectively. In the YL component, the accuracies were nearly equal, with a difference of 3 cm. In Figure 20, it can be observed that the planimetric accuracy was improved when the estimated IOP from experiment 2 were used in the bundle adjustment procedure.

8. Discussion

In light of the results obtained in the experiments, a synthesis can be given. Regarding the accuracy, in the bundle adjustment with IOP estimation, improvements were observed in comparison to the bundle adjustment without IOP estimation, mainly in the altimetric component. While the resulting planimetric accuracy improved by 0.65 m, the altimetric accuracy improved by 1.36 m. However, it is important to highlight that the measurements in the image space of the control and check points used in the bundle adjustment were refined by using the centroids methodology, presenting a quality better than 1 pixel. It is also worth mentioning that the magnitudes of the planimetric accuracies found in this research are close to those found in [6,7], which used different platform models and different IOP sets from those used here. In contrast, the magnitudes of the altimetric RMSE values found in this research were larger, that is, less accurate than those found in the cited studies.
Two issues that were not addressed in the works of [6,7] are the analysis of the significance of the IOP and of the correlations between them. These analyses were performed with the objective of assessing the importance of each parameter, a possible loss of its physical meaning, and a possible simplification of the functional mathematical model. In the significance analysis, it was observed that the effects of the symmetric radial distortion of the lens systems, of the rotations of the CCD chips in the focal plane and of the scale variation in the ys direction were not significant. Consequently, these parameters could be ignored. The line bending parameter was only significant for chip 2 of the nadir camera. The effect of the systematic change in focal length was significant in all cameras. The parameters related to systematic changes in the CCD chip positions in the focal planes did not present a homogeneous behaviour. After analysing the correlations between the significant IOP, it was verified that only the IOP δ_chip2 could be ignored without a significant change in the planimetric and altimetric accuracies.
An issue that was not analysed in [6,7] was the application of the IOP estimated with one triplet in the bundle adjustment of a different triplet. After the correlation analyses between the IOP and EOP, strong correlations were detected between the IOP a0_chip1 and a0_chip4 of the backward camera and the EOP κ0, indicating a possible loss of physical significance of the IOP. Even so, when IOP were applied to the bundle adjustment of triplet n.2, there were planimetric and altimetric improvements compared to bundle adjustment without the use of IOP.
It should be noted that even though this satellite is out of operation, the images obtained by it remain in archives and are still being used in several applications. In addition, the methodologies investigated in this article can be applied to other linear array pushbroom sensors with similar characteristics. As an example, we can cite the images obtained by the ZY-3 Chinese satellite.

9. Conclusions and Recommendations

This study investigated the importance of estimating the IOP to compute 3D coordinates by photogrammetric intersection with bundle adjustment using a PRISM image triplet. Additionally, the study proposed improvements to the platform model developed by Michalis [11] so that it can use coordinates referenced to the TRS ITRF97. The bundle adjustment with IOP estimation improved the quality of the 3D photogrammetric intersection. Considering the RMSE of the 3D discrepancies at the check points, the planimetric and altimetric accuracies improved by 0.65 m and 1.36 m, respectively. After the bundle adjustment with IOP estimation, a small trend towards the south was observed in the planimetric discrepancies. However, the sample of discrepancies in the Y component was considered normal by the Shapiro-Wilk test, considering the expected planimetric accuracy of one GSD.
The coordinates of the ground control and check points were obtained from the determination of centroids of geometric entities, such as buildings, soccer fields and courts. This procedure improves the precision of the point measurements, both in the image space and in the object space. This can be confirmed since, in all experiments, more than 97% of the residual values of the observations after bundle adjustment were lower than 1 pixel.
In the bundle adjustment with IOP estimation, only part of the IOP related to systematic changes in CCD chip positions in the focal planes and the IOP related to changes in focal lengths was considered significant. The effects of the rotation and bending of CCD chips, the symmetric radial distortion of the lens and the scale variation in the ys direction could be discarded without affecting the accuracy results.
The practical analysis of the usability of the estimated IOP from an independent triplet to be used in other applications was performed. The IOP estimated from triplet n.1 were applied to the bundle adjustment of triplet n.2. Despite the strong correlations between the IOP a0_chip1 and a0_chip4, and the EOP κ0 in the backward camera, the result of using these IOP was satisfactory. The improvements in the planimetric and altimetric accuracies were 0.40 m and 1.75 m, respectively.
For future studies, it is recommended to verify the estimation of the IOP of the PRISM sensor by using the polynomial model presented in [7] in conjunction with the adapted UCL platform model; to estimate the IOP of other sensors using the methodology applied in this study; and to analyse the use of the rigorous collinearity and coplanarity models with ground control points and straight lines.

Acknowledgments

The authors thank the company Topocart for providing the orthophotos and DTM used for the extraction of control and check points, and the Department of Cartography of UNESP for the use of the PRISM images.

Author Contributions

Tiago Rodrigues and Edson Mitishita conceived and designed the study; Tiago Rodrigues performed the data processing and analysis, and wrote the manuscript. Edson Mitishita, Luiz Danilo and Antonio Tommaselli contributed to results interpretation and manuscript writing.

Conflicts of Interest

The authors declare no conflict of interest.

List of Mathematical Symbols Used for the Interior Orientation Parameters

The following mathematical symbols are used for the interior orientation parameters in this manuscript:
nC: Number of columns in a CCD chip
PS: Pixel dimension in the CCD chip
f: Focal length
x0, y0: PP coordinates
K1, K2, K3: Coefficients of the symmetric radial lens distortion
Δf: Change in the focal length
sx, sy: Optical system scale variations in the xs and ys directions
dpx, dpy: Change of pixel dimensions in the xs and ys directions
dx, dy: Two-dimensional CCD chip displacements with respect to the PP
θ: Rotation of the CCD chip in the focal plane with respect to its nominal position
a0_chipn, b0_chipn: Change of the CCD chip positions, for a chip n, in the xs and ys directions
a1_chipn: Parameter resulting from the grouping of the CCD chip rotation and scale variation effects, for a chip n, in the ys direction
b1_chipn: Parameter resulting from the correction of the CCD chip rotation effect, for a chip n, in the xs direction
δ_chipn: Central angle of the line bending effect in the focal plane, for a chip n

References

1. ALOS User Handbook. Available online: http://www.eorc.jaxa.jp/ALOS/en/doc/alos_userhb_en.pdf (accessed on 15 December 2012).
2. Baltsavias, E.P.; Zhang, L.; Eisenbeiss, H. DSM generation and interior orientation determination of IKONOS images using a Testfield in Switzerland. Photogramm. Fernerkund. Geoinf. 2006, 1, 41–54.
3. Jacobsen, K. Geometry of satellite images—Calibration and mathematical models. In Proceedings of the Korean Society of Remote Sensing—ISPRS International Conference, Jeju, Korea, 12 October 2005; pp. 182–185.
4. Tadono, T.; Shimada, M.; Iwata, T.; Takaku, J. Results of Calibration and Validation of ALOS Optical Sensors, and Their Accuracy Assessments. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23 July 2007; pp. 3602–3605.
5. Tadono, T.; Iwata, T.; Shimada, M.; Takaku, J. Updated results of calibration and validation of PRISM onboard ALOS. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25 July 2010; pp. 36–38.
6. Kocaman, S.A. Sensor Modeling and Validation for Linear Array Aerial and Satellite Imagery. Ph.D. Thesis, Institute of Geodesy and Photogrammetry, Eidgenössische Technische Hochschule Zürich, Zürich, Switzerland, 2008.
7. Weser, T.; Rottensteiner, F.; Willneff, J.; Poon, J.; Fraser, C.S. Development and testing of a generic sensor model for pushbroom satellite imagery. Photogramm. Rec. 2008, 23, 255–274.
8. Rodrigues, T.L.; Ferreira, L.D.D. Aplicação do movimento kepleriano na orientação de imagens HRC—CBERS 2B. Bol. Ciênc. Geod. 2013, 19, 114–134.
9. Tommaselli, A.M.; Marcato Junior, J. Bundle block adjustment of CBERS-2B HRC imagery combining control points and lines. Photogramm. Fernerkund. Geoinf. 2012, 2012, 129–139.
10. Marcato Junior, J.; Tommaselli, A.M.G. Exterior orientation of CBERS-2B imagery using multi-feature control and orbital data. ISPRS J. Photogramm. Remote Sens. 2013, 79, 219–225.
11. Michalis, P. Generic Rigorous Model for along Track Stereo Satellite Sensors. Ph.D. Thesis, University College London, London, UK, 2005.
12. Dowman, I.; Michalis, P.; Li, L. Analysis of Urban Landscape Using Multi Sensor Data. In Proceedings of the 4th ALOS PI Symposium, Tokyo, Japan, 11 November 2010.
13. Poli, D. Modelling of Spaceborne Linear Array Sensors. Ph.D. Thesis, Institute of Geodesy and Photogrammetry, Eidgenössische Technische Hochschule Zürich, Zürich, Switzerland, 2005.
14. Kim, T.; Dowman, I. Comparison of two physical sensor models for satellite images: Position-Rotation model and Orbit-Attitude model. Photogramm. Rec. 2006, 21, 110–123.
15. Seeber, G. Satellite Geodesy, 2nd ed.; Walter de Gruyter: Berlin, Germany, 2003.
16. Michalis, P.; Dowman, I. A Generic Model for along Track Stereo Sensors Using Rigorous Orbit Mechanics. Photogramm. Eng. Remote Sens. 2008, 74, 303–309.
17. Satoru, W.; Akihiro, H. Development of the Earth Observation Satellite "DAICHI" (ALOS). NEC Tech. J. 2011, 6, 62–66.
18. Leick, A. Satellite Surveying, 3rd ed.; John Wiley: Hoboken, NJ, USA, 2004.
19. Sandwell, D.T.; Myer, D.; Mellors, R.; Shimada, M.; Brooks, B.; Foster, J. Accuracy and Resolution of ALOS Interferometry: Vector Deformation Maps of the Father's Day Intrusion at Kilauea. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3524–3534.
20. Schneider, M.; Lehner, M.; Müller, R.; Reinartz, P. Stereo Evaluation of ALOS/PRISM Data on ESA-AO Test Sites—First DLR Results. In Proceedings of the XXI ISPRS Congress—Technical Commission I, Beijing, China, 3–11 July 2008; pp. 739–744.
21. Mitishita, E.A.; Habib, A.; Centeno, J.; Machado, A.; Lay, J.; Wong, C. Photogrammetric and lidar data integration using the centroid of a rectangular building roof as a control point. Photogramm. Rec. 2008, 23, 19–35.
Figure 1. Image Reference System (a) and Scanline Reference System (b).
Figure 2. Scanline and camera reference systems.
Figure 3. PRISM sensor cameras (a) and PRISM triplet observation mode (b) [1].
Figure 4. Focal plane arrangements of the ALOS/PRISM forward camera (a), nadir camera (b) and backward camera (c) [6].
Figure 5. Example of image formation for the PRISM nadir camera [6].
Figure 6. Example of image input for the PRISM backward camera.
Figure 7. Example of the dataset of CCD chip 3 in the backward and nadir cameras, containing translation values from the CCD chip centre to PP, the cameras' focal lengths, the sizes of pixels in the CCD chips, and the PP coordinates.
Figure 8. Effects of IOP related to the optical system: (a) change in the focal length, (b) scale variation and (c) symmetric radial lens distortion.
Figure 9. Effects of IOP related to the CCD chips in the focal plane: (a) change of pixel dimension in the xs and ys directions; (b) CCD chip rotation; (c) CCD chip shift; and (d) CCD chip bending.
Figure 10. Location of the state of São Paulo in Brazil (a), location of the test field in the state of São Paulo (b), and the triplets n.1 and n.2 (c).
Figure 11. Distribution of tie points in triplet n.1.
Figure 12. Distribution of ground control points and check points in triplets n.1 (a) and n.2 (b).
Figure 13. Examples of a centroid in a PRISM image (a) and in an orthophoto (b).
Figure 14. Workflow of the proposed methodology of experiment 2.
Figure 15. Planimetric discrepancies in check points from experiments 1 (a) and 2 (b).
Figure 16. Altimetric discrepancies in the check points from experiments 1 (a) and 2 (b).
Figure 17. Graphical representation of the planimetric and altimetric RMSE in experiments 1 and 2.
Figure 18. Planimetric discrepancies in check points from experiments 3 (a) and 4 (b).
Figure 19. Altimetric discrepancies in check points obtained in experiments 3 (a) and 4 (b).
Figure 20. Graphical representation of planimetric and altimetric RMSE from experiments 3 and 4.
Table 1. Focal lengths of the backward, nadir and forward cameras.
Backward camera: 1999.8762715 mm
Nadir camera: 1999.8630195 mm
Forward camera: 2000.0645632 mm
Table 2. ALOS PRISM image processing levels and characteristics.
Level 1A: PRISM raw data extracted from the Level 0 data, expanded and generated lines. Ancillary information such as radiometric information, etc., required for processing.
Level 1B1: Data with radiometric correction performed and absolute calibration coefficient added.
Level 1B2G: Data with geometric correction applied to Level 1B1 data. Geocoded images oriented to north.
Level 1B2R: Data with geometric correction applied to Level 1B1 data. Georeferenced images using orbital data.
Table 3. Quantities of ground control points and check points on each CCD chip.
Backward camera:
CCD chip | Triplet n.1 GCP | Triplet n.1 CP | Triplet n.2 GCP | Triplet n.2 CP
1 | 6 | 0 | 7 | 3
2 | 3 | 6 | 0 | 8
3 | 9 | 10 | 11 | 7
4 | 4 | 4 | 3 | 6
Nadir camera:
CCD chip | Triplet n.1 GCP | Triplet n.1 CP | Triplet n.2 GCP | Triplet n.2 CP
1 | 2 | 4 | 3 | 3
2 | 11 | 2 | 7 | 12
3 | 0 | 11 | 4 | 6
4 | 1 | 3 | 1 | 3
Forward camera:
CCD chip | Triplet n.1 GCP | Triplet n.1 CP | Triplet n.2 GCP | Triplet n.2 CP
1 | 4 | 4 | 6 | 3
2 | 11 | 4 | 7 | 12
3 | 0 | 11 | 4 | 8
4 | 2 | 1 | 4 | 3
Table 4. Main characteristics of the four experiments.
Experiment 1: Bundle adjustment of triplet n.1 images without estimation of IOP.
Experiment 2: Bundle adjustment of triplet n.1 images with estimation of IOP.
Experiment 3: Bundle adjustment of triplet n.2 images without the use of the IOP estimated in experiment 2.
Experiment 4: Bundle adjustment of triplet n.2 images using the IOP estimated in experiment 2.
Table 5. Estimated significant IOP values of the backward, nadir and forward cameras and their standard deviations from experiment 2.
Backward camera:
a0_chip1, σ (mm): −0.0386, 0.0167
b0_chip1, σ (mm): 0.0688, 0.0160
b0_chip3, σ (mm): −0.0120, 0.0058
a0_chip4, σ (mm): 0.0145, 0.0097
Δf, σ (mm): 2.1501, 1.7158
Nadir camera:
a0_chip1, σ (mm): 0.0281, 0.0077
b0_chip3, σ (mm): 0.0247, 0.0129
b0_chip4, σ (mm): 0.0637, 0.0289
δ_chip2, σ (rad): 3.96 × 10⁻⁶, 1.64 × 10⁻⁶
Δf, σ (mm): 2.4863, 1.3128
Forward camera:
a0_chip3, σ (mm): 0.0506, 0.0087
b0_chip3, σ (mm): −0.0301, 0.0127
b0_chip5, σ (mm): 0.0327, 0.0142
b0_chip6, σ (mm): 0.0642, 0.0272
Δf, σ (mm): 1.7388, 1.2220
Table 6. Correlation between the IOP in experiment 2.
Backward camera:
         | a0_chip1 | b0_chip1 | b0_chip3 | a0_chip4 | Δf
a0_chip1 | 1.00
b0_chip1 | 0.16 | 1.00
b0_chip3 | −0.19 | 0.22 | 1.00
a0_chip4 | −0.95 | −0.16 | 0.17 | 1.00
Δf       | −0.35 | 0.42 | 0.37 | 0.34 | 1.00
Nadir camera:
         | a0_chip1 | b0_chip3 | b0_chip4 | δ_chip2 | Δf
a0_chip1 | 1.00
b0_chip3 | 0.41 | 1.00
b0_chip4 | 0.54 | 0.81 | 1.00
δ_chip2  | 0.80 | 0.28 | 0.37 | 1.00
Δf       | 0.00 | 0.14 | 0.18 | −0.02 | 1.00
Forward camera:
         | a0_chip3 | b0_chip3 | b0_chip5 | b0_chip6 | Δf
a0_chip3 | 1.00
b0_chip3 | −0.88 | 1.00
b0_chip5 | −0.76 | 0.89 | 1.00
b0_chip6 | −0.81 | 0.87 | 0.89 | 1.00
Δf       | −0.56 | 0.62 | 0.55 | 0.58 | 1.00
Table 7. Mean and root mean square errors of the discrepancies obtained in experiments 1 and 2.
Experiment 1: ΔXL | ΔYL | ΔZL
mean (m): −0.0776 | −0.0028 | −3.3045
RMSE (m): 1.4808 | 1.8279 | 5.1204
Experiment 2: ΔXL | ΔYL | ΔZL
mean (m): −0.2712 | −0.4271 | −1.5886
RMSE (m): 1.2376 | 1.1683 | 3.7642
Table 8. High correlations between the IOP and EOP in experiment 2.
Backward camera (correlation with κ0):
a0_chip1: 0.92
a0_chip4: −0.95
Table 9. Mean and root mean square errors of the discrepancies obtained in experiments 3 and 4.
Experiment 3: ΔXL | ΔYL | ΔZL
mean (m): −0.2936 | −1.5743 | 1.9034
RMSE (m): 2.8500 | 2.1704 | 5.4814
Experiment 4: ΔXL | ΔYL | ΔZL
mean (m): −0.3160 | 0.5742 | 0.2137
RMSE (m): 1.4102 | 2.1995 | 3.7334
