Article

Bundle Block Adjustment with 3D Natural Cubic Splines

Department of Civil and Environmental Engineering, Seoul National University, 599 Gwanak-ro, Gwanak-gu, Seoul 151-742, Korea
* Author to whom correspondence should be addressed.
Sensors 2009, 9(12), 9629-9665; https://doi.org/10.3390/s91209629
Submission received: 12 October 2009 / Accepted: 15 November 2009 / Published: 2 December 2009
(This article belongs to the Section Remote Sensors)

Abstract
Point-based methods undertaken by experienced human operators are very effective for traditional photogrammetric activities, but they are not appropriate for the autonomous environment of digital photogrammetry. To develop more reliable and accurate techniques, higher-level objects with linear features, accommodating elements other than points, are adopted as an alternative for aerial triangulation. Even though recent advanced algorithms provide accurate and reliable linear feature extraction, using features that can consist of complex curve forms is more difficult than extracting a discrete set of points. Control points, which are the initial input data, and break points, which are the end points of segmented curves, are readily obtained. Employing high-level features increases the feasibility of using geometric information and provides access to analytical solutions suited to advanced computer technology.

1. Introduction

One of the major tasks in digital photogrammetry is to determine the orientation parameters of aerial images quickly and accurately, which involves the two primary steps of interior and exterior orientation. While the original aerial photography provides the interior orientation parameters, the problem remains to determine the exterior orientation with respect to the object coordinate system. Exterior orientation establishes the position of the camera projection center in the ground coordinate system, and the three rotation angles of the camera axis represent the transformation between the image and the object coordinate systems. Exterior orientation parameters (EOPs) for a stereo model consisting of two aerial images can be obtained using relative and absolute orientation. This is a fundamental task in many applications such as surface reconstruction, orthophoto generation, image registration, and object recognition. The EOPs of multiple overlapping aerial images can be computed using a bundle block adjustment. The position and orientation of each exposure station are obtained by bundle block adjustment using collinearity equations that are linearized with respect to the unknown position and orientation in the object space coordinate system.
The program for bundle block adjustment in most softcopy workstations employs point features as the control information. Photogrammetric triangulation using digital photogrammetric workstations is more automated than aerial triangulation using analog instruments because the stereo model can be directly set using analytical triangulation outputs. Bundle block adjustment reduces the cost of field surveying in difficult areas and verifies the accuracy of field observations during the adjustment process. Even though each stereo model requires at least two horizontal and three vertical control points, this method can reduce the number of control points with accurate orientation parameters. EOPs of all the photographs in the target area are determined by the adjustment, which improves the accuracy and reliability of photogrammetric tasks. Because object reconstruction is processed by an intersection employing more than two images, bundle block adjustment provides the redundancy for the intersection geometry and contributes to the elimination of the gross error in the recovery of EOPs.
A stereo model consisting of two images with 12 EOPs is a common orientation unit. The mechanism of object reconstruction from a stereo model is comparable with that of an animal or human visual system. The principal aspects of the human visual system, including its neurophysiology, anatomy, and visual perception, are well described in Schenk [1]. Point-based procedures built on the relationship between point primitives are well developed in traditional photogrammetry: a point measured in one image is identified in the other image. Even for linear features, data for a stereo model in a softcopy workstation are collected as points, so that further application and analysis rely on points as the primary input data units. The coefficients of interior, relative, and absolute orientation are computed from the point relationship. Interior orientation compensates for lens distortion, film shrinkage, scanner error, and atmospheric refraction. Relative orientation makes the stereoscopic view possible, and the relationship between a model coordinate system and an object space coordinate system is reconstructed by absolute orientation. Ground control points (GCPs) are widely employed to compute orientation parameters. Although the use of many GCPs is a time-consuming procedure and inhibits the robust and accurate automation that research into digital photogrammetry aims to achieve, the deployment of computers, storage capacity, photogrammetric software, and digital cameras can reduce the computational and time complexity.
Employing high-level features increases the feasibility of obtaining geometric information and provides a suitable analytical basis for advanced computer technology. With advances in the extraction, segmentation, classification, and recognition of features, the input data for feature-based photogrammetry has expanded, adding redundancy to aerial triangulation applications. Because the identification, formulation, and application of suitable linear features is a crucial procedure for autonomous photogrammetry, higher-order geometric feature-based modeling plays an important role in modern digital photogrammetry. The digital image format is suited to this purpose, especially for feature extraction and measurement, and is useful for precise and rigorous modeling of features from images.

2. Line Photogrammetry

2.1. Overview of Line Photogrammetry

Line photogrammetry refers to applications such as single photo resection, relative orientation, triangulation, image matching, image registration, and surface reconstruction that are implemented using linear features and the correspondence between linear features rather than points. Conjugate interest points such as edge points, corner points, and points on parking lanes work well for determining EOPs with respect to the object space coordinate frame in traditional photogrammetry. The most well-known edge and interest point detectors are the Canny [2], Förstner [3], Harris, also known as the Plessey detector [4], Moravec [5], Prewitt [6], Sobel [7], and SUSAN [8] detectors. The Canny, Prewitt, and Sobel operators are edge detectors, and the Förstner, Harris, and SUSAN operators are corner detectors. Other well-known corner detection algorithms are the Laplacian of Gaussian, the difference of Gaussians, and the determinant of the Hessian. Interest point operators that detect well-defined points, edges, and corners play an important role in automated triangulation and stereo matching. For example, the Harris operator defines a measure of corner strength:
$$H(x, y) = \det(M) - \alpha \,(\operatorname{trace}(M))^2$$
where M is the local structure matrix and α is a parameter with 0 ≤ α ≤ 0.25; a typical default value is 0.04. The gradients in the x and y directions are:
$$g_x = \frac{\partial I}{\partial x}, \qquad g_y = \frac{\partial I}{\partial y}$$
with I as an image. The local structure matrix M is:
$$M = \begin{bmatrix} A & C \\ C & B \end{bmatrix}$$
where $A = g_x^2$, $B = g_y^2$, and $C = g_x g_y$.
A corner is detected when:
$$H(x, y) > H_{thr}$$
where Hthr is the threshold parameter on corner strength. The Harris operator searches points where variations in two orthogonal directions are large using the local autocorrelation function and provides good repeatability under varying rotation, scale, and illumination. The Förstner corner detector is also based on the covariance matrix for the gradient at a target point.
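As a concrete illustration of the measure above, the following minimal NumPy sketch computes the Harris response from image gradients. The function name, the window size, and the box-filter implementation are illustrative assumptions, not details from the original paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(image, alpha=0.04, window=3):
    """Corner strength H = det(M) - alpha * (trace(M))^2 per pixel."""
    gy, gx = np.gradient(image.astype(float))   # first-order derivatives
    A, B, C = gx * gx, gy * gy, gx * gy         # gradient products

    def box_sum(a):
        # Sum each product over a local window to form the entries of
        # the local structure matrix M (zero padding at the borders).
        pad = window // 2
        padded = np.pad(a, pad)
        return sliding_window_view(padded, (window, window)).sum(axis=(2, 3))

    A, B, C = box_sum(A), box_sum(B), box_sum(C)
    return A * B - C * C - alpha * (A + B) ** 2  # det(M) - alpha*trace(M)^2

# A pixel is declared a corner where harris_response(img) > H_thr.
```

In a flat region all gradient products vanish and the response is zero; a threshold on the response then selects corner pixels as in the inequality above.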
Marr [9] proposed the zero-crossing edge detector, which uses second-order rather than first-order directional derivatives. A maximum of the first-order derivative indicates the location of an edge, whereas with second-order derivatives it is the zero crossing that indicates an edge. Physical boundaries of objects are easily detected because gray levels change abruptly at boundaries. Because no single operator is optimal for every edge detection task, criteria must be chosen for each specific application. Matching point features produces a large percentage of match errors because point features are ambiguous and no general analytical solution for point matching has been developed. Because of the geometric information and symbolic meaning of linear features, matching them is more reliable than matching point features in the autonomous environment of digital photogrammetry. As the use of linear features does not require a point-to-point correspondence, the matching of linear features is more flexible than that of points.
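The zero-crossing idea can be sketched as follows. This is a minimal illustration with assumed function names; a practical Marr–Hildreth detector would first smooth with a Gaussian (Laplacian of Gaussian), which is omitted here for brevity.

```python
import numpy as np

def zero_crossing_edges(image):
    """Mark pixels where a discrete second-order derivative (the
    Laplacian) changes sign, i.e., the zero crossings that indicate
    edges in Marr's scheme."""
    img = image.astype(float)
    # 4-neighbor discrete Laplacian; np.roll wraps at the borders,
    # which is acceptable for this illustration.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    edges = np.zeros(img.shape, dtype=bool)
    for axis in (0, 1):
        # A sign change between neighboring Laplacian values marks an edge.
        edges |= lap * np.roll(lap, 1, axis) < 0
    return edges
```

On a step image the Laplacian is positive on one side of the boundary and negative on the other, so the sign change localizes the edge between them.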
A number of researchers have published studies on automatic feature extraction and its application for various photogrammetric tasks: Förstner [10], Hannah [11], Schenk et al. [12], Schenk and Toth [13], Ebner and Ohlhof [14], Ackerman and Tsingas [15], Haala and Vosselman [16], Drewniok and Rohr [17,18], Zahran [19], and Schenk [20]. However, point-based photogrammetry based on manual measurement and the identification of interest points is not compatible with the autonomous environment of digital photogrammetry: it is a labor-intensive interpretation that suffers from occlusion, ambiguity, and a lack of semantic information when compared with robust automation.
Because point features do not provide explicit geometric information, geometrical knowledge is achieved by perceptual organization [21-24]. Perceptual organization derives features and structures from imagery without prior knowledge of the geometric, spectral, or radiometric properties of features and is a required step in object recognition. Perceptual organization is an intermediate level process for various vision tasks such as target-to-background discrimination, object recognition, target cueing, motion-based grouping, surface reconstruction, image interpretation, and change detection. Because objects cannot be distinguished by one gray level pixel, an image must be investigated entirely to obtain perceptual information. The most recent research related to perceptual organization concerns 2D image implementation at the signal, primitive, and structural levels.
In general, grouping or segmentation has the same meaning as perceptual organization in computer vision. Segmentation is typically addressed by two approaches, a model-based method (top-down approach) and a data-based method (bottom-up approach), and many researchers have employed edges and regions in segmentation. In edge-based approaches, edges are treated as general forms of linear features without discontinuities, and in region-based approaches, iterative region growing techniques using seed points are preferred for surface fitting. Model-based methods require domain knowledge for each specific application in a manner similar to the human visual system, whereas data-based methods employ data properties for data recognition in a global fashion. In data-based methods, the same invariant properties in different positions and orientations are combined into the same regions or the same features. One approach alone, however, cannot guarantee consistent quality, so combined approaches are implemented to minimize segmentation errors.
Symbolic representation using distinct points is difficult because interest points contain no explicit information about physical reality. While traditional photogrammetric techniques obtain the camera parameters from the correspondence between 2D and 3D points, a more general and reliable process, such as the adoption of linear features, is required for advanced computer technology. Line photogrammetry is superior in higher-level tasks such as object recognition and automation as compared with point-based photogrammetry, but selection of the correct candidate linear features is a complicated process. The development of algorithms from point- to line-based photogrammetry exploits the advantages of both approaches. The selection of suitable features is easier than the extraction of straight linear features, and the candidate features can be used in higher applications. A reason for developing curve features is that they are precursors to, and fundamental components of, the next-highest features, such as surfaces, areas, and 3D volumes, that consist of free-form linear features. Line-based photogrammetry is most suitable for the development of robust and accurate techniques for automation. If linear features are employed as control features, they provide advantages over points in the automation of aerial triangulation. Photogrammetry based on the manual measurement and identification of conjugate points is less reliable than line-based photogrammetry because it has the limitations of occlusion (visibility), ambiguity (repetitive patterns), and semantic information when considering the need for reliable and effective automation. The manual identification of corresponding entities within two images is a critical bottleneck in the automation of point-based photogrammetric tasks. No knowledge of the point-to-point correspondence is required in line-based photogrammetry.
In addition, point features do not carry information about the scene whereas linear features contain the semantic information related to real object features. Additional information concerning linear features can increase the redundancy of the point system.

2.2. Literature Review

A review of related works begins with those using methods of pose estimation in imagery based on linear features that appear in most man-made objects such as buildings, roads, and parking lots. Over the years, a number of researchers in photogrammetry and computer vision have used line instead of point features; for example, Masry [25], Heikkila [26], Kubik [27], Petsa and Patias [28], Gülch [29], Wiman and Axelsson [30], Chen and Shibasaki [31], Habib [32], Heuvel [33], Tommaselli [34], Vosselman and Veldhuis [35], Förstner [36], Smith and Park [37], Schenk [38], Tangelder et al. [39], and Parian and Gruen [40]. Mulawa and Mikhail [41] originally proved the feasibility of linear features for close-range photogrammetric applications such as space intersection and resection, and relative and absolute orientation. This was the first step in employing linear feature-based methods in close-range photogrammetric applications. Mulawa [42] later developed linear feature-based methods for different sensors.
Whereas straight linear features and conic sections can be represented by unique mathematical expressions, free-form lines in nature cannot be described by simple algebraic equations. Hence, Mikhail and Weerawong [43] used splines and polylines to represent free-form lines as analytical expressions. Tommaselli and Tozzi [44] achieved subpixel accuracy for straight-line parameters using a representation of an infinite line with four degrees of freedom. Many researchers in photogrammetry have described straight lines as infinite lines using minimal representations to reduce the number of unknown parameters; the main consideration in straight-line representation is avoiding singularities. Habib et al. [45] performed a bundle block adjustment using a 3D point set lying on control linear features instead of traditional control points. EOPs were reconstructed hierarchically employing automatic single photo resection (SPR).
Habib et al. [46] summarized linear features extracted from a mobile mapping system, a GIS database, and maps for various photogrammetric applications such as SPR, triangulation, digital camera calibration, image matching, 3D reconstruction, image to image registration, and surface to surface registration. In their work, matched linear feature primitives were utilized in space intersection for the reconstruction of object space features, and linear features in the object space were used as control features in triangulation and digital camera calibration.
Mikhail [47] and Habib et al. [48] accomplished the geometrical modeling and the perspective transformation of linear features within a triangulation process. Linear features were used to recover relative orientation parameters. Habib et al. represented a free-form line in object space by a sequence of 3D points along the object space line.
Schenk [49] extended the concept of aerial triangulation from point features to linear features. The line equation of six dependent parameters replaced the point-based collinearity equation:
$$X = X_A + t\,a, \qquad Y = Y_A + t\,b, \qquad Z = Z_A + t\,c$$
where $t$ is a real variable, $(X_A, Y_A, Z_A)$ is the start point, and $(a, b, c)$ is the direction vector.
The traditional point-based collinearity equation was extended to line features:
$$x_p = -f\,\frac{(X_A + ta - X_C)r_{11} + (Y_A + tb - Y_C)r_{12} + (Z_A + tc - Z_C)r_{13}}{(X_A + ta - X_C)r_{31} + (Y_A + tb - Y_C)r_{32} + (Z_A + tc - Z_C)r_{33}}$$
$$y_p = -f\,\frac{(X_A + ta - X_C)r_{21} + (Y_A + tb - Y_C)r_{22} + (Z_A + tc - Z_C)r_{23}}{(X_A + ta - X_C)r_{31} + (Y_A + tb - Y_C)r_{32} + (Z_A + tc - Z_C)r_{33}}$$
with $x_p, y_p$ as photo coordinates, $f$ the focal length, $(X_C, Y_C, Z_C)$ the camera perspective center, and $r_{ij}$ the elements of the 3D orthogonal rotation matrix. The extended collinearity equation with six parameters was rewritten as a line expression of four parameters $(\phi, \theta, x_0, y_0)$ because a 3D straight line has only four independent parameters. Two constraints are required to solve a common form of the 3D straight-line equations using six parameters determined by two vectors:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} \cos\theta\cos\phi\, x_0 - \sin\phi\, y_0 + \sin\theta\cos\phi\, z \\ \cos\theta\sin\phi\, x_0 + \cos\phi\, y_0 + \sin\theta\sin\phi\, z \\ -\sin\theta\, x_0 + \cos\theta\, z \end{bmatrix}$$
where z is a real variable. The advantage of the 3D straight line using four independent parameters is that it reduces the computation and time complexity in processes such as bundle block adjustment. The collinearity equation as the straight line function of four parameters was developed:
$$x_p = -f\,\frac{(X - X_C)r_{11} + (Y - Y_C)r_{12} + (Z - Z_C)r_{13}}{(X - X_C)r_{31} + (Y - Y_C)r_{32} + (Z - Z_C)r_{33}}, \qquad y_p = -f\,\frac{(X - X_C)r_{21} + (Y - Y_C)r_{22} + (Z - Z_C)r_{23}}{(X - X_C)r_{31} + (Y - Y_C)r_{32} + (Z - Z_C)r_{33}}$$
where X, Y, and Z were defined in equation (7).
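Projecting a point of the parametric line $X = X_A + ta$, $Y = Y_A + tb$, $Z = Z_A + tc$ through the collinearity equations can be sketched as follows. The ω–φ–κ rotation order and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Orthogonal rotation matrix built from the angular EOPs, here
    assumed to compose as R = R_z(kappa) @ R_y(phi) @ R_x(omega)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_line_point(t, start, direction, cam, angles, f):
    """Photo coordinates of the object point X_A + t*(a, b, c) via the
    collinearity form x_p = -f*u/w, y_p = -f*v/w."""
    ground = np.asarray(start, float) + t * np.asarray(direction, float)
    u, v, w = rotation_matrix(*angles).T @ (ground - np.asarray(cam, float))
    return -f * u / w, -f * v / w
```

For a nadir-looking camera with zero rotation angles, the projection reduces to a simple scaling by the flying height, which makes the sketch easy to sanity-check.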
Zalmanson [50] updated EOPs using the correspondence between a parametric control free-form line in object space and its projected 2D free-form line in image space. A hierarchical approach, a modified iterative closest point (ICP) method, was developed to estimate the curve parameters. The projection ray lies on the free-form line, whose one-parameter parametric equation is given below. Besl and McKay [51] employed the ICP algorithm to solve matching problems of point sets, free-form curves, surfaces, and terrain models in 2D and 3D space. In their work, the ICP algorithm was executed without prior knowledge of the correspondence between points. The ICP method influenced Zalmanson's dissertation on recovering EOPs using 3D free-form lines in photogrammetry. A Euclidean 3D transformation was then employed in the search for the closest entity in the geometric data set. Rabbani et al. [52] utilized the ICP method in the registration of Lidar point clouds, dividing them into four categories (spheres, planes, cylinders, and tori) by direct and indirect methods.
$$\Xi(l) = \begin{bmatrix} X(l) \\ Y(l) \\ Z(l) \end{bmatrix} = \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} + \begin{bmatrix} \rho_1 \\ \rho_2 \\ \rho_3 \end{bmatrix} l$$
where $X_0, Y_0, Z_0, \omega, \varphi, \kappa$ are the EOPs and $(\rho_1, \rho_2, \rho_3)$ is the direction vector of the ray.
The parametric curve Γ(t) = [X(t) Y(t) Z(t)]T was obtained by minimizing the Euclidean distance between the two parametric curves:
$$\Phi(t, l) \equiv \|\Gamma(t) - \Xi(l)\|^2 = (X(t) - X_0 - \rho_1 l)^2 + (Y(t) - Y_0 - \rho_2 l)^2 + (Z(t) - Z_0 - \rho_3 l)^2$$
Φ(t,l) had a minimum value at ∂Φ/∂l = ∂Φ/∂t = 0 with two independent variables l and t as in (11).
$$\partial\Phi/\partial l = -2\rho_1 \left(X(t) - X_0 - \rho_1 l\right) - 2\rho_2 \left(Y(t) - Y_0 - \rho_2 l\right) - 2\rho_3 \left(Z(t) - Z_0 - \rho_3 l\right) = 0$$
$$\partial\Phi/\partial t = 2 X'(t)\left(X(t) - X_0 - \rho_1 l\right) + 2 Y'(t)\left(Y(t) - Y_0 - \rho_2 l\right) + 2 Z'(t)\left(Z(t) - Z_0 - \rho_3 l\right) = 0$$
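One step of this minimization is easy to sketch: setting ∂Φ/∂l = 0 gives l in closed form for any fixed t, and the remaining one-dimensional problem in t can be solved numerically. The sketch below uses a dense grid search over t instead of solving ∂Φ/∂t = 0 analytically; the function name and grid resolution are illustrative assumptions.

```python
import numpy as np

def closest_curve_ray_params(curve, p0, rho, t_grid=None):
    """Minimize Phi(t, l) = ||Gamma(t) - Xi(l)||^2 between a parametric
    curve Gamma(t) and the ray Xi(l) = p0 + rho*l.

    dPhi/dl = 0 yields the closed form l = rho.(Gamma(t) - p0)/|rho|^2;
    the 1D problem in t is then handled by a grid search (a sketch of
    one ICP-style correspondence step)."""
    if t_grid is None:
        t_grid = np.linspace(0.0, 1.0, 1001)
    p0, rho = np.asarray(p0, float), np.asarray(rho, float)
    pts = np.array([curve(t) for t in t_grid])      # Gamma(t) samples
    l = (pts - p0) @ rho / (rho @ rho)              # optimal l for each t
    d2 = np.sum((pts - p0 - np.outer(l, rho)) ** 2, axis=1)
    i = int(np.argmin(d2))
    return t_grid[i], l[i], d2[i]
```

A full ICP iteration would alternate such closest-point searches with updates of the transformation (here, the EOPs) until convergence.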
Akav et al. [53] employed planar free-form curves for aerial triangulation with the ICP method. Because the effect of the Z parameter as compared with that of X or Y was large in a normal plane equation aX + bY + cZ = 1, a different plane representation was developed to avoid numerical problems:
$$\begin{bmatrix} n_1 \\ n_2 \\ n_3 \end{bmatrix} = \begin{bmatrix} \sin\theta\cos\varphi \\ \sin\theta\sin\varphi \\ \cos\theta \end{bmatrix}$$
$$n_1 (X - X_0) + n_2 (Y - Y_0) + n_3 (Z - Z_0) = 0$$
$$n_1 X + n_2 Y + n_3 Z = D$$
with θ the angle from the XY plane, φ the angle around the Z axis, n the unit vector of plane normal, and D the distance between the plane and the origin. Five relative orientation parameters and three planar parameters were obtained by using the homography mapping system, which searched for the conjugate point in an image corresponding to a point in the other image.
Lin [54] proposed the method of autonomous recovery of exterior orientation parameters by an extension of the traditional point-based modified iterated Hough transform (MIHT) to the 3D free-form linear feature-based MIHT. Straight polylines were generalized for matching primitives in the pose estimation because the mathematical representation of straight lines is much clearer than the algebraic expression of conic sections and splines.
Gruen and Akca [55] matched 3D curves defined by cubic splines using least squares matching. Subpixel locations were obtained by the matching, and the quality of the localization was determined by the geometry of the image patches. Two free-form lines were defined as:
$$f(u) = [x(u)\;\; y(u)\;\; z(u)]^T = a_0 + a_1 u + a_2 u^2 + a_3 u^3$$
$$g(u) = [x(u)\;\; y(u)\;\; z(u)]^T = b_0 + b_1 u + b_2 u^2 + b_3 u^3$$
where $u \in [0, 1]$, the coefficients $a_0, a_1, a_2, a_3, b_0, b_1, b_2, b_3$ are variables, and $f(u), g(u) \in \mathbb{R}^3$.
The Taylor expansion was employed to adopt the Gauss–Markov model:
$$f(u) - e(u) = g(u)$$
$$f(u) - e(u) = g^0(u) + \frac{\partial g^0(u)}{\partial u}\, du$$
$$f(u) - e(u) = g^0(u) + \frac{\partial g^0(u)}{\partial u}\frac{\partial u}{\partial x}\, dx + \frac{\partial g^0(u)}{\partial u}\frac{\partial u}{\partial y}\, dy + \frac{\partial g^0(u)}{\partial u}\frac{\partial u}{\partial z}\, dz$$

3. Bundle Block Adjustment with 3D Natural Cubic Splines

3.1. 3D Natural Cubic Splines

The choice of the right feature model is important in the development of a feature-based approach because an ambiguous feature representation leads to unstable adjustment. A spline is a piecewise polynomial function widely used in vector graphics. Splines are widely used for data fitting in computer science because of the resulting simplicity of curve reconstruction. Complex figures are well approximated through curve fitting, and a spline lends strength to accuracy evaluation, data interpolation, and curve smoothing. One important property of a spline is that it can easily be morphed. A spline represents a 2D or 3D continuous line within a sequence of pixels and segments. The relationship between pixels and lines is applied to a bundle block adjustment or a functional representation. A spline of degree 0 is the simplest spline, a linear spline has degree 1, a quadratic spline has degree 2, and the common natural cubic spline has degree 3 with C2 continuity. The geometric meaning of C2 continuity is that the first and second derivatives are proportional at joint points, and its parametric meaning is that the first and second derivatives are equal at connected points.
The number of break points, which determine the set of piecewise cubic functions, varies depending upon the spline parameters. A natural cubic spline guarantees second-order continuity, which means that the first- and second-order derivatives of two consecutive spline segments are continuous at the break point. The intervals of a natural cubic spline need not equal the distance between every two consecutive data points; the best intervals are chosen by a least squares method. In general, the total number of break points is less than the number of original input points. The algorithm for fitting a natural cubic spline is as follows.
  • Generate the break point (control point) set for the spline from the original input data.
  • Calculate the maximum distance between the approximated spline and the original input data.
  • While the maximum distance exceeds the threshold, add a break point to the break point set at the location of the maximum distance.
  • Recompute the maximum distance and compare it with the threshold.
A smaller threshold produces more break points and a spline that approximates the original input data more closely. N piecewise cubic polynomial functions between adjacent break points are defined from the N + 1 break points. Each segment has a separate cubic polynomial with its own coefficients.
$$X_0(t) = a_{00} + a_{01} t + a_{02} t^2 + a_{03} t^3, \quad t \in [0, 1]$$
$$X_1(t) = a_{10} + a_{11} t + a_{12} t^2 + a_{13} t^3, \quad t \in [0, 1]$$
$$X_i(t) = a_{i0} + a_{i1} t + a_{i2} t^2 + a_{i3} t^3, \quad t \in [0, 1]$$
$$Y_i(t) = b_{i0} + b_{i1} t + b_{i2} t^2 + b_{i3} t^3, \quad t \in [0, 1]$$
$$Z_i(t) = c_{i0} + c_{i1} t + c_{i2} t^2 + c_{i3} t^3, \quad t \in [0, 1]$$
The strength of this approach is that segmented lines represent a free-form line with analytical parameters. The number of break points is reduced, and input error is absorbed by the mathematical model, especially in the expression of points on a straight line. A natural cubic spline provides data-independent curve fitting. The disadvantage is that the whole curve shape depends on all of the passing points, so changing any one of them alters the entire curve.
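For one coordinate, constructing an interpolating natural cubic spline through a set of break points reduces to a tridiagonal system for the knot second derivatives. The plain-NumPy sketch below (function names are illustrative) shows the construction that the break-point insertion loop described above would call repeatedly; the natural boundary condition sets the second derivative to zero at both end points.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Interpolating natural cubic spline through break points (x_i, y_i);
    'natural' means zero second derivative at both end points.
    Returns a callable evaluator."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, h = len(x), np.diff(np.asarray(x, float))
    A, rhs = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0            # natural BCs: M_0 = M_{n-1} = 0
    for i in range(1, n - 1):            # C2 continuity at interior knots
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)          # knot second derivatives

    def evaluate(xq):
        xq = np.asarray(xq, float)
        i = np.clip(np.searchsorted(x, xq) - 1, 0, n - 2)
        return (M[i] * (x[i + 1] - xq) ** 3 / (6 * h[i])
                + M[i + 1] * (xq - x[i]) ** 3 / (6 * h[i])
                + (y[i] / h[i] - M[i] * h[i] / 6) * (x[i + 1] - xq)
                + (y[i + 1] / h[i] - M[i + 1] * h[i] / 6) * (xq - x[i]))
    return evaluate
```

The same construction, applied independently to X, Y, and Z as functions of the segment parameter, yields the 3D spline segments used in the adjustment.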
The correspondence between a 3D curve in the object space coordinate system and its projected 2D curve in the image coordinate system is implemented using a natural cubic spline curve feature because its boundary conditions impose zero second derivatives at the end points. A natural cubic spline is composed of a sequence of cubic polynomial segments, as in Figure 1, with x0, x1, …, xn as the n + 1 control points and X0, X1, …, Xn-1 as the ground coordinates of the n segments.

3.2. Extended Collinearity Equation Model for Splines

Collinearity equations are the most commonly used condition equations for determining orientation parameters. Space intersection calculates a point location in object space using the intersection of projection rays from two or more images, and space resection determines the EOPs of an image with respect to the object space coordinate system from the coordinates of points on that image. Space intersection and space resection are fundamental operations in photogrammetry for further processes such as triangulation. The basic concept of the collinearity equation is that a point on the image, the perspective center, and the corresponding point in object space all lie on a straight line. The relationship between the image and object coordinate systems is expressed by three position parameters and three orientation parameters. Collinearity equations play an important role in photogrammetry because each control point in object space produces two collinearity equations for every photograph in which the point appears; if m points appear in n images, then 2mn collinearity equations can be employed in the bundle block adjustment. The extended collinearity equations (16) relate a natural cubic spline in object space, with ground coordinates (Xi(t), Yi(t), Zi(t)), to image space photo coordinates (xpi, ypi). A natural cubic spline allows the collinearity model to express orientation parameters and curve parameters as below:
$$x_{pi} = -f\,\frac{(X_i(t) - X_C)r_{11} + (Y_i(t) - Y_C)r_{12} + (Z_i(t) - Z_C)r_{13}}{(X_i(t) - X_C)r_{31} + (Y_i(t) - Y_C)r_{32} + (Z_i(t) - Z_C)r_{33}}$$
$$y_{pi} = -f\,\frac{(X_i(t) - X_C)r_{21} + (Y_i(t) - Y_C)r_{22} + (Z_i(t) - Z_C)r_{23}}{(X_i(t) - X_C)r_{31} + (Y_i(t) - Y_C)r_{32} + (Z_i(t) - Z_C)r_{33}}$$
with $(x_{pi}, y_{pi})$ as the photo coordinates, $f$ the focal length, $(X_C, Y_C, Z_C)$ the camera perspective center, and $r_{ij}$ the elements of the 3D orthogonal rotation matrix $R^T$ defined by the angular elements $(\omega, \varphi, \kappa)$ of the EOPs.
The extended collinearity equations can be written as follows:
$$x_p = -f\,\frac{u}{w}, \qquad y_p = -f\,\frac{v}{w}, \qquad \begin{bmatrix} u \\ v \\ w \end{bmatrix} = R^T(\omega, \varphi, \kappa) \begin{bmatrix} X_i(t) - X_C \\ Y_i(t) - Y_C \\ Z_i(t) - Z_C \end{bmatrix}$$
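The compact u, v, w form can be evaluated directly once a segment's cubic coefficients are known. The sketch below takes the rotation matrix R as given; the function names and the 3x4 coefficient layout are illustrative assumptions.

```python
import numpy as np

def spline_point(coeffs, t):
    """Evaluate one spline segment; 'coeffs' is a 3x4 array whose rows
    hold (a_i0..a_i3), (b_i0..b_i3), (c_i0..c_i3), giving the ground
    coordinates (X_i(t), Y_i(t), Z_i(t))."""
    return np.asarray(coeffs, float) @ np.array([1.0, t, t**2, t**3])

def photo_coords(coeffs, t, R, cam, f):
    """Extended collinearity in u, v, w form:
    [u v w]^T = R^T (ground - camera), x_p = -f*u/w, y_p = -f*v/w."""
    ground = spline_point(coeffs, t)
    u, v, w = np.asarray(R, float).T @ (ground - np.asarray(cam, float))
    return -f * u / w, -f * v / w
```

Each observed image point on the projected spline thus contributes two equations in the 19 unknowns (six EOPs, twelve segment coefficients, and the location parameter t).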
To recover the 3D natural cubic spline parameters and the exterior orientation parameters in a bundle block adjustment, the nonlinear mathematical model of the extended collinearity equation is differentiated. Methods of exterior orientation recovery are classified into linear and nonlinear approaches. Whereas linear methods decrease the computational load, their accuracy and reliability are limited; nonlinear methods are more accurate and predictable, but they require initial estimates and increase the computational complexity. The relationship between a point in image space and the corresponding point in object space is established by the extended collinearity equation. Prior knowledge of the correspondences between individual points in 3D object space and their projected features in 2D image space is not required in the extended collinearity equations with 3D natural cubic splines. One point on a cubic spline involves 19 parameters (XC, YC, ZC, ω, φ, κ, a0, a1, a2, a3, b0, b1, b2, b3, c0, c1, c2, c3, t). The differentials of (17) are derived in (18):
$$dx_p = -\frac{f}{w}\, du + \frac{f u}{w^2}\, dw, \qquad dy_p = -\frac{f}{w}\, dv + \frac{f v}{w^2}\, dw$$
with the differentials du, dv, and dw given in (19).
$$\begin{aligned}
\begin{bmatrix} du \\ dv \\ dw \end{bmatrix} ={}& \frac{\partial R^T}{\partial \omega}\begin{bmatrix} X_i(t) - X_C \\ Y_i(t) - Y_C \\ Z_i(t) - Z_C \end{bmatrix} d\omega + \frac{\partial R^T}{\partial \varphi}\begin{bmatrix} X_i(t) - X_C \\ Y_i(t) - Y_C \\ Z_i(t) - Z_C \end{bmatrix} d\varphi + \frac{\partial R^T}{\partial \kappa}\begin{bmatrix} X_i(t) - X_C \\ Y_i(t) - Y_C \\ Z_i(t) - Z_C \end{bmatrix} d\kappa \\
&- R^T \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} dX_C - R^T \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} dY_C - R^T \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} dZ_C \\
&+ R^T \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} da_0 + R^T \begin{bmatrix} t \\ 0 \\ 0 \end{bmatrix} da_1 + R^T \begin{bmatrix} t^2 \\ 0 \\ 0 \end{bmatrix} da_2 + R^T \begin{bmatrix} t^3 \\ 0 \\ 0 \end{bmatrix} da_3 \\
&+ R^T \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} db_0 + R^T \begin{bmatrix} 0 \\ t \\ 0 \end{bmatrix} db_1 + R^T \begin{bmatrix} 0 \\ t^2 \\ 0 \end{bmatrix} db_2 + R^T \begin{bmatrix} 0 \\ t^3 \\ 0 \end{bmatrix} db_3 \\
&+ R^T \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} dc_0 + R^T \begin{bmatrix} 0 \\ 0 \\ t \end{bmatrix} dc_1 + R^T \begin{bmatrix} 0 \\ 0 \\ t^2 \end{bmatrix} dc_2 + R^T \begin{bmatrix} 0 \\ 0 \\ t^3 \end{bmatrix} dc_3 \\
&+ R^T \begin{bmatrix} a_1 + 2 a_2 t + 3 a_3 t^2 \\ b_1 + 2 b_2 t + 3 b_3 t^2 \\ c_1 + 2 c_2 t + 3 c_3 t^2 \end{bmatrix} dt
\end{aligned}$$
Substituting du, dv, and dw in (18) by the expressions found in (19) leads to:
$$dx_p = M_1 dX_C + M_2 dY_C + M_3 dZ_C + M_4 d\omega + M_5 d\varphi + M_6 d\kappa + M_7 da_0 + M_8 da_1 + M_9 da_2 + M_{10} da_3 + M_{11} db_0 + M_{12} db_1 + M_{13} db_2 + M_{14} db_3 + M_{15} dc_0 + M_{16} dc_1 + M_{17} dc_2 + M_{18} dc_3 + M_{19} dt$$
$$dy_p = N_1 dX_C + N_2 dY_C + N_3 dZ_C + N_4 d\omega + N_5 d\varphi + N_6 d\kappa + N_7 da_0 + N_8 da_1 + N_9 da_2 + N_{10} da_3 + N_{11} db_0 + N_{12} db_1 + N_{13} db_2 + N_{14} db_3 + N_{15} dc_0 + N_{16} dc_1 + N_{17} dc_2 + N_{18} dc_3 + N_{19} dt$$
M1, …, M19 and N1, …, N19 denote the partial derivatives of the extended collinearity equations for curves. The linearized extended collinearity equations obtained by Taylor expansion, ignoring second- and higher-order terms, can be written as follows:
$$x_p + f \frac{u^0}{w^0} = M_1 dX_C + M_2 dY_C + M_3 dZ_C + M_4 d\omega + M_5 d\varphi + M_6 d\kappa + M_7 da_0 + M_8 da_1 + M_9 da_2 + M_{10} da_3 + M_{11} db_0 + M_{12} db_1 + M_{13} db_2 + M_{14} db_3 + M_{15} dc_0 + M_{16} dc_1 + M_{17} dc_2 + M_{18} dc_3 + M_{19} dt + e_x$$
$$y_p + f \frac{v^0}{w^0} = N_1 dX_C + N_2 dY_C + N_3 dZ_C + N_4 d\omega + N_5 d\varphi + N_6 d\kappa + N_7 da_0 + N_8 da_1 + N_9 da_2 + N_{10} da_3 + N_{11} db_0 + N_{12} db_1 + N_{13} db_2 + N_{14} db_3 + N_{15} dc_0 + N_{16} dc_1 + N_{17} dc_2 + N_{18} dc_3 + N_{19} dt + e_y$$
with $u^0, v^0, w^0$ computed from the approximate parameters $(X_C^0, Y_C^0, Z_C^0, \omega^0, \varphi^0, \kappa^0, a_0^0, a_1^0, a_2^0, a_3^0, b_0^0, b_1^0, b_2^0, b_3^0, c_0^0, c_1^0, c_2^0, c_3^0, t^0)$, and $e_x, e_y$ the stochastic errors of the observed photo coordinates $x_p, y_p$, each with zero expectation. The orientation parameters, including the 3D natural cubic spline parameters, are expected to be recovered correctly because the extended collinearity equations with these splines increase redundancy.
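A linearized system of this kind is solved by iterating least-squares corrections from approximate values. The generic sketch below illustrates that loop with a toy residual rather than the full 19-parameter spline model; the function names and convergence tolerance are illustrative assumptions.

```python
import numpy as np

def gauss_markov_iterate(residual_fn, jacobian_fn, x0, n_iter=20, tol=1e-12):
    """Iteratively solve a linearized model: at each step solve
    A dx ~= -r in the least squares sense (A holds the partial
    derivatives, r the misclosures) and update the parameter vector."""
    x = np.asarray(x0, float).copy()
    for _ in range(n_iter):
        A = jacobian_fn(x)                        # design matrix (the M's, N's)
        dx, *_ = np.linalg.lstsq(A, -residual_fn(x), rcond=None)
        x += dx
        if np.linalg.norm(dx) < tol:              # corrections negligible
            break
    return x
```

In the bundle adjustment of this paper, `jacobian_fn` would evaluate the partial derivatives of the extended collinearity equations at the current approximate parameters and `residual_fn` the photo-coordinate misclosures.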

3.3. Arc-Length Parameterization of 3D Natural Cubic Splines

The assumption made in bundle block adjustment with the Gauss–Markov model is that all the estimated parameters are uncorrelated; hence, the design matrix of the adjustment must be of full rank, so that the normal equation matrix is nonsingular. However, because the spline parameters are not independent of their location parameters, additional observations are required to obtain the parameter estimates. In a point-based approach, the point location relationship between image and object space is established for pose estimation, which recovers the camera position and orientation and is fundamental in photogrammetry, remote sensing, and computer vision. The coordinates of a point are necessary for space intersection and resection. To remove any rank deficiency caused by datum defects in point-based photogrammetry, constraints are adopted to estimate the unknown parameters. The most common constraints are coplanarity, symmetry, perpendicularity, and parallelism, and the minimum number of constraints is equal to the rank deficiency of the system. Inner constraints are often used in a photogrammetric network and can be applied to both the object features and the camera orientation parameters. Angle or distance condition equations provide information on the relativity between observations in object space and points in image space, while absolute information can be obtained from fixed control points.
In this research, an arc-length parameterization is applied as an additional condition equation to solve the rank deficiency problem in the extended collinearity equations using 3D natural cubic splines. The idea behind the differentiable parameterization is that the arc length of a curve can be divided into minute pieces, each approximately linear, whose lengths can be summed. The square root of the sum of squared derivatives is the magnitude of a velocity vector, because a parametric curve can be regarded as the trajectory of a moving point; the velocity vector describes the path of the curve and its movement characteristics, and if a particle moves along the curve at a constant rate, the curve is parameterized by arc length. While the extended collinearity equations provide the only correspondence information, curves offer additional geometric constraints such as arc length, local tangent, and curvature. These support space resection under the assumption that the additional independent observations in both image and object space are properly accounted for. Arc length in object space is determined by geometric integration, using a construction from the differentiable parameterization of the spline; arc length in image space is calculated by geometric integration of the corresponding construction for the photo coordinates derived from the spline in object space:
$$
\begin{aligned}
\mathrm{Arc}(t) &= \int_{t_i}^{t_{i+1}} \sqrt{\left(x_p'(t)\right)^2 + \left(y_p'(t)\right)^2}\,dt
= \int_{t_i}^{t_{i+1}} \sqrt{\left\{-f\left(\frac{u(t)}{w(t)}\right)'\right\}^2 + \left\{-f\left(\frac{v(t)}{w(t)}\right)'\right\}^2}\,dt \\
&= \int_{t_i}^{t_{i+1}} \sqrt{\left(-f\,\frac{u'(t)\,w(t) - u(t)\,w'(t)}{w^2(t)}\right)^2 + \left(-f\,\frac{v'(t)\,w(t) - v(t)\,w'(t)}{w^2(t)}\right)^2}\,dt
\end{aligned}
$$
where f is the focal length and:
$$
\begin{bmatrix} u(t) \\ v(t) \\ w(t) \end{bmatrix}
= R^{T}(\omega,\varphi,\kappa)\begin{bmatrix} a_0 + a_1 t + a_2 t^2 + a_3 t^3 - X_C \\ b_0 + b_1 t + b_2 t^2 + b_3 t^3 - Y_C \\ c_0 + c_1 t + c_2 t^2 + c_3 t^3 - Z_C \end{bmatrix},
\qquad
\begin{bmatrix} u'(t) \\ v'(t) \\ w'(t) \end{bmatrix}
= R^{T}(\omega,\varphi,\kappa)\begin{bmatrix} a_1 + 2a_2 t + 3a_3 t^2 \\ b_1 + 2b_2 t + 3b_3 t^2 \\ c_1 + 2c_2 t + 3c_3 t^2 \end{bmatrix}
$$
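The arc-length integral above can be checked numerically. The sketch below approximates the image-space arc length of a projected spline segment by dense chord summation; it assumes a level camera ($R = I$) and hypothetical values purely to keep the example short.

```python
import numpy as np

def image_arc_length(coeffs, f, camera, t0, t1, n=1000):
    """Approximate the image-space arc length of a projected spline
    segment by summing chord lengths over a dense parameter sampling.
    coeffs is 3x4 (rows a, b, c); a level camera (R = I) is assumed."""
    t = np.linspace(t0, t1, n + 1)
    powers = np.vstack([np.ones_like(t), t, t**2, t**3])   # 4 x (n+1)
    X = coeffs @ powers                                    # object points, 3 x (n+1)
    u, v, w = X[0] - camera[0], X[1] - camera[1], X[2] - camera[2]
    xp, yp = -f * u / w, -f * v / w                        # photo coordinates
    return np.sum(np.hypot(np.diff(xp), np.diff(yp)))
```

For a 1,000 m ground segment viewed from 1,000 m with $f$ = 0.15 m, the projected arc length is 0.15 m, which the chord sum reproduces exactly for a straight segment.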
Because the arc-length parameterization of splines has no analytical solution, several numerical reparameterization techniques for splines and other curve representations have been developed. While most curves are not parameterized by arc length, a B-spline can be reparameterized for arc length by adjusting its knots. Wang et al. [56] approximated the arc-length parameterization of spline curves by generating a new curve that closely approximates the original spline, reducing the computational complexity of the arc-length parameterization; they showed that the approximation works well in a variety of real-time applications, including driving simulations.
Guenter and Parent [57] employed a hierarchical algorithm with a linearly searched arc-length subdivision table to reduce the arc-length computation time for parameterized curves. A table of the correspondence between the parameter t and the arc length can be established to accelerate the computation: after dividing the parameter range into intervals, the arc length of each interval is computed to map parameters to arc lengths, and a reference table for the various intervals can be developed. Another method of arc-length approximation is to use explicit functions such as Bézier curves, which allow fast function evaluations. Adaptive Gaussian integration employs a recursive method that starts from a few samples and adds more as necessary; it likewise uses a table that maps curve or spline parameter values to arc-length values.
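A minimal version of such a parameter-to-arc-length table, built by chord summation and inverted by linear interpolation, might look as follows; the sampling density and the test curve are illustrative assumptions, not values from the cited work.

```python
import numpy as np

def arc_length_table(curve, t0=0.0, t1=1.0, n=256):
    """Tabulate cumulative arc length over [t0, t1] for a planar
    parametric curve (a callable t -> (x, y)), as in table-based
    arc-length reparameterization."""
    t = np.linspace(t0, t1, n + 1)
    x, y = curve(t)
    s = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))])
    return t, s

def t_at_arc_length(t, s, target):
    """Invert the table: parameter value reaching the given arc length,
    by linear interpolation between table entries."""
    return np.interp(target, s, t)
```

On a unit-speed quarter circle the table is nearly the identity map, so the inverse lookup returns the arc length itself up to discretization error.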
Nasri et al. [58] proposed an arc-length approximation method of circles and piecewise circular splines generated by control polygons or points using a recursive subdivision algorithm. While B-splines have various tangents over the curve depending upon the arc-length parameterization, circular splines have constant tangents whose vectors are useful in arc-length computation.
Simpson's rule is a numerical approximation of a definite integral. The geometric integration of the arc length in image space can be calculated by this rule as follows:
$$
\begin{gathered}
\mathrm{Arc}(t) = \int_{t_1}^{t_2} f(t)\,dt \approx \frac{t_2 - t_1}{6}\left[f(t_1) + 4 f\!\left(\frac{t_1 + t_2}{2}\right) + f(t_2)\right] \\
f(t) = \sqrt{\left(x_p'(t)\right)^2 + \left(y_p'(t)\right)^2}
= \sqrt{\left(-f\,\frac{u'(t)\,w(t) - u(t)\,w'(t)}{w^2(t)}\right)^2 + \left(-f\,\frac{v'(t)\,w(t) - v(t)\,w'(t)}{w^2(t)}\right)^2} \\
df(t) = \frac{x_p'(t)\,dx_p'(t) + y_p'(t)\,dy_p'(t)}{f(t)} \\
\mathrm{Arc}(t) - \frac{t_2^0 - t_1^0}{6}\left[f(t_1^0) + 4 f\!\left(\frac{t_2^0 + t_1^0}{2}\right) + f(t_2^0)\right]
= A_1\,dX_C + A_2\,dY_C + A_3\,dZ_C + A_4\,d\omega + A_5\,d\varphi + A_6\,d\kappa \\
\quad + A_7\,da_0 + A_8\,da_1 + A_9\,da_2 + A_{10}\,da_3 + A_{11}\,db_0 + A_{12}\,db_1 + A_{13}\,db_2 + A_{14}\,db_3 \\
\quad + A_{15}\,dc_0 + A_{16}\,dc_1 + A_{17}\,dc_2 + A_{18}\,dc_3 + A_{19}\,dt_1 + A_{20}\,dt_2 + e_a
\end{gathered}
$$
with $t_1^0, t_2^0, f(t^0)$ evaluated at the approximate parameters $(X_C^0, Y_C^0, Z_C^0, \omega^0, \varphi^0, \kappa^0, a_0^0, a_1^0, a_2^0, a_3^0, b_0^0, b_1^0, b_2^0, b_3^0, c_0^0, c_1^0, c_2^0, c_3^0, t_i^0)$, and $e_a$ the stochastic error, with zero expectation, of the arc length between the two locations. $A_1, \ldots, A_{20}$ denote the partial derivatives of the arc-length parameterization of a 3D natural cubic spline.
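The one-interval Simpson approximation used above can be sketched directly; here `speed` plays the role of the integrand $f(t)$, and the test curve below is an assumed example, not one from the paper.

```python
import math

def simpson_arc_length(speed, t1, t2):
    """One-interval Simpson approximation of Arc = integral of f(t) dt,
    where speed(t) = f(t) = sqrt(x_p'(t)**2 + y_p'(t)**2)."""
    mid = 0.5 * (t1 + t2)
    return (t2 - t1) / 6.0 * (speed(t1) + 4.0 * speed(mid) + speed(t2))
```

For the parabola $(t, t^2)$ over $[0, 1]$, whose speed is $\sqrt{1 + 4t^2}$, the single-interval estimate is within about 0.3% of the exact arc length $\tfrac{\sqrt{5}}{2} + \tfrac{1}{4}\operatorname{asinh} 2 \approx 1.479$.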

3.4. Model Integration

The objective of bundle block adjustment is twofold: to calculate the exterior orientation parameters of a block of images and the coordinates of the ground features in object space. In the determination of orientation parameters, additional interior conditions such as lens distortion, atmospheric refraction, and principal point offset can be obtained by self-calibration. In general, orientation parameters are determined by bundle block adjustment using a large number of control points; establishing control points, however, means expensive fieldwork, so an economical and accurate adjustment method is required. Linear features complement points in that they are useful for higher level tasks and are easily extracted in man-made environments. The line photogrammetric bundle adjustment in this research aims at the estimation of exterior orientation parameters and 3D natural cubic spline parameters using the correspondence between splines in object space and spline observations in multiple images. Nonlinear functions of orientation parameters, spline parameters, and spline location parameters are represented by the extended collinearity and arc-length parameterization equations. Each pair of points produces five observation equations: four extended collinearity equations (21) and one arc-length parameterization equation (24). An integrated model not only provides for the recovery of the image orientation parameters but also enables surface reconstruction using 3D curves. Because the equation system of the integrated model has seven datum defects, control information about the coordinate system is required to obtain the parameters. This is a step toward higher level vision tasks such as object recognition and surface reconstruction. In the case of straight lines and conic sections, tangents serve as additional observations in the integrated model.
Conic sections, like points, provide good mathematical constraints because they yield nonsingular second degree equations, which provide information for reconstruction and transformation; conic sections are classified by their eccentricity e. Because conic sections can adopt more constraints than points and straight line features, they are useful for close range photogrammetric applications. In addition, conic sections are strong in establishing the correspondence between 3D sections in object space and their counterpart features in the 2D projected image space.
Ji et al. [59] employed conic sections for the recovery of EOPs, and Heikkila used them for camera calibration. A Hough transformation with a five-dimensional parameter space reduces the time complexity of conic section extraction for SPR, camera calibration, and triangulation.
Parameters are linearized in the previous sections and the Gauss–Markov model is employed for the unknown parameter estimation. The equation system of the integrated model is described as:
$$
\begin{bmatrix} A_{\mathrm{EOP}}^{k} & A_{\mathrm{SP}}^{i} & A_{t}^{i} \\ & A_{\mathrm{AL}}^{ki} & \end{bmatrix}
\begin{bmatrix} \xi_{\mathrm{EOP}}^{k} \\ \xi_{\mathrm{SP}}^{i} \\ \xi_{t}^{i} \end{bmatrix}
= \begin{bmatrix} y_{k}^{i} \end{bmatrix}
$$
where the arc-length rows $A_{\mathrm{AL}}^{ki}$ extend across all three parameter groups, and
$$
A_{\mathrm{EOP}}^{k} = \begin{bmatrix}
M_{i1}^{k1} & M_{i2}^{k1} & \cdots & M_{i6}^{k1} \\
\vdots & \vdots & & \vdots \\
M_{i1}^{km} & M_{i2}^{km} & \cdots & M_{i6}^{km} \\
N_{i1}^{k1} & N_{i2}^{k1} & \cdots & N_{i6}^{k1} \\
\vdots & \vdots & & \vdots \\
N_{i1}^{km} & N_{i2}^{km} & \cdots & N_{i6}^{km}
\end{bmatrix},
\qquad
A_{\mathrm{SP}}^{i} = \begin{bmatrix}
M_{i7}^{1} & M_{i8}^{1} & \cdots & M_{i18}^{1} \\
\vdots & \vdots & & \vdots \\
M_{i7}^{m} & M_{i8}^{m} & \cdots & M_{i18}^{m} \\
N_{i7}^{1} & N_{i8}^{1} & \cdots & N_{i18}^{1} \\
\vdots & \vdots & & \vdots \\
N_{i7}^{m} & N_{i8}^{m} & \cdots & N_{i18}^{m}
\end{bmatrix}
$$
$$
A_{t}^{i} = \begin{bmatrix}
M_{i,19}^{1} & M_{i,20}^{1} & \cdots & M_{i,18+n}^{1} \\
\vdots & \vdots & & \vdots \\
M_{i,19}^{m} & M_{i,20}^{m} & \cdots & M_{i,18+n}^{m} \\
N_{i,19}^{1} & N_{i,20}^{1} & \cdots & N_{i,18+n}^{1} \\
\vdots & \vdots & & \vdots \\
N_{i,19}^{m} & N_{i,20}^{m} & \cdots & N_{i,18+n}^{m}
\end{bmatrix},
\qquad
A_{\mathrm{AL}}^{ki} = \begin{bmatrix}
A_{i1}^{k1} & A_{i2}^{k1} & \cdots & A_{i20}^{k1} \\
\vdots & \vdots & & \vdots \\
A_{i1}^{km} & A_{i2}^{km} & \cdots & A_{i20}^{km}
\end{bmatrix}
$$
$$
\begin{gathered}
\xi_{\mathrm{EOP}}^{k} = \begin{bmatrix} dX_C^{k} & dY_C^{k} & dZ_C^{k} & d\omega^{k} & d\varphi^{k} & d\kappa^{k} \end{bmatrix}^{T} \\
\xi_{\mathrm{SP}}^{i} = \begin{bmatrix} da_{i0} & da_{i1} & da_{i2} & da_{i3} & db_{i0} & db_{i1} & db_{i2} & db_{i3} & dc_{i0} & dc_{i1} & dc_{i2} & dc_{i3} \end{bmatrix}^{T} \\
\xi_{t}^{i} = \begin{bmatrix} dt_{i1} & dt_{i2} & \cdots & dt_{in} \end{bmatrix}^{T} \\
y_{k}^{i} = \begin{bmatrix} x_{p_k}^{i} + f\dfrac{u^0}{w^0} & \;\; y_{p_k}^{i} + f\dfrac{v^0}{w^0} & \;\; \mathrm{Arc}(t)_{p_k}^{i} - \mathrm{Arc}(t)^{0} \end{bmatrix}^{T}
\end{gathered}
$$
with $\mathrm{Arc}(t)^0 = \frac{t_2^0 - t_1^0}{6}\left[f^0(t_1^0) + 4 f^0\!\left(\frac{t_2^0 + t_1^0}{2}\right) + f^0(t_2^0)\right]$, m the number of images, n the number of points on a spline segment, k the index of the image, and i the index of the spline segment. Because the equation system of the integrated model has seven datum defects, control information for the coordinate system is required to obtain the seven transformation parameters; in a general photogrammetric network, this rank deficiency, referred to as a datum defect, is seven. Estimates of the unknown parameters are obtained by the least squares solution, which minimizes the sum of squared deviations; a conventional nonlinear photogrammetric solution requires an iteratively solved nonlinear least squares system to obtain the orientation parameters. Many photogrammetric observations, such as the image coordinates of points, are random variables that take different values under repeated measurement: if image point coordinates are measured repeatedly on a digital photogrammetric workstation, slightly different values result, and each measured observation represents an estimate of a random variable. The integrated and linearized Gauss–Markov model and the least squares estimated parameter vector with its dispersion matrix are:
$$
\begin{gathered}
y_{k}^{i} = A_{\mathrm{IM}}\,\xi_{\mathrm{IM}} + e, \qquad
A_{\mathrm{IM}} = \begin{bmatrix} A_{\mathrm{EOP}}^{k} & A_{\mathrm{SP}}^{i} & A_{t}^{i} \\ & A_{\mathrm{AL}}^{ki} & \end{bmatrix}, \qquad
\xi_{\mathrm{IM}} = \begin{bmatrix} \xi_{\mathrm{EOP}}^{k} & \xi_{\mathrm{SP}}^{i} & \xi_{t}^{i} \end{bmatrix}^{T} \\
\hat{\xi}_{\mathrm{IM}} = \left(A_{\mathrm{IM}}^{T} P A_{\mathrm{IM}}\right)^{-1} A_{\mathrm{IM}}^{T} P\, y_{k}^{i}, \qquad
D\!\left(\hat{\xi}_{\mathrm{IM}}\right) = \sigma_0^{2} \left(A_{\mathrm{IM}}^{T} P A_{\mathrm{IM}}\right)^{-1}
\end{gathered}
$$
with $e \sim N(0, \sigma_0^2 P^{-1})$ being the error vector with zero mean and cofactor matrix $P^{-1}$, $\sigma_0^2$ a variance component that may be known or unknown, $\hat{\xi}_{\mathrm{IM}}$ the least squares estimate of the parameter vector, and $D(\hat{\xi}_{\mathrm{IM}})$ its dispersion matrix.
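The estimator and dispersion matrix above translate directly into code. This is a generic Gauss–Markov sketch with an assumed weight matrix P and toy data, not the paper's full integrated design matrix:

```python
import numpy as np

def gauss_markov(A, y, P):
    """Weighted least squares solution of y = A xi + e, e ~ (0, s0^2 P^-1):
    xi_hat = (A^T P A)^-1 A^T P y, the estimated reference variance from
    the residuals, and the dispersion matrix D = s0_hat^2 (A^T P A)^-1."""
    N = A.T @ P @ A                      # normal equation matrix
    xi = np.linalg.solve(N, A.T @ P @ y)
    e = y - A @ xi                       # residual vector
    r = A.shape[0] - A.shape[1]          # redundancy (degrees of freedom)
    s0sq = float(e.T @ P @ e) / r if r > 0 else float("nan")
    D = s0sq * np.linalg.inv(N)
    return xi, s0sq, D
```

Fitting a line to four exactly collinear observations recovers the intercept and slope with a residual variance of zero.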
If one or more of the three estimated parameter sets $\xi_{\mathrm{EOP}}^k$, $\xi_{\mathrm{SP}}^i$, $\xi_t^i$ are considered as stochastic constraints, the reduction of the normal equation matrix can be applied. Control information is implemented as stochastic constraints in a bundle block adjustment. The distribution and quality of control features depend on the number and the density of control features, the number of tie features, and the degree of overlap of the tie features. If adding stochastic constraints removes the rank deficiency of the Gauss–Markov model, bundle adjustment can be implemented employing only the extended collinearity equations for the 3D natural cubic splines. Fixed exterior orientation parameters, control splines, or control spline location parameters can be stochastic constraints.

3.5. Evaluation of Bundle Block Adjustment

Bundle block adjustment must be followed by a postadjustment evaluation to check that project specifications and requirements are met. Iteratively reweighted least squares and least median of squares are appropriate statistical implementations for removing poor observations. An important element affecting bundle block adjustment is the geometry of the aerial images; generally, a previous flight plan is adopted to obtain suitable results. A simulated bundle block adjustment is implemented before a flight plan is employed in a new project design, because such a simulation can reduce the effect of measurement errors.
A qualitative evaluation that allows the operator to recognize the adjustment characteristics is often used after bundle block adjustment. The image residuals are plotted for this evaluation; they can appear as points or long lines, and if all image residuals have the same orientation, the image has a systematic error such as atmospheric refraction or an orientation parameter error. In addition, a lack of flatness of the focal plane may cause systematic errors in image space, which affects the accuracy of a bundle block adjustment; the resulting distortions differ from one location to another across the image. A topographic measurement of the focal plane can correct for its lack of flatness. Image coordinate errors are correlated in the case of systematic image errors. A poor measurement can show up as a residual with an opposite direction or an exaggerated magnitude.
The three main elements in the statistical evaluation of bundle block adjustments are precision, accuracy, and reliability. Precision is calculated from the parameter variances and covariances: a small variance indicates that the estimated values have a small range, whereas a large variance means that the estimates are not calculated properly. The parameter variance ranges from zero for error free parameters to infinity for completely unknown parameters. The diagonal elements of the dispersion matrix are the parameter variances and the off-diagonal elements are the covariances between pairs of parameters. Accuracy can be verified using check points, which, unlike control points, are not included in the bundle block adjustment. Reliability can be confirmed from other redundant observations. The extended collinearity equations are the mathematical model for bundle block adjustment; a mathematical model consists of a functional model, which represents the geometrical properties, and a stochastic model, which describes the statistical properties. Repeated measurements at the same location in image space are represented by the functional model, and the redundant observations of image locations are expressed by the stochastic model. While the Gauss–Markov model uses indirect observations, condition equations such as coordinate transformations and the coplanarity condition can also be employed in the adjustment.
The Gauss–Markov model and the condition equation can be combined into the Gauss–Helmert model. In addition, functional constraints such as points having the same height or straight railroad segments can be added into the block adjustment.
The difference between condition and constraint equations is that condition equations involve both observations and parameters, whereas constraint equations involve only parameters. With the advance of technology, the amount of photogrammetric input data has increased, so an adequate formulation of the adjustment is required. All the variables are involved in the mathematical equations, and the weight of a variable ranges from zero to infinity depending on its variance: variables with near zero weight are treated as unknown parameters, variables with near infinite weight are treated as constants, and most actual observations lie between these two boundary cases. Assessment by postadjustment analysis is important in photogrammetry to evaluate the results. One assessment method is to compare the reference variance with a two-tailed confidence interval. The two-tailed confidence interval for the reference variance $\sigma_0^2$ is computed from the estimated variance $\hat{\sigma}_0^2$ using the $\chi^2$ distribution as:
$$
\frac{r\,\hat{\sigma}_0^2}{\chi^2_{r,\,\alpha/2}} < \sigma_0^2 < \frac{r\,\hat{\sigma}_0^2}{\chi^2_{r,\,1-\alpha/2}}
$$
where r is the degrees of freedom and α the significance level. If $\sigma_0^2$ lies outside the interval, we can assume that the mathematical model of the adjustment is incorrect through wrong formulation or linearization, blunders, or systematic errors.
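A small helper for this interval check might look as follows; the χ² quantiles are passed in as arguments (taken from standard tables or a statistics library), since their computation is outside the sketch:

```python
def variance_confidence_interval(s0sq_hat, r, chi2_upper, chi2_lower):
    """Two-tailed confidence interval for the reference variance:
    r * s0sq_hat / chi2_upper < s0sq < r * s0sq_hat / chi2_lower,
    where chi2_upper and chi2_lower are the chi-square quantiles at
    probabilities 1 - alpha/2 and alpha/2 for r degrees of freedom."""
    return r * s0sq_hat / chi2_upper, r * s0sq_hat / chi2_lower
```

For example, with r = 10 and the tabulated 95% quantiles 20.483 and 3.247, an estimated variance of 1.0 gives an interval of roughly (0.49, 3.08); a reference variance outside it would flag a model problem.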

3.6. Pose Estimation with an ICP Algorithm

In the previous spline segment case, the correspondence between spline segments in image and object space was assumed; here it is not known which image points belong to which spline segment. The ICP algorithm can be utilized for the recovery of EOPs because initial estimates of the relative pose can be obtained from the orientation data available for general photogrammetric tasks. The original ICP algorithm proceeds as follows. The closest point operator finds the associated point using a nearest neighbor search, and the transformation parameters are then estimated using a mean square cost function. The points are transformed by the estimated parameters, and this step is iterated until convergence into a local minimum of the mean square distance. The transformation, comprising the translation and rotation between two point clouds, is thus estimated iteratively, ideally converging to a global minimum; in practice, the iterative calculation of the mean square error is terminated when a local minimum falls below a predefined threshold. A small global minimum or a fluctuating cost function requires more memory-intensive and time-consuming computation. In every iteration, a local minimum is calculated with varying transformation parameters, but convergence to the global minimum with the correct transformation parameters is not guaranteed.
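The closest point and transform estimation steps of the basic ICP loop can be sketched in 2D as follows; the brute-force nearest neighbor search and the SVD-based rigid transform step are standard choices for the sketch, not details prescribed by the paper:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least squares rotation R and translation tr aligning src to dst
    (the Procrustes/Kabsch step inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Basic ICP: match each source point to its nearest destination
    point, estimate the rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]     # closest-point step
        R, tr = best_rigid_transform(cur, matched)
        cur = cur @ R.T + tr
    return cur
```

For a small initial misalignment the nearest neighbor matches are correct and one Procrustes step recovers the exact pose; for a large one, the loop can settle into a wrong local minimum, which is exactly the limitation noted above.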
By the definition of a natural cubic spline, each parametric equation of a spline segment (Si(t)) can be expressed as:
$$
S_i(t) = \begin{bmatrix} X_i(t) \\ Y_i(t) \\ Z_i(t) \end{bmatrix}
= \begin{bmatrix} a_{i0} + a_{i1} t + a_{i2} t^2 + a_{i3} t^3 \\ b_{i0} + b_{i1} t + b_{i2} t^2 + b_{i3} t^3 \\ c_{i0} + c_{i1} t + c_{i2} t^2 + c_{i3} t^3 \end{bmatrix},
\qquad t \in [0, 1]
$$
with Xi(t), Yi(t), Zi(t) as the object space coordinates and ai, bi, ci as the coefficients of the ith spline segment.
The ray from the perspective center (XC,YC,ZC) to the image point (xp,yp,–f) is:
$$
\Xi(l) = \begin{bmatrix} X(l) \\ Y(l) \\ Z(l) \end{bmatrix}
= \begin{bmatrix} X_C^{k} \\ Y_C^{k} \\ Z_C^{k} \end{bmatrix} + \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix} l
$$
where:
$$
\begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix} = R^{T}(\omega^{k}, \varphi^{k}, \kappa^{k}) \begin{bmatrix} x_p \\ y_p \\ -f \end{bmatrix}
$$
with $X_C^k, Y_C^k, Z_C^k, \omega^k, \varphi^k, \kappa^k$ the EOPs at the kth iteration.
The point on the ray closest to the natural cubic spline is found by minimizing the following target function for every spline segment. The transformation parameters relating an image point and its closest spline segment can then be estimated using the least squares method:
$$
\Phi(l, t) \equiv \left\| \Xi(l) - S_i(t) \right\|^2 = \underset{l,\,t}{\mathrm{stationary}}
$$
The global minimum of Φ(l, t) can be calculated by ∇Φ(l, t) = 0 or ∂Φ/∂l = ∂Φ/∂t = 0. Substituting (28) and (29) into (31) and taking the derivatives with respect to l and t leads to:
$$
\begin{aligned}
\frac{1}{2}\frac{\partial \Phi}{\partial l}
&= \left(X_C + d_1 l - a_{i0} - a_{i1} t - a_{i2} t^2 - a_{i3} t^3\right) d_1
+ \left(Y_C + d_2 l - b_{i0} - b_{i1} t - b_{i2} t^2 - b_{i3} t^3\right) d_2 \\
&\quad + \left(Z_C + d_3 l - c_{i0} - c_{i1} t - c_{i2} t^2 - c_{i3} t^3\right) d_3 = 0 \\
\frac{1}{2}\frac{\partial \Phi}{\partial t}
&= \left(X_C + d_1 l - a_{i0} - a_{i1} t - a_{i2} t^2 - a_{i3} t^3\right)\left(-a_{i1} - 2 a_{i2} t - 3 a_{i3} t^2\right) \\
&\quad + \left(Y_C + d_2 l - b_{i0} - b_{i1} t - b_{i2} t^2 - b_{i3} t^3\right)\left(-b_{i1} - 2 b_{i2} t - 3 b_{i3} t^2\right) \\
&\quad + \left(Z_C + d_3 l - c_{i0} - c_{i1} t - c_{i2} t^2 - c_{i3} t^3\right)\left(-c_{i1} - 2 c_{i2} t - 3 c_{i3} t^2\right) = 0
\end{aligned}
$$
Because equation (32) is nonlinear in l and t, no closed-form solution for the global minimum exists, and the relationship between an image space point and its corresponding spline segment cannot be established directly with this minimization method.
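The minimization can nevertheless be carried out numerically. The sketch below uses the fact that for fixed t the optimal ray parameter has the closed form $l^{*} = d \cdot (S_i(t) - C)/\lVert d \rVert^2$, and then samples t densely over the segment; the sampling density and example geometry are assumptions made for illustration:

```python
import numpy as np

def closest_point_ray_spline(C, d, coeffs, n=2001):
    """Numerical minimization of Phi(l, t) = ||Xi(l) - S_i(t)||^2.
    For fixed t the optimal ray parameter is l* = d.(S(t) - C)/||d||^2;
    t is then found by dense sampling over [0, 1], since the
    stationarity conditions are nonlinear in l and t."""
    t = np.linspace(0.0, 1.0, n)
    powers = np.vstack([np.ones_like(t), t, t**2, t**3])
    S = (coeffs @ powers).T                  # n x 3 spline points
    l = (S - C) @ d / (d @ d)                # per-t optimal ray parameter
    gap = C + np.outer(l, d) - S             # Xi(l*) - S(t)
    i = np.argmin((gap ** 2).sum(axis=1))
    return l[i], t[i]
```

For a straight test "spline" $S(t) = (t, 0, 0)$ and a ray dropping from (0.5, 1, 0) along $-Y$, the closest pair is recovered at $t = 0.5$, $l = 1$.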

4. Experiments and Results

This section demonstrates the feasibility and performance of the proposed model for the acquisition of spline parameters, spline location parameters, and image orientation parameters based on control and tie splines in object space, using simulated and real data sets. In general photogrammetric tasks, the correspondence between image edge features must be established either automatically or manually, but in this study such correspondence is not required. In a series of six experiments with the synthetic data set, the first test recovers spline parameters and spline location parameters in the error free EOP case. The second test recovers the partial spline parameters related to the spline shape. The third estimates the spline location parameters with error free EOPs. The fourth calculates EOPs and spline location parameters, and the fifth estimates EOPs with fully controlled splines, in which the parametric curves used as control features are assumed to be error free. In the last experiment, EOPs and tie spline parameters are obtained using the control spline.
Object space knowledge concerning splines, their relationships, and the orientation information of images can be considered as control information. Spline parameters in a partial control spline or orientation parameters can be considered as stochastic constraints in the integrated adjustment model. The starting point of a spline is considered to be a known parameter in the partial control spline in which a0,b0, and c0 of the X, Y, and Z coordinates of a spline are known. The number of unknowns is displayed in Table 1 and Figure 3, where n is the number of points in the object space, t shows the number of spline location parameters, and m represents the number of overlapped images in the target area.
Four points on a spline segment in one image are the only independent observations, so additional points on the same segment do not provide nonredundant information to reduce the overall deficiency of the EOP and spline parameter recovery. To verify the information content of an image spline, we demonstrate that any five points on a spline segment generate a dependent set of extended collinearity equations: any combination of four points, yielding eight collinearity equations, is an independent set of observations, but five points, bearing ten collinearity equations, produce a dependent set related to the correspondence between a natural cubic spline in image and object space. More than four point observations on an image spline segment increase the redundancy, and hence the accuracy, but do not decrease the overall rank deficiency of the proposed adjustment system. The case of a polynomial of degree two can be treated in the same fashion: three points on a quadratic polynomial curve in one image form the only independent set, so additional points on the same curve segment are dependent observations that increase the redundancy but provide no nonredundant information.
The amount of information carried by a natural cubic spline can be calculated with the redundancy budget. Every spline segment has 12 parameters, and every point measured on a spline segment contributes one additional parameter. Let n be the number of points measured on one spline segment in image space and m the number of images that contain a tie spline. There are 2nm collinearity equations and m(n − 1) arc-length parameterization equations, against 12 spline segment parameters and nm spline location parameters as unknowns. The redundancy is therefore 2nm − m − 12 for one spline segment, so if two images (m = 2) are used for bundle block adjustment, the redundancy is 4n − 14. Four points are required to determine the spline and spline location parameters of one segment, and each point measurement contributes one degree of freedom to the overall redundancy budget through the extended collinearity equations; the arc-length parameterization also contributes one degree of freedom. A fifth point does not provide additional information to reduce the overall deficiency but only strengthens the spline parameters, that is, it increases the overall precision of the estimated parameters.
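The counting argument above is simple enough to encode directly; this sketch hard-codes the paper's 12 segment parameters and reproduces the stated totals:

```python
def spline_redundancy(n, m):
    """Redundancy budget of one tie spline segment: 2nm collinearity
    equations plus m(n - 1) arc-length equations, against 12 spline
    segment parameters and nm spline location parameters."""
    equations = 2 * n * m + m * (n - 1)
    unknowns = 12 + n * m
    return equations - unknowns
```

With m = 2 images this reduces to 4n − 14, so four measured points give a redundancy of two and each further point adds four.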
This fact shows the advantage of adopting splines, whose number of degrees of freedom is four, because for straight tie lines only two points per line are independent. The independent information of a straight line, its number of degrees of freedom, is two, obtained from two points or from a point with its tangent direction. The redundancy is r = 2m − 4 for a line expression with four parameters, because there are 2nm collinearity equations and 4 + nm unknowns [49]. Only two points (n = 2) are available to determine the four line parameters with two images (m = 2), so at least three images must contain a tie line. The information content of t tie lines on m images is t(2m − 4). One straight line adds two degrees of freedom to the redundancy budget, and at least three lines are required in the space resection. An additional point on a straight line does not provide additional information to reduce the rank deficiency of the recovery of EOPs but only contributes to the image line coefficients. If spline location parameters or spline parameters enter the integrated adjustment model through stochastic constraints, the extended collinearity equations are enough to solve the system without the arc-length parameterization.
The redundancy budget of a tie point is r = 2m − 3, so tie points provide one more independent equation than tie lines. However, using tie points requires a semiautomatic matching procedure to identify the tie points on all the images, whereas using linear features provides a more reliable and accurate basis for object recognition, pose determination, and other higher level photogrammetric activities than using point features.

4.1. Synthetic Data Description

To evaluate the new bundle block adjustment model using natural cubic splines, an analysis of the sensitivity and robustness of the model is required. The model suitability can be verified by using the estimated parameters with a dispersion matrix that includes standard deviations and correlations. The accuracy of bundle block adjustment is determined by the geometry of the complete image block and by the quality of the position and attitude information of the camera. A simulation of the bundle block adjustment is therefore carried out prior to an actual experiment with real data in order to evaluate the performance of the proposed algorithms; such a simulation can control the measurement errors to minimize random noise affecting the overall geometry of a block. Individual observations are generated based on the general situation of bundle block adjustment in order to estimate the properties of the proposed algorithms, and a simulation allows geometric problems or conditions to be examined across various experiments. A spline is derived via three ground control points (3232, 4261, 18), (3335, 4343, 52), and (3373, 4387, 34). Several factors that affect the estimates of exterior orientation parameters, spline parameters, and spline location parameters are discerned using the proposed bundle block adjustment model with both simulated and real image blocks.

4.2. Experiments with Error Free EOPs

Spline parameters and spline location parameters are dependent upon various controls, and the unknowns can be obtained by a combined model of extended collinearity equations and the arc-length parameterization equations of splines. Splines in the object space are considered as tie lines in the same fashion as tie points in a conventional bundle block adjustment. Data on the exterior orientation parameters is considered as control information in this experiment. A well-known fact in employing the least squares system is that good initial estimates of true values make the system swiftly convergent towards the correct solution.
Normally distributed random noise with zero mean and a standard deviation of σ = ±5 μm is added to the points in the image space coordinate system in all the experiments. Generally, the larger the noise level, the more accurate the initial approximations must be to achieve convergence; in the worst case, a large noise level prevents the proposed model from converging to the estimates, because the convergence radius is proportional to the noise level. The parameter estimation is sensitive to the noise of the image measurement, and error propagation related to noise in the image space observations is one of the most important elements of estimation theory. The proposed bundle block adjustment can be evaluated statistically using the variances and covariances of the parameters, because a small variance indicates that the estimated values have a small range and a large variance means that the estimates are not calculated properly; the parameter variance ranges from zero for error free parameters to infinity for completely unknown parameters. The result for one spline segment is shown in Table 3, with ξ0 the initial values and ξ̂ the estimates. The estimated spline and spline location parameters, along with their standard deviations, are obtained without knowledge of the point-to-point correspondence.
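The noise model used in the experiments (zero-mean Gaussian, σ = 5 μm, applied to photo coordinates in millimetres) can be sketched as follows; the seeded generator is an assumption added for reproducibility:

```python
import numpy as np

def add_image_noise(coords_mm, sigma_um=5.0, seed=0):
    """Perturb photo coordinates (in mm) with zero-mean Gaussian noise
    of the stated standard deviation (5 micrometres in the experiments)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma_um * 1e-3, size=np.shape(coords_mm))
    return coords_mm + noise
```

Over many samples the perturbations average to zero with a spread of about 0.005 mm, matching the stated noise level.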
If no random noise is added to the image points, the estimates converge to the true values. The quality of the initial estimates is important in the least squares system because it determines the number of iterations and the accuracy of the convergence. The assumption is that two points on one spline segment are measured in each image, so the total number of equations is 2 × 6 (the number of images) × 2 (the number of points) + 6 (the number of arc lengths) = 30, and the total number of unknowns is 12 (the number of spline parameters) + 12 (the number of spline location parameters) = 24. The redundancy (the number of equations minus the number of parameters), that is, the degrees of freedom, is six. While some geometric constraints such as slope and distance observations are dependent on the extended collinearity equations using splines, other constraints such as the arc length increase the nonredundant information in the adjustment and reduce the overall rank deficiency of the system.
The coplanarity approach is another mathematical model of the perspective relationship between image and object space features: the projection plane defined by the perspective center in image space and the plane containing the straight line in object space are identical. Because the coplanarity condition holds only for straight lines, the coplanarity approach cannot be extended to curves. Object space knowledge about the starting point of a spline can be employed in bundle block adjustment. Because control information about a starting point covers only three of the 12 unknown parameters of a spline, a spline with a known starting point is called a partial control spline. The three spline parameters related to the starting point are set as stochastic constraints, and the result is shown in Table 4. The total number of equations is 2 × 6 (the number of images) × 2 (the number of points) + 6 (the number of arc lengths) = 30, and the total number of unknowns is 9 (the number of partial spline parameters) + 12 (the number of spline location parameters) = 21, so the redundancy is nine. Convergence of the partial spline and spline location parameters has been achieved with a partial control spline.
In the next experiment, spline location parameters are estimated with known EOPs and a full control spline. Because spline parameters and spline location parameters are mutually dependent, the unknowns can be obtained from the model of an observation equation with stochastic constraints. In this experiment, the spline parameters are set as stochastic constraints and the result is shown in Table 5.
The total number of equations is 2 × 6 (the number of images) × 3 (the number of points) = 36, and the total number of unknowns is 18 (the number of spline location parameters), so the redundancy is 18. Because the spline location parameters are independent of each other, the arc-length parameterization is not required. The result indicates that convergence of the spline location parameters has been achieved with the fixed spline parameters treated as stochastic constraints. The proposed model is robust with respect to the initial approximations of the spline parameters. The uncertainty related to the representation of a natural cubic spline is described in the dispersion matrix.
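The estimation scheme with stochastic constraints can be sketched generically: observation equations A x = y are augmented by weighted constraint equations K x = z, and tight constraint weights hold the constrained parameters (e.g., a full control spline) near their given values. This is a minimal illustration of the Gauss–Markov model with stochastic constraints, not the authors' implementation; all names are hypothetical:

```python
import numpy as np

# Gauss-Markov model with stochastic constraints (a generic sketch):
#   A x = y + e      observation equations, weight matrix P
#   K x = z + e_c    stochastic constraints, weight matrix Pc
def solve_with_constraints(A, y, K, z, P=None, Pc=None):
    P = np.eye(A.shape[0]) if P is None else P
    Pc = np.eye(K.shape[0]) if Pc is None else Pc
    # Augmented normal equations: (A'PA + K'PcK) x = A'Py + K'Pc z
    N = A.T @ P @ A + K.T @ Pc @ K
    c = A.T @ P @ y + K.T @ Pc @ z
    return np.linalg.solve(N, c)

# A single observation x1 + x2 = 2 is rank deficient on its own;
# a tightly weighted constraint x1 = 1 removes the deficiency.
x = solve_with_constraints(np.array([[1.0, 1.0]]), np.array([2.0]),
                           np.array([[1.0, 0.0]]), np.array([1.0]),
                           Pc=np.array([[1e6]]))
```

Increasing the constraint weight Pc pulls the constrained parameters toward their a-priori values, which is how fixed spline parameters act as stochastic constraints while the remaining unknowns are estimated.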

4.3. Recovery of EOPs and Spline Parameters

The object space knowledge of splines is available to recover the exterior orientation parameters in a bundle block adjustment. Control spline and partial control spline approaches are applied to verify the feasibility of using control information with splines. In both cases, equations of the arc-length parameterization are not necessary if we have enough equations to solve the system because spline parameters are independent of each other. In the experiment for a full control spline, the total number of equations is 2 × 6 (the number of images) × 4 (the number of points) + 3 (the number of arc lengths) × 6 (the number of images) = 66, and the total number of unknowns is 36 (the number of EOPs) + 24 (the number of spline location parameters) = 60. The redundancy is six. In the case of the partial control spline with one spline segment, the total number of equations is 2 × 6 (the number of images) × 4 (the number of points) + 3 (the number of arc lengths) × 6 (the number of images) = 66, and the total number of unknowns is 36 (the number of EOPs) + 9 (the number of partial spline parameters) + 24 (the number of spline location parameters) = 69. Thus, one more segment is required to solve the underdetermined system. The total number of equations using two spline segments is 2 × 6 (the number of images) × 4 (the number of points) × 2 (the number of spline segments) + 3 (the number of arc lengths) × 6 (the number of images) × 2 (the number of spline segments) = 132, and the total number of unknowns is 36 (the number of EOPs) + 9 (the number of partial spline parameters) × 2 (the number of spline segments) + 24 (the number of spline location parameters) × 2 (the number of spline segments) = 102. The redundancy is 30. A convergence of the EOPs of an image block and the spline parameters has been achieved in both experiments.
Table 6 shows the convergence of the EOPs and spline location parameters. In the dispersion matrix, the correlation coefficient between parameters XC and φ is high (ρ ≈ 1); that is, these two EOPs are highly correlated. The correlation coefficient between parameters YC and ω is approximately 0.85. In general, the correlation between XC and φ is higher than that between YC and ω.
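These correlation coefficients are obtained from the dispersion (covariance) matrix of the adjustment in the usual way, ρ_ij = Q_ij / √(Q_ii Q_jj). A minimal sketch with a hypothetical 2 × 2 block for (XC, φ):

```python
import numpy as np

# Correlation of two parameters from the dispersion matrix Q of the
# adjustment: rho_ij = Q_ij / sqrt(Q_ii * Q_jj).  Generic sketch.
def correlation(Q, i, j):
    return Q[i, j] / np.sqrt(Q[i, i] * Q[j, j])

# Hypothetical covariance block for (XC, phi) showing strong correlation.
Q = np.array([[4.0, 1.9],
              [1.9, 1.0]])
print(correlation(Q, 0, 1))  # prints 0.95
```

A near-unit correlation between XC and φ is the familiar projection-geometry coupling between a lateral camera shift and a tilt about the flight direction.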
Because a control spline provides object space information about the coordinate system, which has a datum defect of seven, tie spline parameters and EOPs can be recovered simultaneously. In the experiment with combined splines, the total number of equations is 2 × 6 (the number of images) × 3 (the number of points) × 2 (the number of splines) + 12 (the number of arc lengths) × 2 (the number of splines) = 96, and the total number of unknowns is 36 (the number of EOPs) + 12 (the number of tie spline parameters) + 18 (the number of tie spline location parameters) + 18 (the number of control spline location parameters) = 84, so the redundancy is 12.
Knowledge of object space information about a spline referred to as a full control spline is available prior to aerial triangulation. A control spline is considered to be a stochastic constraint in the proposed adjustment model and the representation of a control spline is the same as that of a tie spline. The result for combined splines that demonstrates the feasibility of using tie splines and control splines for bundle block adjustment is illustrated in Table 7.
Iterating with an incorrect spline segment, in which a spline in the image space does not lie on the projection of the 3D spline in the object space, results in divergence of the system. A control spline is taken to be error free, but in reality this assumption does not hold: initial data such as a GIS database, maps, or orthophotos cannot be error free, so the accuracy of the control splines propagates into the proposed bundle block adjustment algorithm.

4.4. Tests with Real Data

In this section, experiments with real data are undertaken to verify the feasibility of the proposed bundle block adjustment algorithm using splines for the recovery of EOPs and spline parameters. Medium-scale aerial images covering the area of Jakobshavn Isbrae in West Greenland are employed for this study. The aerial photographs were obtained by Kort og Matrikelstyrelsen (KMS: Danish National Survey and Cadastre) in 1985. KMS established aerial triangulation using GPS ground control points with a root mean square error of ±1 pixel under favorable circumstances, and the images were oriented to the WGS84 reference frame. Technical information on the aerial images is given in Table 8.
The diapositive films were scanned with a RasterMaster photogrammetric precision scanner, which has a maximum image resolution of 12 μm and a scan dimension of 23 cm × 23 cm to obtain digital images for a softcopy workstation as seen in Figure 6.
The first experiment is the recovery of spline parameters with known EOPs obtained by manual operation using a softcopy workstation. A spline consists of four parts and the second segment parameters are recovered. The total number of equations is 2 × 3 (the number of images) × 3 (the number of points) + 2 (the number of arc lengths) × 3 (the number of images) = 24, and the total number of unknowns is 12 (the number of spline parameters) + 9 (the number of spline location parameters) = 21 so the redundancy is three. Table 9 shows the convergence achievement of spline and spline location parameters.
Estimation of spline parameters, including their location parameters, is established through the relationship between splines in the object space and their projections in the image space, without knowledge of the point-to-point correspondence. Because bundle block adjustment using splines does not require conjugate points generated from point-to-point correspondence knowledge, a more robust and flexible matching algorithm can be adopted. Table 10 shows the recovery obtained with the available object space information and without point-to-point correspondence knowledge (the full control spline). All measured locations are assumed to lie on the second spline segment, and the second spline segment as calculated from the softcopy workstation is used as control information.
The next experiment is the recovery of EOPs with a control spline. The spline control points are (534415.91, 7671993.05, −18.97), (535394.52, 7672045.02, 2.127), (536110.66, 7672024.29, −13.897), and (536654.04, 7671016.20, −2.51). Even though edge detectors are widely used in digital photogrammetry and remote sensing software, the control points are extracted manually because edge detection is not our main goal. Among the three segments, the second spline segment is used for the EOP recovery. The information of the control spline is obtained by manual operation using the softcopy workstation with an estimated accuracy of ±1 pixel. The convergence radius of the proposed iterative algorithm is proportional to the estimated accuracy level. The image coordinate system is converted into the photo coordinate system using the interior orientation parameters from KMS. The association between a point on a 3D spline segment and a point on a 2D image is not established in this study. Of course, 3D spline measurement in the stereo model using the softcopy workstation cannot be error free, so the accuracy of the control spline propagates into the recovery of EOPs. The result is illustrated in Table 11. The spline control information is utilized as stochastic constraints in the adjustment model. Because adding these constraints removes the rank deficiency of the Gauss–Markov model corresponding to the spline parameters, which are dependent upon the spline location parameters, a bundle block adjustment can be made using only the extended collinearity equations for natural cubic splines.
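The extended collinearity model used throughout these experiments can be sketched as follows: a point on a cubic segment is generated from the spline coefficients at location parameter t and then projected with the standard collinearity equations, so the image coordinates become functions of both the EOPs and the spline parameters. This is a simplified illustration with hypothetical numbers, not the paper's implementation:

```python
import numpy as np

def spline_point(coeffs, t):
    """Evaluate one 3D cubic segment, e.g. X(t) = a0 + a1*t + a2*t^2 + a3*t^3.
    coeffs is 3x4: rows are the X, Y, Z polynomials, columns are powers of t."""
    powers = np.array([1.0, t, t**2, t**3])
    return coeffs @ powers

def collinearity(X, Xc, R, f):
    """Project object point X into photo coordinates (collinearity equations):
    Xc is the perspective center, R the rotation matrix, f the focal length."""
    u, v, w = R @ (X - Xc)
    return np.array([-f * u / w, -f * v / w])

# Hypothetical segment coefficients and exposure (identity rotation).
coeffs = np.array([[3335.0, 70.5, -48.9, 16.6],   # a0..a3 for X
                   [4343.1, 63.0, -28.8,  9.9],   # b0..b3 for Y
                   [  52.0,  8.1, -39.4, 13.4]])  # c0..c3 for Z
Xc = np.array([3300.0, 4000.0, 500.0])
x_photo = collinearity(spline_point(coeffs, 0.3), Xc, np.eye(3), f=0.15)
```

Linearizing `collinearity(spline_point(...))` with respect to the EOPs, the spline coefficients, and t yields the design matrix of the extended collinearity equations used in the adjustment.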

5. Conclusions

In this paper, the traditional least squares bundle block adjustment process has been augmented to support splines instead of conventional point features. Estimation of EOPs and spline parameters, including location parameters, is established through the relationship between splines in the object space and their projections in the image space, without any knowledge of the point-to-point correspondence. Because bundle block adjustment using splines does not require conjugate points generated from point-to-point correspondence knowledge, a more reliable and flexible matching algorithm can be adopted. Point-based aerial triangulation with experienced human operators is effective for traditional photogrammetric activities but is not appropriate within the autonomous environment of digital photogrammetry. Feature-based aerial triangulation is suitable for the development of reliable and accurate automation techniques. If linear features are employed as control features, they provide advantages over point features in the automation of aerial triangulation. Point-based aerial triangulation, based on the manual measurement and identification of conjugate points, is less reliable than feature-based aerial triangulation because it suffers from limited visibility (occlusion), ambiguity (repetitive patterns), and a lack of semantic information in the light of robust automation. Automation of aerial triangulation and pose estimation is obstructed by the correspondence problem, and the employment of splines is one way to overcome the occlusion and ambiguity issues. Eliminating the manual identification of corresponding entities in two images is crucial to the automation of photogrammetric tasks. A further weakness of point-based approaches is their weak geometric constraints compared with feature-based methods, so accurate initial values for the unknown parameters are required.
Feature-based aerial triangulation can be implemented without conjugate points because the measured points in each image are not conjugate points in the proposed adjustment model. Thus, tie splines that do not appear together in all the overlapping images can be employed in feature-based aerial triangulation. Another advantage of employing splines is that the adoption of high level features increases the feasibility of using geometric information and provides an appropriate analytical solution that increases the redundancy of aerial triangulation.
3D linear features expressed by 3D natural cubic splines are employed as the mathematical model of linear features in the object space, with their counterparts in the projected image space, for bundle block adjustment. To solve the overparameterization of 3D natural cubic splines, an arc-length parameterization using Simpson's rule is developed; in the case of straight lines and conic sections, spline tangents can provide additional equations for the overparameterized system. Photogrammetric triangulation by the proposed model, including the extended collinearity and arc-length parameterization equations, is developed to show the feasibility of tie and control splines for the estimation of the exterior orientation of multiple images, splines, and spline location parameters. A stochastic constraint for a spline segment is examined for its utility as a full or partial control spline: known EOPs with a tie, partial control, and full control spline, and unknown EOPs with a partial and full control spline. In addition, the information content of an image spline is calculated, and the feasibility of a tie spline and a control spline for a block adjustment is described. A simulated bundle block adjustment is implemented prior to the actual experiment with real data in order to evaluate the performance of the proposed algorithms. A simulation can control the measurement errors so that random noise minimally affects the overall geometry of a block. The individual observations are generated based on the general situation of bundle block adjustment to estimate the properties of the proposed algorithms. A simulation also allows adjustment for geometric problems or varying conditions within individual experiments.
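The arc-length computation mentioned above can be sketched with Simpson's rule: the length of a cubic segment is the integral of the speed ‖X′(t)‖, which has no closed form for a general cubic, so it is evaluated numerically. The following is a sketch under that parameterization, not the authors' code:

```python
import numpy as np

def segment_speed(coeffs, t):
    """|X'(t)| for a 3D cubic segment; coeffs is 3x4 (rows X, Y, Z;
    columns are the coefficients of t^0..t^3)."""
    dpowers = np.array([0.0, 1.0, 2.0 * t, 3.0 * t**2])
    return np.linalg.norm(coeffs @ dpowers)

def arc_length(coeffs, t0=0.0, t1=1.0, n=100):
    """Composite Simpson's rule for the arc-length integral (n must be even)."""
    ts = np.linspace(t0, t1, n + 1)
    f = np.array([segment_speed(coeffs, t) for t in ts])
    h = (t1 - t0) / n
    weights = np.ones(n + 1)
    weights[1:-1:2] = 4.0   # odd interior nodes
    weights[2:-1:2] = 2.0   # even interior nodes
    return h / 3.0 * weights @ f

# A degenerate "cubic" that is a straight line recovers the chord length:
line = np.array([[0.0, 3.0, 0.0, 0.0],
                 [0.0, 4.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]])
print(arc_length(line))  # ≈ 5.0, the chord length
```

An observed arc length between two measured locations t_i and t_j then supplies one additional equation per image, which is how the arc-length observations enter the equation counts used in the experiments.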

Acknowledgments

This research was supported by a grant (07KLSGC04) from the Cutting-edge Urban Development—Korean Land Spatialization Research Project funded by the Ministry of Construction & Transportation of the Korean government.

Figure 1. Natural cubic splines.
Figure 2. The projection of a point on a spline.
Figure 3. Different examples. (a) Known EOPs with tie splines, (b) Known EOPs with partial control splines, (c) Known EOPs with full control splines, (d) Unknown EOPs with partial control splines, and (e) Unknown EOPs with full control splines. (Red: Unknown parameters, Green: Partially fixed parameters, Blue: Fixed parameters).
Figure 4. Six image block.
Figure 5. Natural cubic spline.
Figure 6. Test images. (a) Image 762, (b) Image 764, (c) Image 766, and (d) Target area.
Table 1. Number of unknowns.
EOP | Spline | Number of unknowns
Known EOP | Tie spline | 12(n−1) + t
Known EOP | Partial control spline | 9(n−1) + t
Known EOP | Full control spline | t
Unknown EOP | Partial control spline | 6m + 9(n−1) + t
Unknown EOP | Full control spline | 6m + t
Table 2. EOPs of six bundle block images for simulation.
Parameter | XC [m] | YC [m] | ZC [m] | ω [deg] | φ [deg] | κ [deg]
Image 1 | 3000.00 | 4002.00 | 503.00 | 0.1146 | 0.0573 | 5.7296
Image 2 | 3305.00 | 4005.00 | 499.00 | 0.1432 | 0.0859 | −5.7296
Image 3 | 3610.00 | 3995.00 | 505.00 | 0.1719 | 0.4584 | 2.8648
Image 4 | 3613.00 | 4613.00 | 507.00 | 0.2865 | −0.0573 | 185.6383
Image 5 | 3303.00 | 4617.00 | 493.00 | −0.1432 | 0.4011 | 173.0333
Image 6 | 2997.00 | 4610.00 | 509.00 | −0.1833 | −0.2865 | 181.6276
Table 3. Spline parameter and spline location parameter recovery.
Spline location parameters
Image 1 | Image 2 | Image 3
t1 | t7 | t2 | t8 | t3 | t9
ξ0: 0.02 | 0.33 | 0.09 | 0.41 | 0.16 | 0.47
ξ̂: 0.0415±0.0046 | 0.3615±0.0016 | 0.0917±0.0017 | 0.4158±0.0032 | 0.1412±0.0043 | 0.4617±0.0135
Image 4 | Image 5 | Image 6
t4 | t10 | t5 | t11 | t6 | t12
ξ0: 0.18 | 0.51 | 0.25 | 0.52 | 0.33 | 0.57
ξ̂: 0.2174±0.0098 | 0.4974±0.0079 | 0.2647±0.0817 | 0.5472±0.0317 | 0.3133±0.0127 | 0.6157±0.1115
Spline parameters
a10 | a11 | a12 | a13 | b10 | b11
ξ0: 3322.17 | 72.16 | −45.14 | 27.15 | 4377.33 | 69.91
ξ̂: 3335.0080±0.0004 | 70.4660±0.0585 | −48.8529±0.8310 | 16.5634±1.2083 | 4343.0712±0.0004 | 63.0211±0.0258
b12 | b13 | c10 | c11 | c12 | c13
ξ0: −17.49 | 13.68 | 48.82 | 10.15 | −27.63 | 21.90
ξ̂: −28.7770±0.2193 | 9.8893±0.2067 | 51.9897±0.0006 | 8.1009±0.0589 | −39.3702±0.7139 | 13.3904±1.0103
Table 4. Partial spline parameter and spline location parameter recovery.
Spline location parameters
Image 1 | Image 2 | Image 3
t1 | t7 | t2 | t8 | t3 | t9
ξ0: 0.04 | 0.36 | 0.09 | 0.40 | 0.14 | 0.45
ξ̂: 0.0525±0.0067 | 0.3547±0.0020 | 0.1128±0.0047 | 0.4157±0.0091 | 0.1575±0.0028 | 0.4543±0.0083
Image 4 | Image 5 | Image 6
t4 | t10 | t5 | t11 | t6 | t12
ξ0: 0.21 | 0.50 | 0.27 | 0.54 | 0.31 | 0.61
ξ̂: 0.1916±0.0037 | 0.5128±0.0087 | 0.2563±0.0044 | 0.5319±0.0056 | 0.2961±0.0139 | 0.6239±0.1147
Spline parameters
a11 | a12 | a13 | b11 | b12 | b13
ξ0: 75.14 | −52.87 | 30.71 | 70.05 | −40.33 | 10.98
ξ̂: 71.7099±0.0795 | −47.2220±0.6872 | −15.8814±2.6439 | 62.3703±0.0579 | −28.7260±0.6473 | 7.1137±1.7699
c11 | c12 | c13
ξ0: 0.82 | −30.72 | 10.51
ξ̂: 7.1198±0.9483 | −35.3841±1.3403 | 8.1557±3.5852
Table 5. Spline location parameter recovery.
Spline location parameters
Image 1 | Image 2
t1 | t7 | t13 | t2 | t8 | t14
ξ0: 0.01 | 0.37 | 0.63 | 0.09 | 0.44 | 0.71
ξ̂: 0.0589±0.0015 | 0.3570±0.0076 | 0.6712±0.0197 | 0.1134±0.0072 | 0.4175±0.0054 | 0.7069±0.0080
Image 3 | Image 4
t3 | t9 | t15 | t4 | t10 | t16
ξ0: 0.17 | 0.46 | 0.74 | 0.21 | 0.49 | 0.81
ξ̂: 0.1757±0.0031 | 0.4784±0.0071 | 0.7631±0.0095 | 0.2039±0.0102 | 0.4869±0.0030 | 0.8122±0.0044
Image 5 | Image 6
t5 | t11 | t17 | t6 | t12 | t18
ξ0: 0.26 | 0.53 | 0.84 | 0.29 | 0.61 | 0.89
ξ̂: 0.2544±0.0050 | 0.5554±0.0069 | 0.8597±0.0089 | 0.3151±0.0095 | 0.6284±0.0052 | 0.9013±0.0086
Table 6. EOP and spline location parameter recovery.
EOPs
Parameter | XC [m] | YC [m] | ZC [m] | ω [deg] | φ [deg] | κ [deg]
Image 1 ξ0: 3007.84 | 4001.17 | 501.81 | 8.7090 | −9.7976 | −12.5845
Image 1 ξ̂: 3001.5852±0.0154 | 4001.2238±0.0215 | 503.2550±0.1386 | −0.8908±0.3895 | 0.3252±0.1351 | 6.0148±0.8142
Image 2 ξ0: 3308.17 | 4001.17 | 497.52 | 10.23 | 8.3144 | −5.5004
Image 2 ξ̂: 3305.1962±0.3804 | 4004.9827±0.1785 | 501.2641±0.2489 | −0.1247±0.0308 | −0.5497±0.0798 | −5.2858±0.4690
Image 3 ξ0: 3612.68 | 3993.37 | 506.32 | 5.2731 | 7.2581 | −10.135
Image 3 ξ̂: 3611.8996±0.1226 | 3995.7891±0.0695 | 505.1299±0.0337 | 0.1486±0.4467 | 0.1192±0.0168 | 2.3372±0.0794
Image 4 ξ0: 3619.75 | 4612.78 | 506.88 | 6.2571 | −5.3482 | 183.66
Image 4 ξ̂: 3612.7128±0.0258 | 4613.0145±0.01895 | 507.0654±0.0251 | −0.0921±0.7485 | −0.152±0.4505 | 184.5016±0.2289
Image 5 ξ0: 3301.84 | 4618.63 | 497.61 | −6.1731 | 7.5182 | 187.7145
Image 5 ξ̂: 3302.8942±0.0467 | 4617.0538±0.0249 | 492.9424±0.0704 | −0.6347±0.1413 | 0.2662±0.8006 | 171.9808±0.6445
Image 6 ξ0: 2999.59 | 4615.74 | 508.49 | −7.1651 | −4.8427 | 185.1057
Image 6 ξ̂: 2997.9827±0.0513 | 4610.1432±0.0249 | 509.2952±0.0401 | −0.1360±0.5659 | −0.1279±0.6225 | 183.1789±0.2271
Spline location parameters
Image 1 | Image 2
t1 | t7 | t13 | t19 | t2 | t8 | t14 | t20
ξ0: 0.04 | 0.28 | 0.52 | 0.76 | 0.08 | 0.32 | 0.56 | 0.80
ξ̂: 0.0432±0.0033 | 0.2980±0.0012 | 0.5176±0.0039 | 0.7705±0.0077 | 0.0813±0.0082 | 0.3338±0.0041 | 0.5715±0.0039 | 0.8136±0.0069
Image 3 | Image 4
t3 | t9 | t15 | t21 | t4 | t10 | t16 | t22
ξ0: 0.12 | 0.36 | 0.60 | 0.84 | 0.16 | 0.40 | 0.64 | 0.88
ξ̂: 0.01294±0.0036 | 0.3578±0.0092 | 0.6024±0.0046 | 0.8437±0.0079 | 0.1594±0.0115 | 0.4112±0.0057 | 0.6418±0.0029 | 0.9783±0.0037
Image 5 | Image 6
t5 | t11 | t17 | t23 | t6 | t12 | t18 | t24
ξ0: 0.20 | 0.44 | 0.68 | 0.92 | 0.24 | 0.48 | 0.72 | 0.96
ξ̂: 0.2039±0.0057 | 0.4461±0.0125 | 0.6713±0.0080 | 0.9264±0.0061 | 0.2483±0.0085 | 0.4860±0.0073 | 0.7181±0.0084 | 0.9613±0.0079
Table 7. EOP, control, and tie spline parameter recovery.
EOPs
Parameter | XC [m] | YC [m] | ZC [m] | ω [deg] | φ [deg] | κ [deg]
Image 1 ξ0: 3014.87 | 4007.18 | 500.79 | 0.9740 | −8.6517 | 7.2155
Image 1 ξ̂: 3000.5917±0.0011 | 4001.8935±0.0059 | 503.2451±0.1572 | −0.0974±0.1432 | 0.4297±0.0974 | 6.6005±0.2807
Image 2 ξ0: 3315.37 | 4008.57 | 503.31 | −8.4225 | −3.3232 | 7.2766
Image 2 ξ̂: 3305.1237±0.0057 | 4005.0571±0.0043 | 498.8916±0.0784 | −0.5214±0.3610 | −0.1948±0.1375 | −6.1421±0.5558
Image 3 ξ0: 3613.85 | 3991.17 | 508.37 | −1.3751 | 5.3783 | 4.3148
Image 3 ξ̂: 3609.5400±0.1576 | 3995.1419±0.0803 | 505.1791±0.0428 | 4.5378±5.4947 | 1.1746±0.3610 | 2.2288±0.4870
Image 4 ξ0: 3618.46 | 4617.61 | 503.18 | 8.5541 | 2.4287 | 182.7735
Image 4 ξ̂: 3613.1988±0.0599 | 4612.8281±0.0206 | 507.2056±0.0472 | 1.1803±0.2578 | −0.4068±0.2979 | 185.7014±0.1089
Image 5 ξ0: 3305.71 | 4620.37 | 491.17 | −8.7148 | −5.1487 | 183.1114
Image 5 ξ̂: 3302.9716±0.0718 | 4617.0808±0.0592 | 492.9357±0.0660 | −0.6990±0.1087 | 1.0485±0.1437 | 172.8671±0.2137
Image 6 ξ0: 3002.72 | 4613.63 | 491.22 | 8.5475 | 5.0124 | 178.2353
Image 6 ξ̂: 2996.9737±0.0315 | 4610.8773±0.0672 | 509.3724±0.0027 | −3.2888±0.0688 | 0.5672±0.3837 | 182.2693±0.2478
Control spline location parameters
Image 1 | Image 2 | Image 3
t1 | t7 | t13 | t2 | t8 | t14 | t3 | t9 | t15
ξ0: 0.04 | 0.36 | 0.66 | 0.11 | 0.41 | 0.71 | 0.16 | 0.46 | 0.76
ξ̂: 0.0597±0.0173 | 0.3495±0.0085 | 0.6518±0.0065 | 0.0982±0.0074 | 0.4085±0.0096 | 0.7087±0.0067 | 0.1494±0.0094 | 0.4499±0.0089 | 0.7564±0.0156
Image 4 | Image 5 | Image 6
t4 | t10 | t16 | t5 | t11 | t17 | t6 | t12 | t18
ξ0: 0.19 | 0.51 | 0.79 | 0.24 | 0.54 | 0.86 | 0.29 | 0.59 | 0.91
ξ̂: 0.2018±0.0043 | 0.4984±0.0078 | 0.8065±0.0096 | 0.2573±0.0086 | 0.5586±0.0068 | 0.8553±0.0110 | 0.3172±0.0088 | 0.6137±0.0078 | 0.8958±0.0085
Tie spline location parameters
Image 1 | Image 2 | Image 3
t1 | t7 | t13 | t2 | t8 | t14 | t3 | t9 | t15
ξ0: 0.03 | 0.34 | 0.67 | 0.09 | 0.39 | 0.71 | 0.14 | 0.47 | 0.73
ξ̂: 0.0680±0.0073 | 0.3577±0.0067 | 0.6694±0.0033 | 0.1141±0.0116 | 0.4032±0.0073 | 0.6937±0.0054 | 0.1495±0.0124 | 0.4599±0.0075 | 0.7618±0.0054
Image 4 | Image 5 | Image 6
t4 | t10 | t16 | t5 | t11 | t17 | t6 | t12 | t18
ξ0: 0.21 | 0.49 | 0.81 | 0.26 | 0.56 | 0.83 | 0.31 | 0.58 | 0.92
ξ̂: 0.1975±0.0026 | 0.5109±0.0019 | 0.8068±0.0216 | 0.2488±0.0773 | 0.5733±0.0027 | 0.8527±0.0138 | 0.3308±0.0034 | 0.6142±0.0115 | 0.9018±0.0317
Tie spline parameters
a10 | a11 | a12 | a13 | b10 | b11
ξ0: 3341.44 | 73.13 | −48.32 | −20.72 | 4337.49 | 56.97
ξ̂: 3335.0147±0.0012 | 71.2914±0.0478 | −47.5124±0.7959 | −14.8527±1.8668 | 4342.0369±0.0009 | 62.4762±0.0804
b12 | b13 | c10 | c11 | c12 | c13
ξ0: −36.55 | 2.57 | 44.16 | 3.65 | −28.22 | 7.99
ξ̂: −28.0982±0.4851 | 6.8679±1.4219 | 51.5228±0.0008 | 6.8220±0.0421 | −36.9681±1.9215 | 13.0338±2.0048
Table 8. Information about aerial images used in this study.
Table 8. Information about aerial images used in this study.
Vertical aerial photograph
Date: 9 July 1985
Origin: KMS
Focal length: 87.75 mm
Photo scale: 1:150,000
Pixel size: 12 μm
Scanning resolution: 12 μm
Ground sampling distance: 1.9 m
Table 9. Spline parameter and spline location parameter recovery.
Spline location parameters
Image 762
t1  t4  t7
ξ0  0.08  0.38  0.72
ξ̂  0.0844 ±0.0046  0.4258 ±0.0058  0.6934 ±0.0072
Image 764
t2  t5  t8
ξ0  0.22  0.53  0.82
ξ̂  0.2224 ±0.0175  0.5170 ±0.0104  0.8272 ±0.0156
Image 766
t3  t6  t9
ξ0  0.32  0.59  0.88
ξ̂  0.3075 ±0.0097  0.6176 ±0.0148  0.9158 ±0.0080
Spline parameters
a10  a11  a12  a13
ξ0  535000.00  830.00  −150.00  50.00
ξ̂  535394.1732 ±0.1273  867.6307 ±0.7142  −173.1357 ±7.6540  24.3213 ±21.3379
b10  b11  b12  b13
ξ0  7671000.00  150.00  140.00  −300.00
ξ̂  7672048.3173 ±0.2237  143.1734 ±1.6149  130.8147 ±10.9058  −290.1270 ±26.7324
c10  c11  c12  c13
ξ0  0.00  −10.00  −50.00  50.00
ξ̂  2.1913 ±0.0547  −3.7669 ±0.1576  −39.8003 ±9.1572  27.7922 ±19.6787
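The recovered coefficients above can be mapped back to object-space coordinates. As a minimal sketch (the paper's exact segment parameterization is not reproduced here, so the polynomial form is an assumption), suppose each coordinate of the first spline segment is a cubic polynomial in the location parameter t, i.e. X(t) = a10 + a11·t + a12·t² + a13·t³, and likewise Y(t) from the b coefficients and Z(t) from the c coefficients:

```python
def eval_spline(coeffs, t):
    """Evaluate one cubic polynomial c0 + c1*t + c2*t**2 + c3*t**3
    using Horner's scheme."""
    c0, c1, c2, c3 = coeffs
    return c0 + t * (c1 + t * (c2 + t * c3))

# Estimated (ξ̂) spline parameters from Table 9, standard deviations dropped.
a = (535394.1732, 867.6307, -173.1357, 24.3213)    # X(t) coefficients
b = (7672048.3173, 143.1734, 130.8147, -290.1270)  # Y(t) coefficients
c = (2.1913, -3.7669, -39.8003, 27.7922)           # Z(t) coefficients

# Estimated location parameter t1 observed on image 762 (Table 9).
t = 0.0844

# Object-space point on the spline implied by the adjustment,
# roughly (535466.2, 7672061.2, 1.6) under the assumed parameterization.
point = tuple(eval_spline(k, t) for k in (a, b, c))
print(point)
```

Under this reading, each recovered location parameter t pins one image observation to a specific 3D point on the adjusted spline, which is how the spline replaces discrete tie points in the block adjustment.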
Table 10. Spline location parameter recovery.
Spline location parameters
Image 762
t1  t4
ξ0  0.15  0.60
ξ̂  0.1647 ±0.0048  0.6177 ±0.0091
Image 764
t2  t5
ξ0  0.30  0.75
ξ̂  0.2872 ±0.0034  0.7481 ±0.0093
Image 766
t3  t6
ξ0  0.45  0.90
ξ̂  0.4362 ±0.0155  0.9249 ±0.0087
Table 11. EOP and spline location parameter recovery.
EOPs
Image  XC [m]  YC [m]  ZC [m]  ω [deg]  φ [deg]  κ [deg]
762
ξ0  547000.00  7659000.00  14000.00  3.8472  2.1248  91.8101
ξ̂  547465.37 ±15.0911  7658235.41 ±13.0278  13700.25 ±5.4714  0.3622 ±0.8148  0.5124 ±0.1784  91.5124 ±0.1717
764
ξ0  546500.00  7670000.00  13500.00  0.1125  0.6128  90.7015
ξ̂  546963.22 ±12.5460  7672016.87 ±17.1472  13708.82 ±7.1872  −0.3258 ±0.6913  −0.5217 ±0.8632  91.1612 ±1.1004
766
ξ0  546000.00  7680000.00  13700.00  1.4871  5.9052  92.0975
ξ̂  546547.58 ±13.8104  7685836.75 ±12.1486  13712.20 ±8.4854  1.2785 ±1.4218  0.5468 ±1.1957  92.9796 ±0.6557
Spline location parameters
Image 762
t1  t4  t7  t10
ξ0  0.08  0.32  0.56  0.80
ξ̂  0.0865 ±0.0097  0.3192 ±0.0159  0.5701 ±0.0072  0.8167 ±0.0088
Image 764
t2  t5  t8  t11
ξ0  0.16  0.40  0.64  0.88
ξ̂  0.1759 ±0.0067  0.4167 ±0.0085  0.6557 ±0.0131  0.8685 ±0.0092
Image 766
t3  t6  t9  t12
ξ0  0.24  0.48  0.72  0.96
ξ̂  0.2471 ±0.0086  0.4683 ±0.0069  0.7251 ±0.0141  0.9713 ±0.0089

Lee, W.H.; Yu, K. Bundle Block Adjustment with 3D Natural Cubic Splines. Sensors 2009, 9, 9629-9665. https://doi.org/10.3390/s91209629
