Article

Vanishing Point Extraction and Refinement for Robust Camera Calibration

1 Department of Civil Engineering, National Central University, Taoyuan City 32001, Taiwan
2 Center for Space and Remote Sensing Research, National Central University, Taoyuan City 32001, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2018, 18(1), 63; https://doi.org/10.3390/s18010063
Submission received: 5 October 2017 / Revised: 20 November 2017 / Accepted: 19 December 2017 / Published: 27 December 2017
(This article belongs to the Section Remote Sensors)

Abstract

This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from man-made features such as parallel lines and repeated patterns. With vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each group of collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. Experimental results indicate that the vanishing point refinement process significantly improves the camera calibration parameters, reducing the root mean square error (RMSE) of the constructed 3D model by about 30%.

1. Introduction

Camera calibration is an essential step in both photogrammetry and computer vision for extracting metric information from two-dimensional (2D) images. Calibration determines interior orientation parameters (IOPs), e.g., focal length, principal point, lens distortion, skew, and aspect ratio, as well as exterior orientation parameters (EOPs), i.e., camera orientation and position. Various camera calibration methods have been developed using three-dimensional (3D) reference objects, 2D planes, or even lines [1]. Traditional camera calibration can be achieved using Tsai's camera calibration model [2] or planar patterns [3]. This strategy only requires a known, planar calibration grid to estimate the IOPs and EOPs of the camera. Sturm and Maybank [4] summarized the singularities of calibration from one viewpoint with one or two planes. However, a calibration grid may not be easy to find and place properly, especially for in situ measurements.
Using vanishing points is an efficient way to obtain the camera pose directly from the scene by extracting parallel and perpendicular lines [5]. The geometric properties of vanishing points are well established in the photogrammetry literature. The principal point of the camera coincides with the orthocenter of the triangle whose vertices are the three vanishing points of three orthogonal directions [6]. Lines along these directions commonly appear in man-made structures, for instance, rectangular windows, floor lines, columns, and beams, and are useful for detecting vanishing points. However, computing precise and accurate vanishing points is a great challenge, because any deviation propagates into camera calibration and subsequent object reconstruction processes [7].
This study developed an effective and flexible camera calibration method, based on vanishing point extraction and refinement, that is particularly useful for on-site calibration. The geometric relations between the vanishing points and the camera system are defined according to collinearity condition equations. The developed algorithms require no prior knowledge of the camera's internal or external parameters. The proposed vanishing point refinement algorithm reduces the uncertainty caused by vanishing point localization errors.

2. Vanishing Point Estimation

Projecting detected line segments from the image plane onto the Gaussian sphere is one of the classic approaches [8,9] for detecting vanishing points. Each line can be represented as a circle using the angular parameterization (azimuth and elevation) of the Gaussian sphere. Vanishing points appear as the intersections of these circles, where a high density of crossings indicates a dominant direction. Thales' theorem can be used to optimize the position of reference points [10] and overcome the projection center location problem on the Gaussian sphere; the Thales circle constrains line segments passing through the vanishing point to subtend a right angle at the principal point. The optimal triangle area minimization can then be achieved using least-squares techniques, and the accuracy and automation can be improved using random sample consensus (RANSAC) [11,12,13]. The Hough transform is a well-known method for detecting parametric structures in images [14]. A double-cascaded Hough transform approach was introduced [15] to overcome the intrinsic limitations that prevent the extraction of line segments along the main directions. In order to reduce the error and identify the possible points of intersection, a voting scheme was proposed based on a set of rules that weights each pair of intersected line segments according to their geometric characteristics [16].
An iterated Hough transform method was proposed to help find vanishing points and lines [17,18]. The method investigated a bounded slope–intercept parametric representation by splitting the original unbounded space into three bounded subspaces in order to keep the symmetry intact. It also employed a filtering algorithm before applying the second Hough transform to help extract important information emerging in each Hough space. Also based on the cascade Hough transform, a filtering and validation algorithm was implemented to cluster the line segments and estimate the vanishing points simultaneously [19,20].
Instead of using a double transformation, another approach works directly on the first Hough polar plane [21]. By searching for a sinusoidal curve with appropriate amplitude and phase parameters, least-squares minimization is applied with a weighting ratio. The ratio considers the number of times that the parameter set was observed as a mapping point of a line in the image. Similarly, Cantoni et al. [22] applied a filtering algorithm directly on the image plane after the first Hough transformation. This threshold-based filter works efficiently on edge-detected images, but the camera's optical axis must be perpendicular to the reference plane and the horizontal line (vanishing line) should be parallel to the X axis. Some researchers combined fuzzy clustering algorithms to separate an image into several regions [23]. For each region, the vanishing lines and the vanishing point can be located individually using the Hough-based method, which helps extract local vanishing points from specific objects.
Besides using transformation-based parameter estimation, the grouping together of features that satisfy a geometric relationship can also be used to detect and estimate vanishing points and lines. For example, McLean and Kotturi [24] integrated image processing and analysis algorithms to produce a method for practical feature extraction. In their method, the use of histogram analysis, clustering, and numerical optimization to locate vanishing points eliminates the need for any a priori estimates of the number or location of vanishing points. In addition, including a line quality measure allows large line data sets to be used without decreasing the overall quality of the vanishing point estimates, further increasing the degree of automation.
There are three common types of geometric grouping [25]: (1) a family of equally spaced coplanar parallel lines; (2) a planar pattern obtained by repeating some element translated in the plane; and (3) a set of elements arranged in a regular planar grid. The presence or absence of such geometric constraints is strong evidence for or against hypotheses such as parallelism in the real world. Almansa et al. [26] developed a detection algorithm based on the Helmholtz principle proposed by [27]. They divided the image plane into radial vanishing regions, and used minimum description length to restrict the number of false vanishing points. However, this approach works only when the vanishing point is not located within the image boundary. Direct measurement of the raw image can be simplified [28] using a RANSAC line model with expectation maximization (EM) [29], or the J-linkage clustering algorithm [30]. Nonetheless, lens distortion and strong image noise still degrade the performance of the line extraction and grouping process.
For real-time vanishing point detection, the local dominant orientation signature (LDOS) descriptor was introduced [31] to extract structural features directly from the image domain. The descriptor divides an image into several square blocks and accumulates the edge magnitude for each of them. The candidate vanishing blocks can be estimated by comparing the spatial distances from neighboring blocks containing the perspective lines with a similar direction (orientation).

3. Proposed Method for Vanishing Point Estimation

The proposed vanishing point estimation method consists of three parts: image pre-processing, feature detection, and vanishing point localization.

3.1. Image Pre-Processing

The objective of pre-processing is to extract enough line segments for initial vanishing point detection. It is also useful for line-based radial distortion correction [32]. Firstly, straight line segments with sub-pixel accuracy are extracted using the Canny edge detector [33] with an additional linking and merging process. Merging aligned edges by orthogonal regression can increase the accuracy of their location and orientation.
An improved cascade Hough transform approach is proposed to extract line segments from the edge pixels and classify them according to probable vanishing point candidates. The two Hough transform steps are illustrated in Figure 1. The first Hough transform extracts line segments from the edge pixels. The initial vanishing point localization then uses the output of the first Hough transform to group the line segments passing through the same region of the image. The two Hough transforms are described in Section 3.2 and Section 3.3, respectively.

3.2. Feature Line Detection

The first Hough transform is commonly used to detect line segments in the image by keeping the dominant peaks in the normal-distance, normal-angle ($\rho$-$\vartheta$) space. The parameterization of the Hough transform is based on the orthogonal distance $\rho$ from the origin to the line and the direction $\vartheta$ of the normal to the line. Each pixel $p(x_p, y_p)$ forms a sinusoidal curve in the $\rho$-$\vartheta$ space:

$$\rho = x_p \cos\vartheta + y_p \sin\vartheta.$$

Thus, a set of points that form a straight line produces sinusoids crossing at the specific $(\rho, \vartheta)$ of that line, so finding collinear points in the image is converted into the problem of finding accumulated peaks in the $\rho$-$\vartheta$ space.
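As a concrete illustration of Equation (1), the following minimal sketch (in Python with NumPy; the function name and interface are our own, not the paper's implementation) accumulates edge pixels into a discretized $\rho$-$\vartheta$ space:

```python
import numpy as np

def hough_accumulate(edge_points, img_w, img_h, n_theta=180, rho_res=1.0):
    """First Hough transform (sketch): every edge pixel (x, y) votes along its
    sinusoid rho = x*cos(theta) + y*sin(theta); collinear pixels accumulate
    votes in the same (rho, theta) bin."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = float(np.hypot(img_w, img_h))           # largest possible |rho|
    n_rho = int(np.ceil(2.0 * rho_max / rho_res)) + 1
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    cols = np.arange(n_theta)
    for x, y in edge_points:
        rhos = x * cos_t + y * sin_t                  # Equation (1) for every theta
        rows = np.round((rhos + rho_max) / rho_res).astype(int)
        acc[rows, cols] += 1                          # one vote per theta column
    return acc, thetas, rho_max
```

Accumulator bins above a vote threshold correspond to the dominant line candidates that are passed to the second transform.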
Short lines or falsely detected edges significantly decrease the accuracy of line clustering and vanishing point calculation, which makes feature line detection and filtering indispensable. A voting scheme is used to select candidate peaks from the accumulated histogram for collinearity detection in the $\rho$-$\vartheta$ space. For the best results, this study iterates the $\rho$-$\vartheta$ parameters in an inverted-pyramid pattern, stopping when the detected vanishing points are stable, as explained in Section 3.3.

3.3. Initial Vanishing Point Localization

According to the detected peaks in the first Hough transform, a second transformation of those peaks is employed to identify line segments passing through the same point (or small area) of the image. Local maximum peaks in the first Hough space are collinear when their lines pass through the same image pixel (a possible vanishing point), because that pixel contributes to each of those $\rho$-$\vartheta$ accumulators.
Two conditions for obtaining stable vanishing points are considered. The first is the number of line segments in each direction, i.e., adjusting the line-group number threshold so that the detected lines are not dominated by a single direction. The other is that the representative vanishing points should remain stable under different $\rho$-$\vartheta$ parameters; varying the $\rho$-$\vartheta$ parameters increases the reliability of the vanishing point calculation.
Afterward, similarity rectification is applied to distinguish different line segments. Then, a least-squares method is employed to trace line groups iteratively by adjusting the threshold of the histogram peaks until the number of line groups is satisfied. Finally, vanishing points are calculated from the grouped line segments and optimized with iterative calculation.
Figure 2a is an example of an input image. Two groups of dots are marked as rectangles and triangles, respectively. Each set of four points forms a line, and the lines pass through two intersection points marked as dots. Figure 2b illustrates the result after the first Hough transformation, in which candidate peaks are marked as squares and triangles forming a line. An example of the voting scheme is shown in Figure 2c, where the number in each accumulator represents how many lines pass through it. The high peaks extracted from Figure 2b are transformed into lines in the second Hough transform, as demonstrated in Figure 2d. The intersections of those lines are marked as points representing their groups, as rectangles and triangles, respectively.
If necessary, a third Hough transform can be applied to the peaks of the second one to detect collinear vanishing points. These kinds of features can be used to construct vanishing lines.
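The following sketch illustrates the grouping idea behind the second transform in a simplified form: instead of an explicit second $\rho$-$\vartheta$ transform, it votes the pairwise intersections of the detected lines into a coarse grid, so bins with many votes indicate vanishing point candidates. All names and thresholds here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from itertools import combinations

def group_lines_by_vanishing_point(peaks, bin_px=20.0, min_votes=3):
    """Cluster (rho, theta) line peaks by their common intersection point.
    Lines through a vanishing point (x_v, y_v) satisfy
    rho = x_v*cos(theta) + y_v*sin(theta), so their pairwise intersections
    pile up in one grid bin."""
    votes = {}
    for (r1, t1), (r2, t2) in combinations(peaks, 2):
        a = np.array([[np.cos(t1), np.sin(t1)],
                      [np.cos(t2), np.sin(t2)]])
        if abs(np.linalg.det(a)) < 1e-9:      # near-parallel pair: skip
            continue
        x, y = np.linalg.solve(a, np.array([r1, r2]))
        key = (round(x / bin_px), round(y / bin_px))
        votes.setdefault(key, set()).update({(r1, t1), (r2, t2)})
    return {(kx * bin_px, ky * bin_px): lines
            for (kx, ky), lines in votes.items() if len(lines) >= min_votes}
```

Each surviving bin returns an approximate vanishing point location together with the line group that supports it.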

4. Camera Calibration Using Vanishing Points

Vanishing point-based calibration is considered one of the most practical calibration methods. The collinearity condition equations relate the geometric positions of the perspective center, an image point, and its corresponding object point as follows:

$$x_p = f\,\frac{M_{00}(X_p - X_c) + M_{01}(Y_p - Y_c) + M_{02}(Z_p - Z_c)}{M_{20}(X_p - X_c) + M_{21}(Y_p - Y_c) + M_{22}(Z_p - Z_c)} + x_0,$$

$$y_p = f\,\frac{M_{10}(X_p - X_c) + M_{11}(Y_p - Y_c) + M_{12}(Z_p - Z_c)}{M_{20}(X_p - X_c) + M_{21}(Y_p - Y_c) + M_{22}(Z_p - Z_c)} + y_0,$$

where $x_p$ and $y_p$ are the image coordinates of a point; $X_p$, $Y_p$, and $Z_p$ are the object space coordinates; $X_c$, $Y_c$, and $Z_c$ are the coordinates of the perspective center; $f$ is the principal distance; $x_0$ and $y_0$ are the coordinates of the principal point; and $M_{00}$–$M_{22}$ are the elements of the $3 \times 3$ rotation matrix $M$, composed of three rotation angles $\omega$ (pan), $\phi$ (tilt), and $\kappa$ (swing) [34]:

$$M = \begin{bmatrix} \cos\omega\cos\kappa + \sin\omega\sin\phi\sin\kappa & \sin\omega\cos\kappa - \cos\omega\sin\phi\sin\kappa & \cos\phi\sin\kappa \\ \cos\omega\sin\kappa - \sin\omega\sin\phi\cos\kappa & \sin\omega\sin\kappa + \cos\omega\sin\phi\cos\kappa & -\cos\phi\cos\kappa \\ -\sin\omega\cos\phi & \cos\omega\cos\phi & \sin\phi \end{bmatrix}.$$
The camera and object coordinate systems are illustrated with geometric relations between vanishing points, the image plane, and the center of the camera in Figure 3.
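For reference, a small sketch that builds $M$ from the three angles, assuming the sign conventions of Equation (3) (the helper name is ours):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix M of Equation (3) from pan (omega), tilt (phi),
    and swing (kappa), all in radians."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [co * ck + so * sp * sk, so * ck - co * sp * sk,  cp * sk],
        [co * sk - so * sp * ck, so * sk + co * sp * ck, -cp * ck],
        [-so * cp,               co * cp,                 sp],
    ])
```

A quick sanity check is that `M @ M.T` returns the identity matrix for any angle triple.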
Under the assumption of well-calibrated lens distortion of the perspective system, $f$, $x_0$, and $y_0$ and the three rotation angles $\omega$, $\phi$, and $\kappa$ can be estimated using the collinearity condition equations and three mutually orthogonal vanishing points. $V_x$, $V_y$, and $V_z$ are the three vanishing points intersected from the parallel lines along the X, Y, and Z axes of the object space, respectively [35]. The vanishing points can therefore be assumed to lie at infinity in object space. For example, letting $X_{V_x} \to \infty$, the vanishing point $V_x(x_{V_x}, y_{V_x})$ in Equation (2) can be rewritten as:
$$x_{V_x} = f\,\frac{M_{00}(X_{V_x} - X_c) + M_{01}(Y_{V_x} - Y_c) + M_{02}(Z_{V_x} - Z_c)}{M_{20}(X_{V_x} - X_c) + M_{21}(Y_{V_x} - Y_c) + M_{22}(Z_{V_x} - Z_c)} + x_0 = f\,\frac{M_{00}\left(\tfrac{X_{V_x}}{X_{V_x}} - \tfrac{X_c}{X_{V_x}}\right) + M_{01}\left(\tfrac{Y_{V_x}}{X_{V_x}} - \tfrac{Y_c}{X_{V_x}}\right) + M_{02}\left(\tfrac{Z_{V_x}}{X_{V_x}} - \tfrac{Z_c}{X_{V_x}}\right)}{M_{20}\left(\tfrac{X_{V_x}}{X_{V_x}} - \tfrac{X_c}{X_{V_x}}\right) + M_{21}\left(\tfrac{Y_{V_x}}{X_{V_x}} - \tfrac{Y_c}{X_{V_x}}\right) + M_{22}\left(\tfrac{Z_{V_x}}{X_{V_x}} - \tfrac{Z_c}{X_{V_x}}\right)} + x_0 = f\,\frac{M_{00}}{M_{20}} + x_0 = -f\,\frac{\cos\omega\cos\kappa + \sin\omega\sin\phi\sin\kappa}{\sin\omega\cos\phi} + x_0,$$

$$y_{V_x} = f\,\frac{M_{10}}{M_{20}} + y_0 = -f\,\frac{\cos\omega\sin\kappa - \sin\omega\sin\phi\cos\kappa}{\sin\omega\cos\phi} + y_0.$$
Similarly, assuming $Y_{V_y} \to \infty$ and $Z_{V_z} \to \infty$, $V_y(x_{V_y}, y_{V_y})$ and $V_z(x_{V_z}, y_{V_z})$ can be derived as

$$x_{V_y} = f\,\frac{M_{01}}{M_{21}} + x_0 = f\,\frac{\sin\omega\cos\kappa - \cos\omega\sin\phi\sin\kappa}{\cos\omega\cos\phi} + x_0, \qquad y_{V_y} = f\,\frac{M_{11}}{M_{21}} + y_0 = f\,\frac{\sin\omega\sin\kappa + \cos\omega\sin\phi\cos\kappa}{\cos\omega\cos\phi} + y_0,$$

$$x_{V_z} = f\,\frac{M_{02}}{M_{22}} + x_0 = f\,\frac{\cos\phi\sin\kappa}{\sin\phi} + x_0, \qquad y_{V_z} = f\,\frac{M_{12}}{M_{22}} + y_0 = -f\,\frac{\cos\phi\cos\kappa}{\sin\phi} + y_0.$$
From Equations (4)–(7), the three vanishing points are related only to $f$, $x_0$, and $y_0$ and the three rotation angles $\omega$, $\phi$, and $\kappa$. These six unknowns can be solved using $V_x(x_{V_x}, y_{V_x})$, $V_y(x_{V_y}, y_{V_y})$, and $V_z(x_{V_z}, y_{V_z})$.
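Equations (4)–(7) share one pattern: the vanishing point of the $i$-th world axis is the projection of the $i$-th column of $M$. A minimal sketch (our own helper, consuming the matrix from the previous sketch):

```python
import numpy as np

def vanishing_points(M, f, x0, y0):
    """Vanishing points of the three world axes per Equations (4)-(7):
    V_i = (f*M[0,i]/M[2,i] + x0, f*M[1,i]/M[2,i] + y0) for i = X, Y, Z."""
    M = np.asarray(M, dtype=float)
    return [(f * M[0, i] / M[2, i] + x0,
             f * M[1, i] / M[2, i] + y0) for i in range(3)]
```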

4.1. Camera Orientation Calibration

A pair of vanishing points can be used to define a vector (vanishing line), and thus, three vectors can be found from the combination of three vanishing points:
$$L_{V_xV_y} = (x_{V_y} - x_{V_x},\ y_{V_y} - y_{V_x}),\quad L_{V_yV_z} = (x_{V_z} - x_{V_y},\ y_{V_z} - y_{V_y}),\quad L_{V_zV_x} = (x_{V_x} - x_{V_z},\ y_{V_x} - y_{V_z}).$$
The slope (m) of these three vanishing lines can be calculated as
$$m_{V_xV_y} = \frac{y_{V_y} - y_{V_x}}{x_{V_y} - x_{V_x}} = \frac{\sin\kappa}{\cos\kappa},$$

$$m_{V_yV_z} = \frac{y_{V_z} - y_{V_y}}{x_{V_z} - x_{V_y}} = -\frac{\cos\omega\cos\kappa + \sin\omega\sin\phi\sin\kappa}{\cos\omega\sin\kappa - \sin\omega\sin\phi\cos\kappa},$$

$$m_{V_zV_x} = \frac{y_{V_x} - y_{V_z}}{x_{V_x} - x_{V_z}} = \frac{\cos\omega\sin\phi\sin\kappa - \sin\omega\cos\kappa}{\sin\omega\sin\kappa + \cos\omega\sin\phi\cos\kappa}.$$
Hence, $\kappa$ can be determined from $m_{V_xV_y}$ as shown in Equation (9). Angles $\omega$ and $\phi$ are then estimated from the product and quotient of Equations (10) and (11), rewritten as Equations (12) and (13), respectively:
$$\sin^2\phi = \frac{(m_{V_yV_z}\sin\kappa + \cos\kappa)(m_{V_zV_x}\sin\kappa + \cos\kappa)}{(m_{V_yV_z}\cos\kappa - \sin\kappa)(\sin\kappa - m_{V_zV_x}\cos\kappa)},$$

$$\tan^2\omega = \frac{(m_{V_yV_z}\sin\kappa + \cos\kappa)(\sin\kappa - m_{V_zV_x}\cos\kappa)}{(m_{V_yV_z}\cos\kappa - \sin\kappa)(m_{V_zV_x}\sin\kappa + \cos\kappa)}.$$
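Under the sign conventions above, the three angles follow mechanically from the slopes. The following hedged sketch (names ours) mirrors Equations (9), (12), and (13), leaving quadrant disambiguation to the caller:

```python
import numpy as np

def angles_from_slopes(m_xy, m_yz, m_zx):
    """Rotation angles from vanishing line slopes (Equations (9), (12), (13))."""
    kappa = np.arctan(m_xy)                             # Equation (9)
    sk, ck = np.sin(kappa), np.cos(kappa)
    sin2_phi = ((m_yz * sk + ck) * (m_zx * sk + ck)) / (
               (m_yz * ck - sk) * (sk - m_zx * ck))     # Equation (12)
    tan2_omega = ((m_yz * sk + ck) * (sk - m_zx * ck)) / (
                 (m_yz * ck - sk) * (m_zx * sk + ck))   # Equation (13)
    phi = np.arcsin(np.sqrt(sin2_phi))
    omega = np.arctan(np.sqrt(tan2_omega))
    return omega, phi, kappa
```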

4.2. Camera IOP Calibration

Each vanishing point together with the principal point can also be used to define a vector; therefore, three vectors can be found from the three vanishing points:

$$L_{OV_y} = (x_{V_y} - x_O,\ y_{V_y} - y_O),\quad L_{OV_z} = (x_{V_z} - x_O,\ y_{V_z} - y_O),\quad L_{OV_x} = (x_{V_x} - x_O,\ y_{V_x} - y_O).$$
The orthocenter of the triangle formed by the three vanishing points of the three mutually orthogonal directions identifies the principal point, through the inner products of the sides of the triangle and its altitudes. For instance, the inner product of $L_{V_yV_z}$ and $L_{OV_x}$ is equal to zero due to perpendicularity, as is the inner product of $L_{V_zV_x}$ and $L_{OV_y}$. Hence, the principal point can be solved by expanding these two simultaneous equations.
The focal length, f, can be computed afterwards as the square root of the product of the distances from the principal point to any of the triangle’s vertices and the opposite side:
$$\mathrm{Area} = \frac{1}{2}\begin{vmatrix} x_{V_x} & y_{V_x} & 1 \\ x_{V_y} & y_{V_y} & 1 \\ x_{V_z} & y_{V_z} & 1 \end{vmatrix} = \frac{1}{2}\left(\frac{f^2}{\sin\omega\cos\omega\sin\phi\cos^2\phi}\right),$$

$$f = \sqrt{2\,\mathrm{Area}\cdot\left(\sin\omega\cos\omega\sin\phi\cos^2\phi\right)}.$$
According to the derivation above, the standard procedure of three-vanishing-point camera calibration starts with the rotation angle estimation (Equations (9)–(11)), followed by the principal point calculation, and finally the focal length from Equation (16). The six unknowns can thus be solved with a unique solution.
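The orthocenter construction and the focal length recovery can be written compactly. The sketch below (function name and interface are ours) uses the known identity $f^2 = -(V_x - O)\cdot(V_y - O)$ for orthogonal vanishing points, which is equivalent to evaluating the triangle area of Equation (15):

```python
import numpy as np

def principal_point_and_f(vx, vy, vz):
    """Principal point as the orthocenter of the vanishing point triangle,
    then the focal length from f^2 = -(V_x - O).(V_y - O)."""
    vx, vy, vz = (np.asarray(v, dtype=float) for v in (vx, vy, vz))
    # Altitude conditions: (V_y - V_z).(O - V_x) = 0 and (V_z - V_x).(O - V_y) = 0.
    A = np.array([vy - vz, vz - vx])
    b = np.array([np.dot(vy - vz, vx), np.dot(vz - vx, vy)])
    o = np.linalg.solve(A, b)
    f = np.sqrt(max(0.0, -float(np.dot(vx - o, vy - o))))
    return o, f
```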
Lens distortion calibration can also be achieved using vanishing points, but only if the image is un-cropped and was captured by a pinhole camera. The most commonly encountered lens distortion is radial distortion, including barrel and pincushion distortion. The standard model is formulated as:
$$x = x_d + k_1(x_d - x_o)\,r_d^2 + k_2(x_d - x_o)\,r_d^4,$$

$$y = y_d + k_1(y_d - y_o)\,r_d^2 + k_2(y_d - y_o)\,r_d^4,$$

$$r_d = \sqrt{(x_d - x_o)^2 + (y_d - y_o)^2},$$

where $x_d$ and $y_d$ are the corresponding image coordinates with distortion; $k_1$ and $k_2$ are the coefficients of radial distortion; and $r_d$ is the distorted radius.
To find the distortion parameters $k_1$ and $k_2$, this study follows the fundamental property of the perspective camera model. Vanishing points provide a useful constraint for estimating the radial distortion parameters using a line-fitting adjustment of the image points observed for the corresponding vanishing point. The observed image lines are constrained to converge to their corresponding vanishing point $V(x_V, y_V)$ according to the following equation:

$$(x - x_V)\cos\theta + (y - y_V)\sin\theta = 0.$$

Substituting the symmetric radial distortion model with parameters $k_1$ and $k_2$ from Equation (17) into Equation (18) gives:

$$\left(x_d + (x_d - x_o)\left(k_1 r_d^2 + k_2 r_d^4\right) - x_V\right)\cos\theta + \left(y_d + (y_d - y_o)\left(k_1 r_d^2 + k_2 r_d^4\right) - y_V\right)\sin\theta = 0.$$
Once $x_o$ and $y_o$ are obtained, the best-fit line parameters $k_1$ and $k_2$ can be estimated using a least median of squares (LMedS) procedure.
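A minimal sketch of applying the radial model of Equation (17) to measured points (our helper, not the paper's code); an LMedS estimator would wrap this, sampling $(k_1, k_2)$ candidates and keeping the pair whose corrected lines have the smallest median residual against Equation (18):

```python
import numpy as np

def undistort_points(pts_d, k1, k2, x_o, y_o):
    """Apply Equation (17): x = x_d + k1*(x_d - x_o)*r_d^2 + k2*(x_d - x_o)*r_d^4,
    and likewise for y, to an (N, 2) array of distorted points."""
    pts_d = np.asarray(pts_d, dtype=float)
    d = pts_d - np.array([x_o, y_o])            # offsets from the distortion center
    r2 = np.sum(d * d, axis=1, keepdims=True)   # r_d^2 per point
    return pts_d + d * (k1 * r2 + k2 * r2 ** 2)
```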

5. Vanishing Point Refinement

Vanishing points are imaginary points located at an infinite distance from the projection center; therefore, their exact locations cannot be measured directly. It is difficult to extract vanishing points without random or systematic errors, especially for images with weak perspective geometry (e.g., long focal length). Consequently, increasing the reliability of the vanishing point positions is an important task for vanishing point-based camera calibration. The proposed vanishing point refinement process minimizes both random and systematic errors based on constraints derived from common geometric properties of man-made structures. For instance, feature points pertaining to the same (flat) roof, building base, or floor should have the same height or planar coordinates; however, biases may occur because of computational errors. Systematic error in the consistency of the perspective projection thus provides an indication for fine-tuning the positions of the vanishing points.

5.1. Feature Point Selection and Base Point Estimation

To estimate the perspective projection consistency using vanishing points, it is necessary to find sufficient feature points and their corresponding base points on the reference plane. The most common features of artificial structures are corner points at intersected edges, planes, or boundaries. Thus, detecting feature points from the extracted long edges is more reliable than from the raw image. Short segments and small closed polygons among the detected segments can be ignored, because most of them are windows, patterns, or minor structures. The candidate feature points are then detected using the Harris corner detector [36]. Some geometric constraints can be used to filter the feature point candidates. The first task is to define a reference plane with a reference origin and the vanishing points along the X and Y axes, where the origin $O$ is normally formulated as

$$O = \frac{V_x + V_y + V_z}{3}.$$
The reference origin is defined as the intersection point of the bottom edges along the X and Y axes of the main structure. Candidate feature points below the reference plane or collinear to others can also be removed. However, the proposed procedure requires user interaction to make the final selection. The next task is to estimate the base points. A base point is the vertical projection of a feature point onto the reference plane; it is a necessary element for estimating the consistency of the perspective projection constructed from the vanishing points. However, most base points are hidden in the image because of self-occlusion, and only a few of them can potentially be extracted directly from the raw image. The proposed process estimates the corresponding base points from the characteristics of vanishing point constraints, and is an automatic and robust solution. Figure 4 illustrates the procedure for predicting base points. The extracted feature points are marked as round blue dots in Figure 4a. Following the assumption of collinearity of the feature point (a) and the base point (b), the search area can be one-dimensional along $\overline{aV_z}$. The task is thus simplified to finding the horizontal location of the base point on $\overline{aV_z}$.
The estimation process can be generalized into three steps. First, all feature points are projected onto the Y-Z plane according to $V_x$ and $V_z$. The projected feature points are marked as red triangles, as illustrated in Figure 4b. Points with the same height level and the same Y coordinate should overlap at the same projection point. Similarly, feature points can also be projected onto the X-Z plane using $V_y$ and $V_z$. Secondly, the red triangles are further projected along $V_z$ onto the Y or X axis, marked as green squares in Figure 4c, to locate the Y coordinate of each feature point. Finally, candidate positions of the base points lie on the lines linked from the green squares to $V_x$ (or $V_y$), as shown in Figure 4d, defining the horizontal locations of the base points. The intersections of the lines in Figure 4d with $\overline{aV_z}$ are the estimated locations of the base points (red circles in Figure 4e) corresponding to the feature points according to the path record. In Figure 4f, the green lines represent the target heights between the feature points and their corresponding base points, which are determined in the following procedure.
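Each projection step in Figure 4 is a line-line intersection in the image, so the whole construction reduces to a chain of intersections. A hedged sketch under that reading (all names are ours; `axis_point` stands for the green square recorded for the feature):

```python
import numpy as np

def intersect(p1, q1, p2, q2):
    """Intersection of line p1-q1 with line p2-q2 using homogeneous coordinates."""
    h = lambda p: np.array([p[0], p[1], 1.0])
    x = np.cross(np.cross(h(p1), h(q1)), np.cross(h(p2), h(q2)))
    return x[:2] / x[2]

def base_point(feature, axis_point, v_x, v_z):
    """Base point estimate: where the vertical line feature->V_z meets the
    line from the recorded axis projection (green square) toward V_x."""
    return intersect(feature, v_z, axis_point, v_x)
```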

5.2. Vanishing Point Fine-Tuning

The error of each set of grouped projection points during the base point estimation process can be minimized by fine-tuning the positions of the vanishing points. Vanishing point localization errors cause displacements during the projection process; therefore, the calibration results normally include systematic errors. Feature points with the same height and Y coordinate should be perfectly projected onto the same projection point, as shown in Figure 5a, and points with the same Y coordinates should also overlap at the same position, as marked in Figure 5b.
The divergences in the first and second projection steps provide useful information for vanishing point refinement. The more precisely the vanishing point positions are estimated, the fewer the divergences that may occur during the projection process.
To decide the fine-tuning values and order, a moving pixel pyramid and a half-and-half adjustment strategy were developed for the proposed algorithm. The objective is to minimize the standard deviation of each cluster of projection points,

$$\min_{i,j}\ \sigma\!\left(\dot{a} - \bar{\dot{a}}\right)_k,$$
where $i, j$ are the fine-tuning pixels in the image space for each vanishing point, and $\dot{a}$ is the projected feature point in each step that belongs to group $k$ with mean value $\bar{\dot{a}}$. Every fine-tuned pixel updates the standard deviation of each group. However, the traditional moving pixel approach takes $O(N^2)$ iterations for each vanishing point, where $N$ is the number of moving pixels along the horizontal and vertical axes. The proposed moving pixel pyramid is a coarse-to-fine approach, fine-tuning the vanishing points from large pixel spans down to the sub-pixel level with $O(1)$ computational complexity. The fine-tuning begins with larger pixel spans to locate a coarse area with the lowest standard deviation. The span is then reduced to zoom in to a smaller area, until the iteration ends with sub-pixel level fine-tuning. Figure 6 demonstrates an example of the pyramid fine-tuning approach, in which the initial vanishing point is in the center and the search boundary extends from −50 to +50 pixels in both directions, from top-left [−50, −50] to bottom-right [50, 50]. The first fine-tuning span is 20 pixels. After estimating and updating the vanishing point position 25 times, the process zooms in to the next level with a span of four pixels. The same procedure continues, and the vanishing point can be fine-tuned to an ideal position after $5^2 + 5^2 + 2^2 + 2^2 = 58$ iterations instead of the $100^2 = 10{,}000$ iterations of the traditional pixel-by-pixel search.
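A compact sketch of the moving pixel pyramid (our interface; `cost` would evaluate the summed group standard deviations of Equation (21) for a candidate vanishing point position):

```python
import numpy as np

def refine_vp(vp, cost, grid=5, spans=(20.0, 4.0, 0.8)):
    """Coarse-to-fine search: evaluate `cost` on a grid x grid neighborhood
    around the current estimate, keep the best cell, then shrink the span."""
    best = np.asarray(vp, dtype=float)
    for span in spans:
        offsets = (np.arange(grid) - grid // 2) * span
        candidates = [best + np.array([dx, dy])
                      for dx in offsets for dy in offsets]
        best = min(candidates, key=lambda c: cost(c[0], c[1]))
    return best
```

With `grid = 5` and span levels of 20 and 4 pixels, this mirrors the $5^2$-per-level iteration counts described above.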
The fine-tuning process is based on statistical estimation, and it is difficult to determine which vanishing point's localization displacement contributes most to the overall error. In the case of weak perspective geometry, one of the vanishing points may contain a larger error than the others, and the fine-tuning process may then reduce the error by moving a vanishing point that should not be adjusted. The proposed half-and-half adjustment strategy reduces the error caused by a large vanishing point displacement: it first adjusts each vanishing point by half the distance from its original position to its optimized position, determines which adjustment provides the greatest contribution, and then fully adjusts that specific vanishing point towards its optimized position.
Geometrically, modifying $V_x$ and $V_y$ along the vertical direction of the image space changes the slope of the vanishing line. Varying the slope of the vanishing line means adjusting the reference plane against which all feature points perpendicular to it are optimized. An incorrectly estimated vanishing line slope causes a tapering effect on line segments that should have the same length, shrinking them toward one side of the vanishing point and enlarging them on the other. Horizontal changes of $V_x$ and $V_y$ resize the area formed by the three vanishing points, which corresponds to the focal length calibration of Equation (16).

6. Results and Discussion

A computer-simulated model test case (Figure 7a) was used to demonstrate the developed vanishing point estimation and refinement process step by step. The simulation image was generated using Trimble SketchUp software: after creating the 3D model, it was rendered to an image with a perspective projection camera module. Strong edge pixels were extracted from the raw image and converted into the first Hough space for line detection. The high peaks in the first Hough transform were extracted with local maximum suppression, removing duplicate candidates caused by over-segmentation. The extracted peaks represent the line equations in the image space. To clarify which lines belong to which vanishing point, the high peaks are further transformed into the second Hough space. The transformed peaks intersect in the same bin if they belong to the same vanishing point in the image space. The classified result (Figure 7c) consists of three groups of detected line equations marked with different colors.
The initial vanishing points are then used with the selected feature points to search for the corresponding base points. A step-by-step base point estimation process is illustrated in Figure 8. All selected blue feature points are first projected onto the Y-Z plane from $V_x$ and marked as red triangles (Figure 8a). Those red triangles are further projected in the direction of $V_z$ onto the Y axis and marked as green triangles (Figure 8b). Finally, the intersections of $\overline{aV_z}$ with the lines linked from the green triangles to $V_x$ indicate the base points (Figure 8c).
Figure 9 displays three enlarged parts of the projection process from Figure 8d. Several projection lines passing through $V_z$ do not overlap well; there is a systematic misalignment due to the vanishing point displacements.
Figure 10 shows the estimated base points located on the reference plane. Several points should have been identically overlapped at the same coordinates; however, errors in the initial vanishing point estimation caused projection errors in each projection step.
The proposed fine-tuning algorithm was applied to reduce the divergences. Figure 11 compares the conventional moving pixel-based approach (Figure 11a) and the proposed coarse-to-fine approach (Figure 11b,c). This test case used a 10 × 10 coarse-to-fine moving pixel pyramid, with sampling spans of 25, 10, 3, and 0.5 pixels for the four levels, respectively.
The fine-tuning is based on the statistics of local (red triangle) or global (green triangle) divergences. Reducing the standard deviation of the red triangle groups decreases the local error of each point group. Minimizing the standard deviation of the red triangles, however, may increase the global error. For instance, two red triangle groups at different heights should belong to the same green triangle group; fine-tuning the vanishing points may then introduce divergences when projecting these two red triangle groups onto the Y axis.
To validate the robustness of the proposed refinement process, the IOPs and EOPs were calculated with several additional offset errors manually added to the initial vanishing point $V_x$ (Table 1). The listed results show that the EOP differences are less than 0.1° and the IOP differences are less than 3 pixels, a significant decrease in errors and a consistent improvement in 3D point measurement accuracy. Figure 12, Figure 13 and Figure 14 display the robustness of the refinement process, demonstrating that the proposed refinement is capable of reducing the uncertainty of the initial vanishing point estimation.
Figure 15 shows a frame extracted from a video sequence with a dimension of 704 × 480 pixels in JPEG format. The estimated radial distortion coefficients $k_1$ and $k_2$ are $2.267 \times 10^{-7}$ and $1.273 \times 10^{-12}$, respectively, calibrated using straight line segments. Figure 15b shows the extracted lines with the feature points (a to g) of the targeted building. This case assumed that the back side of the buildings has the same X-Y coordinates as the targeted structure on the left (features a, b, and c).
Classified line segments are then used to estimate the initial locations of the vanishing points. Because the tilt angle is low in this case, the vanishing point in the vertical (Z) direction is far from the image center (Figure 16). Table 2 lists the calibrated camera parameters with and without the proposed vanishing point refinement process, using both the raw image and a lens distortion-calibrated image.
The reconstructed model was compared with field-surveyed data for a quantitative accuracy analysis (Table 3), which also evaluated the improvement in building height estimation accuracy after the refinement of the vanishing points. Using the measured distance (17.4 m) between feature point d and its corresponding base point as the reference, the maximum error, at feature point f, is about 3%. After the vanishing point refinement, not only were feature points of the same level correctly assigned identical heights, but the overall RMSE also decreased to less than 0.7%. Validations of the 3D point measurements (Table 4) were compared with field-surveyed data from tape and laser measurements.

7. Conclusions

This paper presented a novel camera calibration approach based on vanishing point geometry. The proposed algorithms obtain reliable camera parameters without prior information and are particularly useful for on-site camera calibration. They also deal with the uncertainty of the vanishing point calculation, which may significantly affect the estimation of the camera IOPs and EOPs. The main contribution of this study is the proposed vanishing point refinement strategy, which can significantly reduce the systematic and random errors stemming from vanishing point localization. The fine-tuning process minimizes the projection error of each feature point after a few iterations using the half-and-half adjustment. A coarse-to-fine fine-tuning approach is also proposed to improve the processing efficiency from O(N²) to O(1). To extract and group line segments for the initial vanishing point estimation, this study improved the cascade Hough transform with adaptive thresholds, so that extracted line segments are more robustly classified to their corresponding vanishing points.
The experimental results also demonstrate the robustness of the proposed refinement approach under large initial vanishing point estimation errors, improving the 3D point reconstruction accuracy by about 30% and keeping the estimated camera parameters consistent under additional vanishing point localization errors. A video frame case evaluated the improvement of the proposed vanishing point refinement process: the height measurement error was reduced from 2.04% to 0.64%. The proposed algorithms can be applied to in situ camera calibration, single view metrology, and simultaneous localization and mapping (SLAM) applications in man-made environments. Future improvements will focus on integration into a SLAM system as a real-time camera pose tracking component. The developed calibration strategy also has great potential for implementation with panoramic and omnidirectional cameras.

Acknowledgments

This study was supported, in part, by the Ministry of Interior of Taiwan (ROC) under project numbers SYC1050122 and SYC1060303.

Author Contributions

F. Tsai and H. Chang conceived and designed the experiments; H. Chang performed the experiments and analyzed the data; H. Chang and F. Tsai wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Zhang, Z. Camera calibration. In Computer Vision; Springer: Berlin, Germany, 2014; pp. 76–77. [Google Scholar]
  2. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  3. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  4. Sturm, P.F.; Maybank, S.J. On plane-based camera calibration: A general algorithm, singularities, applications. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999; Volume 1. [Google Scholar]
  5. Caprile, B.; Torre, V. Using vanishing points for camera calibration. Int. J. Comput. Vis. 1990, 4, 127–139. [Google Scholar] [CrossRef]
  6. Gracie, G. Analytical photogrammetry applied to single terrestrial photograph mensuration. In Proceedings of the XIth International Congress of Photogrammetry, Lausanne, Switzerland, 8–20 July 1968. [Google Scholar]
  7. Chang, H.; Tsai, F. Reconstructing Three-Dimensional Specific Curve Building Models from a Single Perspective View Image. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 101–106. [Google Scholar] [CrossRef]
  8. Barnard, S.T. Interpreting perspective images. Artif. Intell. 1983, 21, 435–462. [Google Scholar] [CrossRef]
  9. Shufelt, J.A. Performance evaluation and analysis of vanishing point detection techniques. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 282–288. [Google Scholar] [CrossRef]
  10. Brauer-Burchardt, C.; Voss, K. Robust vanishing point determination in noisy images. In Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, 3–7 September 2000; Volume 1, pp. 559–562. [Google Scholar]
  11. Kalantari, M.; Jung, F.; Guedon, J. Precise, automatic and fast method for vanishing point detection. Photogramm. Record 2009, 24, 246–263. [Google Scholar] [CrossRef]
  12. Gonzalez-Aguilera, D.; Gomez-Lahoz, J. From 2D to 3D through modelling based on a single image. Photogramm. Record 2008, 23, 208–227. [Google Scholar] [CrossRef]
  13. Bazin, J.C.; Pollefeys, M. 3-line RANSAC for orthogonal vanishing point detection. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–11 October 2012; pp. 4282–4287. [Google Scholar]
  14. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  15. Lutton, E.; Maitre, H.; Lopez-Krahe, J. Contribution to the determination of vanishing points using Hough transform. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 430–438. [Google Scholar] [CrossRef]
  16. Gamba, P.; Mecocci, A.; Salvatore, U. Vanishing point detection by a voting scheme. In Proceedings of the International Conference on Image, Lausanne, Switzerland, 19 September 1996; Volume 1, pp. 301–304. [Google Scholar]
  17. Tuytelaars, T.; Proesmans, M.; Van Gool, L. The cascaded Hough transform as support for grouping and finding vanishing points and lines. In International Workshop on Algebraic Frames for the Perception-Action Cycle; Springer: Berlin, Germany, 1997; pp. 278–289. [Google Scholar]
  18. Tuytelaars, T.; Van Gool, L.; Proesmans, M.; Moons, T. The cascaded Hough transform as an aid in aerial image interpretation. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 7 January 1998; pp. 67–72. [Google Scholar]
  19. Tsai, F.; Chang, H. Detection of Vanishing Points Using Hough Transform for Single View 3D Reconstruction. In Proceedings of the 34th Asian Conference on Remote Sensing, Bali, Indonesia, 20–24 October 2013; Volume 2, pp. 1182–1189. [Google Scholar]
  20. De la Escalera, A.; Armingol, J.M. Automatic Chessboard Detection for Intrinsic and Extrinsic Camera Parameter Calibration. Sensors 2010, 10, 2027–2044. [Google Scholar] [CrossRef] [PubMed]
  21. Matessi, A.; Lombardi, L. Vanishing point detection in the Hough transform space. In European Conference on Parallel Processing; Springer: Berlin, Germany, 1999; pp. 987–994. [Google Scholar]
  22. Cantoni, V.; Lombardi, L.; Porta, M.; Sicard, N. Vanishing point detection: representation analysis and new approaches. In Proceedings of the 11th International Conference on Image Analysis and Processing, Palermo, Italy, 26–28 September 2001; pp. 90–94. [Google Scholar]
  23. Zhao, Y.X.; Tai, H.P.; Fang, S.J.; Chou, C.H. A new validity measure and fuzzy clustering algorithm for vanishing-point detection. In Proceedings of the International Conference on Automatic Control and Artificial Intelligence (ACAI 2012), Xiamen, China, 3–5 March 2012; pp. 195–198. [Google Scholar]
  24. McLean, G.; Kotturi, D. Vanishing point detection by line clustering. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 1090–1095. [Google Scholar] [CrossRef]
  25. Schaffalitzky, F.; Zisserman, A. Planar grouping for automatic detection of vanishing lines and points. Image Vis. Comput. 2000, 18, 647–658. [Google Scholar] [CrossRef]
  26. Almansa, A.; Desolneux, A.; Vamech, S. Vanishing point detection without any a priori information. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 502–507. [Google Scholar] [CrossRef]
  27. Desolneux, A.; Moisan, L.; Morel, J.M. Edge detection by Helmholtz principle. J. Math. Imaging Vis. 2001, 14, 271–284. [Google Scholar] [CrossRef]
  28. Košecká, J.; Zhang, W. Video compass. In European Conference on Computer Vision; Springer: Berlin, Germany, 2002; pp. 476–490. [Google Scholar]
  29. Wildenauer, H.; Vincze, M. Vanishing point detection in complex man-made worlds. In Proceedings of the 14th International Conference on Image Analysis and Processing, Modena, Italy, 10–14 September 2007; pp. 615–622. [Google Scholar]
  30. Tardif, J.P. Non-iterative approach for fast and accurate vanishing point detection. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 9 September–2 October 2009; pp. 1250–1257. [Google Scholar]
  31. Choi, J.; Kim, W.; Kong, H.; Kim, C. Real-time vanishing point detection using the Local Dominant Orientation Signature. In Proceedings of the 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), Antalya, Turkey, 16–18 May 2011; pp. 1–4. [Google Scholar]
  32. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  33. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
  34. Haralick, R.M. Using perspective transformations in scene analysis. Comput. Graph. Image Process. 1980, 13, 191–221. [Google Scholar] [CrossRef]
  35. Wang, L.L.; Tsai, W.H. Computing camera parameters using vanishing-line information from a rectangular parallelepiped. Mach. Vis. Appl. 1990, 3, 129–141. [Google Scholar] [CrossRef]
  36. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, September 1988; Volume 15, p. 50. [Google Scholar]
Figure 1. The procedure of initial vanishing point detection using double Hough transformation.
Figure 2. An example of a double Hough transform. (a) Raw image: two groups of dots are marked as rectangles and triangles; (b) first Hough transform finding the collinear points; (c) an example of the voting scheme: the number in each accumulator represents how many lines pass through it; (d) high peaks extracted from (b) are transformed into lines in the second Hough transform.
Figure 3. An illustration of the camera (exposure station), 3D point (object), and its photo image, which all lie on a straight line.
Figure 4. An example procedure of base point prediction. (a) Detected feature points (blue dots). (b) According to $V_x$ and $V_z$, feature points are projected onto the Y-Z plane; the projections of the feature points are marked as red triangles. (c) $V_z$ is used to project the red triangles onto the Y axis (green squares). (d) The green squares are recorded and linked to $V_x$. (e) The intersection points with the lines linked from the feature points to $V_z$ are the base points (red points) corresponding to their feature points. (f) Green lines represent the heights between the feature points and base points.
Figure 5. Demonstration of projection error during the base point searching process, (a) triangles are the first projection points on the Y–Z plane; (b) squares are the second projection points on the Y axis.
Figure 6. An example of the coarse-to-fine vanishing point fine-tuning process. The initial vanishing point is in the center. Every iteration reduces the span value and zooms in to a smaller area with the lowest standard deviation (Equation (21)). This case reduced the calculation from $O(N^2)$ (10,000 iterations) to $O(1)$ (58 iterations), significantly increasing the processing efficiency.
Figure 7. Line detection and grouping using the proposed cascade Hough transform. (a) Output image; (b) High peaks (squares) in the first Hough transform are grouped in the second Hough transform; (c) Three groups of lines are marked with green, yellow, and red, respectively.
Figure 8. Base point estimation. (a) Blue feature points projected onto the Y-Z plane are marked as red triangles; (b) red triangles projected onto the Y axis are marked as green triangles; (c) base points are estimated from the intersection of $\overline{aV_z}$ and the lines linked from the green triangles to $V_x$; (d) the solid blue lines are $\overline{ab}$ with unknown height h.
Figure 9. Base point estimation errors in each of the projection steps. The proposed vanishing point refinement process is designed to reduce the misalignment.
Figure 10. Base point estimation errors and fine-tuned results. Green squares on the left are the estimated base points. Several points should have been identically overlapped but are slightly dispersed (enlarged in the center). After vanishing point refinement, the divergences are reduced substantially (right figure).
Figure 11. A coarse-to-fine fine-tuning comparison. The vertical axis shows the accumulated error value; grid colors from warm to cold represent errors from high to low. (a) Traditional pixel-based fine-tuning, covering −50 to 50 pixels in row and column with $100^2$ calculations; (b) the proposed coarse-to-fine approach, covering −250 to 250 pixels in row and column with $4 \times 10^2$ calculations; (c) a two-dimensional (2D) view of each searching layer.
Figure 12. Camera angle estimation with and without the vanishing point refinement process with different manually added errors on V x .
Figure 13. Camera principal point and focal length estimation with and without the vanishing point refinement process, with different manually added errors on $V_x$.
Figure 14. Three-dimensional (3D) point estimation with and without the vanishing point refinement process with different manually added errors on V x .
Figure 15. A test case of a video frame cut of a real building; the resolution is 704 × 480 pixels. (a) A video frame calibrated using straight line segments; (b) extracted lines and feature points for vanishing point estimation and 3D modeling.
Figure 16. An illustration of the three orthogonal vanishing points detected from the video frame cut; vanishing points linked to the grouped line segments along the X, Y, and Z directions are marked in red, green, and blue, respectively.
Table 1. The performance of the proposed vanishing point refinement algorithm. Systematic error on $V_x$ was manually added to validate the reliability under different initial vanishing point conditions.

| Added Error (Pixels) |      | Omega (°) | Phi (°) | Kappa (°) | f (Pixels) | x0 (Pixels) | y0 (Pixels) | RMSE_xy (%) | RMSE_z (%) |
|---|---|---|---|---|---|---|---|---|---|
| 150  | B.R. | 52.72 | 22.56 | 1.56 | 2136 | 152.7  | 208.8  | 1.88 | 3.02 |
|      | A.R. | 52.23 | 22.55 | 0.37 | 2106 | 45.77  | 79     | 0.62 | 2.88 |
| 100  | B.R. | 52.35 | 22.27 | 0.97 | 2125 | 100.9  | 159.7  | 1.31 | 3.03 |
|      | A.R. | 52.18 | 22.56 | 0.35 | 2109 | 44.13  | 80.35  | 0.6  | 2.88 |
| 50   | B.R. | 51.95 | 21.98 | 0.37 | 2114 | 46.92  | 111.6  | 1.29 | 3.11 |
|      | A.R. | 52.18 | 22.56 | 0.35 | 2109 | 44.13  | 80.35  | 0.6  | 2.88 |
| 0    | B.R. | 51.51 | 21.7  | 0.24 | 2102 | −9.26  | 64.46  | 1.76 | 3.23 |
|      | A.R. | 52.18 | 22.56 | 0.35 | 2109 | 44.13  | 80.35  | 0.6  | 2.88 |
| −50  | B.R. | 51.04 | 21.43 | 0.86 | 2092 | −67.67 | 18.41  | 2.42 | 3.38 |
|      | A.R. | 52.18 | 22.56 | 0.35 | 2109 | 44.13  | 80.35  | 0.6  | 2.88 |
| −100 | B.R. | 50.54 | 21.16 | 1.5  | 2081 | −128.4 | −26.56 | 3.13 | 3.56 |
|      | A.R. | 52.17 | 22.56 | 0.35 | 2108 | 44.12  | 80.35  | 0.6  | 2.88 |
| −150 | B.R. | 49.99 | 20.89 | 2.15 | 2070 | −191.3 | −70.45 | 3.83 | 3.77 |
|      | A.R. | 52.23 | 22.54 | 0.37 | 2106 | 45.78  | 79     | 0.62 | 2.88 |

B.R.: before refinement; A.R.: after refinement.
Table 2. The comparison of interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) before and after the proposed vanishing point refinement processing of the raw image and the radial distortion calibrated image.

|             | Un-Calibrated |        | Calibrated |        |
|---|---|---|---|---|
|             | B.R.   | A.R.   | B.R.   | A.R.   |
| Omega (°)   | 45.21  | 46.34  | 47.90  | 46.75  |
| Phi (°)     | 12.11  | 12.36  | 12.39  | 12.58  |
| Kappa (°)   | 5.29   | 5.41   | 5.79   | 5.46   |
| f (pixels)  | 1008   | 1017   | 1031   | 1047   |
| x0 (pixels) | 2.99   | 9.12   | 43.02  | 16.32  |
| y0 (pixels) | −49.59 | −55.41 | −59.74 | −57.51 |

B.R.: before refinement; A.R.: after refinement.
Table 3. Accuracy assessment of the height measurement of a video frame cut (unit: m).

| Feature | Reference | Non-Refined | Residual | Error | Refined | Residual | Error |
|---|---|---|---|---|---|---|---|
| a    | 19.4 | 19.102 | 0.298  | 1.54% | 19.387 | 0.013  | 0.07% |
| b    | 19.4 | 19.173 | 0.227  | 1.17% | 19.387 | 0.013  | 0.07% |
| c    | 19.4 | 19.046 | 0.354  | 1.82% | 19.387 | 0.013  | 0.07% |
| d*   | 17.4 |        |        |       |        |        |       |
| e    | 17.4 | 17.173 | 0.227  | 1.30% | 17.243 | 0.157  | 0.90% |
| f    | 20.5 | 21.136 | −0.636 | 3.10% | 20.686 | −0.186 | 0.91% |
| g    | 20.5 | 21.022 | −0.522 | 2.55% | 20.686 | −0.186 | 0.91% |
| RMSE |      |        | 0.41   | 2.04% |        | 0.13   | 0.64% |

Feature d* is the reference height (17.4 m).
Table 4. The error analysis of 3D point measurement in the video frame cut test case (unit: m).

| Feature | Reference X | Y | Z | Proposed X | Y | Z | Residual X | Y | Z |
|---|---|---|---|---|---|---|---|---|---|
| a    | −3.5  | 15.87 | 19.4 | −3.42 | 15.4  | 19.39 | −0.08 | 0.47  | 0.01  |
| b    | −3.5  | 6.33  | 19.4 | −3.42 | 6.31  | 19.39 | −0.08 | 0.02  | 0.01  |
| c    | 0.5   | 6.33  | 19.4 | 0.52  | 6.31  | 19.39 | −0.02 | 0.02  | 0.01  |
| d*   | 0     | 0     | 17.4 |       |       |       |       |       |       |
| e    | 18.25 | 0     | 17.4 | 18.54 | 0     | 17.24 | −0.29 | 0     | 0.16  |
| f    | 26.97 | −1.5  | 20.5 | 27.46 | −1.47 | 20.69 | −0.49 | −0.03 | −0.19 |
| g    | 26.97 | −1.5  | 20.5 | 27.46 | −1.47 | 20.69 | −0.49 | −0.03 | −0.19 |
| RMSE |       |       |      |       |       |       | 0.21  | 0.19  | 0.13  |

Feature d* is the reference height (17.4 m).
