Article

Analysis of Fish-Eye Lens Camera Self-Calibration

1 Department of Civil & Environmental Engineering, Seoul National University, 599 Gwanak-ro 1, Gwanak-gu, Seoul 08826, Korea
2 Department of Civil and Environmental Engineering, Myongji University, 116 Myongji-ro, Cheoin-gu, Yongin, Gyeonggi-do 17058, Korea
* Author to whom correspondence should be addressed.
Sensors 2019, 19(5), 1218; https://doi.org/10.3390/s19051218
Submission received: 15 January 2019 / Revised: 28 February 2019 / Accepted: 7 March 2019 / Published: 10 March 2019
(This article belongs to the Section Remote Sensors)

Abstract

The fish-eye lens camera offers the advantage of efficient acquisition of image data through a wide field of view. However, unlike the popular perspective projection camera, a strong distortion effect appears as the periphery of the image is compressed. Such characteristics must be precisely analyzed through camera self-calibration. In this study, we carried out fish-eye lens camera self-calibration while considering different types of test objects and projection models. Self-calibration was performed using V-, A-, Plane-, and Room-type test objects. For the fish-eye lens camera, the V-type test object was the most advantageous for ensuring the accuracy of the principal point coordinates and focal length, because the correlations between parameters were relatively low. On the other hand, the other test objects were advantageous for ensuring the accuracy of the distortion parameters because of their well-distributed image points. Based on this analysis, we propose an accurate fish-eye lens camera self-calibration method that applies the V-type test object. The RMS-residuals of the proposed method were less than 1 pixel.

1. Introduction

The fish-eye lens camera has, relative to a conventional optical camera, a wider viewing angle for recording color (RGB) information over a wide area, which it achieves by compressively recording the outer part of the image relative to the center part. Owing to these advantages, the fish-eye lens camera has been used for indoor and outdoor 3D modeling, augmented reality, and Simultaneous Localization And Mapping (SLAM) in the fields of remote sensing, surveying, and robotics. Marković et al. [1] and Caruso et al. [2] used a fish-eye lens camera for SLAM and mobile robot implementation, and Sánchez et al. [3] used a fish-eye lens camera to improve the accuracy of urban navigation. Schöps et al. [4] used a fish-eye lens camera to produce a three-dimensional model of a large area. Gao et al. [5] and Yang et al. [6], meanwhile, developed driver-assistance sensor systems consisting of fish-eye lens cameras.
Although the fish-eye lens camera has a wide Field Of View (FOV), it has the disadvantage of strong distortion in the image. Therefore, self-calibration must be employed to correct the image distortion [7,8]. Self-calibration is the process of determining the Interior Orientation Parameters (IOPs) of the sensor. The IOPs include the principal point coordinates ($x_p$, $y_p$), the focal length ($f$), and the distortion parameters.
Fish-eye lens distortion has two components: the projection model and lens distortion. Distortion due to the projection model is intentional, introduced in the design and fabrication stages of the fish-eye lens camera, and produces a distortion effect that grows from the center of the image toward the outer edge. In addition, the fish-eye lens camera also exhibits lens distortion, just as conventional optical cameras do [9]. Lens distortion models account for the radial and decentering distortions that are not explained by the projection model, and they have been verified through a variety of studies, including Brown [10], Beyer [11], and Fraser [12].
There are four typical projection models for fish-eye lens cameras: equidistant, equisolid-angle, orthogonal, and stereographic projection [13,14,15,16]. In addition to these, various other projection models, such as the Fish Eye Transform (FET), the Polynomial Fish Eye Transform (PFET) [17,18], the FOV model [19], and the Division model [20,21], have been proposed.

1.1. Previous Studies

The fact that each projection model of the fish-eye lens camera can adequately explain the geometric distortion of the fish-eye image has been verified through various self-calibration studies, as follows. Li [22] and Chunyan et al. [23] performed self-calibration of a panoramic camera by applying the equidistant projection model and a lens distortion model. Hughes et al. [24] proposed a method to extract segment information from the image of an equidistant projection fish-eye lens camera in order to calculate the vanishing point and remove the distortion effect. Sahin [25] calibrated a mobile phone camera using an equidistant projection model. Schneider et al. [26] fabricated a multi-sensor system consisting of an equisolid-angle projection camera and a lidar and calibrated the involved sensors. Perfetti et al. [27] performed stereographic projection fish-eye lens camera calibration and produced three-dimensional models of cultural heritage buildings using the corrected images. Bakstein et al. [28] presented a new projection model combining the stereographic and equisolid-angle projection models and used it to perform a fish-eye lens camera calibration. Xiong et al. [29] used a PFET-lens distortion model to perform self-calibration of a fish-eye lens camera to be used for indoor mobile-robot positioning.
In addition to the verification of each projection model of the fish-eye lens, comparisons between the projection models and the possibility of substituting one projection model for another have also been explored, as follows. Schneider et al. [30] and Hughes et al. [9] performed self-calibration for various fish-eye lens cameras with different projection models and compared the accuracies. Schneider et al. [30] conducted self-calibration with the four representative projection models (equidistant, equisolid-angle, orthogonal, and stereographic projection) and found that when a model other than the actual projection model was used, distortion correction was still possible through adjustment of the lens distortion parameters, but the principal point coordinates and the focal length were not calculated accurately. Hughes et al. [9] compared self-calibration accuracy among the equidistant, equisolid-angle, orthogonal, stereographic, FET, PFET, FOV, and division projection models. Marcato Junior et al. [8] studied self-calibration when no information about the fish-eye lens camera projection model was provided in advance. The authors considered the perspective projection model and various other fish-eye lens projection models to find the appropriate projection model for their study. They also found that the combination of perspective projection and lens distortion models cannot accurately account for fish-eye lens camera distortion.
The previous studies have served to demonstrate how accurately each projection model can interpret the geometric distortion of a fish-eye lens camera. However, a number of them found a correlation between IOPs and Exterior Orientation Parameters (EOPs) [8,30,31], which has rarely been analyzed in detail.
The correlation between the camera’s orientation parameters is a very important indicator for calibration, because it has a very large impact on the self-calibration results. A high correlation between the orientation parameters significantly decreases the reliability of the calibration result, and vice versa [32,33].
Correlation between orientation parameters is also observed in the case of a normal lens camera (perspective projection camera). For the perspective projection camera, calibration methodologies for reducing the correlation between parameters have been proposed in various papers [32,33,34,35,36,37,38,39]. Conventional methods lower the correlation between orientation parameters by implementing a network geometry that is advantageous for self-calibration, specifically by adjusting the relative position and orientation between the calibration target and the camera. Representative conventional methods include: (1) using both landscape and portrait images and (2) photographing the target from oblique directions [32].
On the other hand, because the fish-eye lens camera differs from the conventional camera in its projection model equation, the way in which the correlation occurs may also differ, which can have a significant impact on the self-calibration accuracy of the fish-eye lens camera. Therefore, a careful analysis of the projection model of the fish-eye lens camera and of the effect of the correlation on its self-calibration is needed.

1.2. Purpose of Study

The objectives of this study are: (1) to analyze the effect of different image data acquisition methods (i.e., changes in test object type or camera shooting method) on the correlation between orientation parameters, and (2) to formulate a self-calibration method that can accurately derive the IOPs of the fish-eye lens camera.
Partial derivatives of the projection models were derived to determine the interaction effects between the orientation parameters. In this study, this was done for five projection models: the perspective projection and the four representative projection models of the fish-eye lens camera (equidistant, equisolid-angle, orthogonal, and stereographic projection). The derived partial derivatives were used to analyze why the correlations differ between projection models.
Self-calibration and correlation analysis of the camera orientation parameters were performed through simulation. One should note that the simulation was carried out using systematic and stochastic models that are well established and verified in the photogrammetric community. The experiments were conducted through simulation because it offers the following advantages. First, the locations of the camera and the test object can be set to exact values; in other words, simulation makes it easy to control all of the involved orientation parameters perfectly. By contrast, in self-calibration using real data, accurately setting the positions of the camera and the test object is limited, because each variable carries a setting error even in a sophisticated experiment. Next, simulation allows the accuracy of the results to be confirmed more clearly: since all the involved variables are set beforehand, the values estimated from self-calibration can be directly compared with the preset values. Such a comparison is almost impossible with real data. Lastly, simulation can handle different types of test objects, camera shooting positions, and looking angles without limitation. Ultimately, the results from simulation can reduce the time and economic costs of real experiments.
This paper describes the research contents in the following order. Section 2 describes the mathematical model and partial derivatives of the fish-eye lens camera. Section 3 and Section 4 explain the design of a self-calibration simulation and analyze the experimental results. Section 5 proposes a method for securing the accuracy of fish-eye lens camera self-calibration. Finally, Section 6 draws conclusions on the findings of this research and looks ahead to upcoming work.

2. Analysis of the Camera Mathematical Model

This section first introduces the mathematical models of five different camera types, addressing both the projection models and the lens distortion model. Afterwards, the partial derivatives of the cameras’ projection models are derived and analyzed.

2.1. Mathematical Model of Camera

The mathematical model of a camera is divided into a projection model equation and a lens distortion model equation. Typical projection models of cameras are perspective (normal lens camera), equidistant, equisolid-angle, orthogonal, and stereographic projection models (fish-eye lens cameras). Each projection model is represented by camera image coordinates, object points, IOPs, and EOPs. The lens distortion model, which explains the distortion caused by the camera lens, is used in both the perspective projection camera and the fish-eye lens camera.

2.1.1. Projection Models

The image coordinates (x, y) of the camera are calculated as shown in Equations (1) to (13) and Figure 1, where $(x_p, y_p)$ are the coordinates of the principal point, $r$ is the distance from the principal point to the image point, $f$ is the focal length, $\theta$ is the incident angle of the object point, $(X, Y, Z)$ are the coordinates of the object point, and $(X_0, Y_0, Z_0, \omega, \varphi, \kappa)$ are the EOPs:

$$x = x_p - \frac{r}{R} U \quad (1)$$

$$y = y_p - \frac{r}{R} V \quad (2)$$

$$R = \sqrt{U^2 + V^2} \quad (3)$$

$$U = m_{11}(X - X_0) + m_{12}(Y - Y_0) + m_{13}(Z - Z_0) \quad (4)$$

$$V = m_{21}(X - X_0) + m_{22}(Y - Y_0) + m_{23}(Z - Z_0) \quad (5)$$

$$W = m_{31}(X - X_0) + m_{32}(Y - Y_0) + m_{33}(Z - Z_0) \quad (6)$$

$$\text{Rotation matrix } M(\omega, \varphi, \kappa) = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \quad (7)$$

Perspective projection (normal lens): $r = f \tan\theta \quad (8)$

Equidistant projection (fish-eye lens): $r = f\theta \quad (9)$

Equisolid-angle projection (fish-eye lens): $r = 2f \sin\dfrac{\theta}{2} \quad (10)$

Orthogonal projection (fish-eye lens): $r = f \sin\theta \quad (11)$

Stereographic projection (fish-eye lens): $r = 2f \tan\dfrac{\theta}{2} \quad (12)$

$$\theta = \arctan\left(\frac{R}{W}\right) \quad (13)$$
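To make the differences between Equations (8) to (12) concrete, the following minimal Python sketch evaluates the radial distance r for each projection model. It is an illustration on our part (the function name and sample values are not from the paper):

```python
import numpy as np

def radial_distance(theta, f, model):
    """Distance r from the principal point to the image point for an
    incident angle theta (rad) and focal length f, per Equations (8)-(12)."""
    if model == "perspective":
        return f * np.tan(theta)                # Equation (8)
    elif model == "equidistant":
        return f * theta                        # Equation (9)
    elif model == "equisolid":
        return 2.0 * f * np.sin(theta / 2.0)    # Equation (10)
    elif model == "orthogonal":
        return f * np.sin(theta)                # Equation (11)
    elif model == "stereographic":
        return 2.0 * f * np.tan(theta / 2.0)    # Equation (12)
    raise ValueError(f"unknown projection model: {model}")

# At theta = 60 deg, the fish-eye models map the ray closer to the image
# center than the perspective model does, i.e., they compress the periphery.
f = 2.9  # focal length in mm (Table 3)
theta = np.deg2rad(60.0)
for m in ["perspective", "equidistant", "equisolid", "orthogonal", "stereographic"]:
    print(f"{m:13s} r = {radial_distance(theta, f, m):.3f} mm")
```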

2.1.2. Lens Distortion Model

The mathematical model of the lens distortion is given by Equations (14) to (18), where $\Delta x$ and $\Delta y$ are the distortions of the image coordinates x and y; $(K_1, K_2, K_3)$ are the radial lens distortion parameters; $(P_1, P_2)$ are the decentering distortion parameters; and $(A_1, A_2)$ are the terms for affinity and shear. $r$ in Equations (8) to (12) can be rewritten as the distance between the image coordinates $(x, y)$ and the principal point coordinates $(x_p, y_p)$, as described in Equation (18).
The mathematical model of a camera is expressed using the projection model Equations (1) and (2) along with the distortion model Equations (14) and (15). Equations (19) and (20) are the final mathematical models of the camera. The image coordinates x and y of the camera are represented by the object point coordinates, IOPs, EOPs, and lens distortions:

$$\Delta x = \bar{x}(K_1 r^2 + K_2 r^4 + K_3 r^6) + P_1(r^2 + 2\bar{x}^2) + 2P_2 \bar{x}\bar{y} + A_1 \bar{x} + A_2 \bar{y} \quad (14)$$

$$\Delta y = \bar{y}(K_1 r^2 + K_2 r^4 + K_3 r^6) + 2P_1 \bar{x}\bar{y} + P_2(r^2 + 2\bar{y}^2) \quad (15)$$

$$\bar{x} = x - x_p \quad (16)$$

$$\bar{y} = y - y_p \quad (17)$$

$$r = \sqrt{\bar{x}^2 + \bar{y}^2} \quad (18)$$

$$x = x_p - \frac{r}{R} U + \Delta x \quad (19)$$

$$y = y_p - \frac{r}{R} V + \Delta y \quad (20)$$
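The lens distortion model of Equations (14) to (18) translates directly into code. The sketch below is a hedged illustration (the function name is ours); it returns the corrections Δx and Δy for image coordinates given in the same units as the IOPs:

```python
import numpy as np

def lens_distortion(x, y, xp, yp, K1, K2, K3, P1, P2, A1, A2):
    """Radial, decentering, and affinity/shear distortion (dx, dy)
    following Equations (14)-(18)."""
    xb = x - xp                                   # Equation (16)
    yb = y - yp                                   # Equation (17)
    r2 = xb**2 + yb**2                            # r^2 from Equation (18)
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3    # K1*r^2 + K2*r^4 + K3*r^6
    dx = xb * radial + P1 * (r2 + 2 * xb**2) + 2 * P2 * xb * yb + A1 * xb + A2 * yb
    dy = yb * radial + 2 * P1 * xb * yb + P2 * (r2 + 2 * yb**2)
    return dx, dy
```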

2.2. Projection Model Partial Derivative

In this study, partial derivatives were derived from the camera projection models for a mathematical analysis of how the correlations change according to the test object and imaging method. A partial derivative provides information on the effect of one variable on the increase or decrease of another variable, so the value of the partial derivative is closely related to the correlation between the two variables.
We determined the combinations of orientation parameters for which the partial derivatives should be taken based on the ground and image coordinates, as shown in Figure 2. Figure 2a,b show the ground and image coordinate systems when the EOP κ is at 0° and 90°, respectively, where x, y, and z are image coordinates and X, Y, and Z are ground coordinates.
In Figure 2a, $x_p$, $y_p$, and $f$ change in the same directions as X0, Y0, and Z0, respectively. Likewise, in Figure 2b, $x_p$, $y_p$, and $f$ change in the same directions as Y0, X0, and Z0, respectively. Therefore, five partial derivatives ($\partial x_p/\partial X_0$, $\partial x_p/\partial Y_0$, $\partial y_p/\partial X_0$, $\partial y_p/\partial Y_0$, and $\partial f/\partial Z_0$) were considered in this study.
Equations (21) to (23) are derived from Equations (19) and (20). The abovementioned five partial derivatives are then derived from Equations (21) to (23) and are shown in Table 1, where $r'$ equals $r$ (as given in Equations (8) to (12)) divided by the focal length $f$:

$$x_p = x + \frac{r}{R} U - \Delta x \quad (21)$$

$$y_p = y + \frac{r}{R} V - \Delta y \quad (22)$$

$$f = \frac{R}{r' U}\left(x_p + \Delta x - x\right) \ \text{ or } \ f = \frac{R}{r' V}\left(y_p + \Delta y - y\right), \qquad r' = \frac{r}{f} \quad (23)$$

More specifically, Equations (24) to (31) were obtained by partial differentiation of Equations (21) and (22) with respect to X0 and Y0, and Equation (32) by partial differentiation of Equation (23) with respect to Z0. Equations (24) to (27) are the partial derivatives when ω, φ, and κ (the rotational components of the EOPs) are all set to 0°, while in Equations (28) to (31), ω and φ are set to 0° and κ is set to 90°. Here $\bar{X}$ and $\bar{Y}$ are, respectively, the differences between the object point coordinates (X, Y) and the EOPs (X0, Y0); hence, $\bar{X} = (X - X_0)$ and $\bar{Y} = (Y - Y_0)$. Equations (24) to (32) show two important characteristics suggesting that the correlations between the orientation parameters of the perspective projection camera and of the fish-eye lens camera may differ.
The details of each characteristic are as follows. First, Equations (24) to (32) all contain $r'$ and $\partial r'/\partial\theta$. Because $r'$ varies with the projection model, the effect of a change in one orientation parameter on another orientation parameter also differs between projection models. Next, for the perspective projection camera, Equations (25), (26), (28), and (31) all become zero, and a partial derivative of 0 means there is no correlation between the corresponding orientation parameters. In those cases, when κ is 0°, there is no correlation between $x_p$ and Y0 or between $y_p$ and X0, and when κ is 90°, there is no correlation between $x_p$ and X0 or between $y_p$ and Y0.
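This claim about the perspective projection can be checked symbolically. The sketch below (using sympy, an assumption of ours, not part of the paper) evaluates the common factor $r' - (\partial r'/\partial\theta)\cos\theta\sin\theta$ that appears in Equations (25), (26), (28), and (31):

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)

# r' = r/f for each projection model (Equations (8)-(12) divided by f)
models = {
    "perspective": sp.tan(theta),
    "equidistant": theta,
    "equisolid": 2 * sp.sin(theta / 2),
    "orthogonal": sp.sin(theta),
    "stereographic": 2 * sp.tan(theta / 2),
}

# Common factor in the off-diagonal partial derivatives
# (Equations (25), (26), (28), (31)).
for name, rp in models.items():
    factor = sp.simplify(rp - sp.diff(rp, theta) * sp.cos(theta) * sp.sin(theta))
    print(f"{name:13s} r' - dr'/dtheta*cos*sin = {factor}")
# Only the perspective model simplifies to 0, which is why these
# cross-correlations vanish identically only for the normal lens camera.
```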

3. Self-Calibration Design

The flow of the self-calibration simulation is shown in Figure 3. First, the test object type and the orientation parameters (IOPs and EOPs) were set up, and the simulation datasets were created using these parameters. Next, self-calibration was performed using the prepared data. Finally, the estimated IOPs were evaluated for accuracy against the predefined orientation parameters.
Four different types of test objects (Plane, V, A, and Room) were used in the self-calibration. The dimensions and characteristics of the test objects are shown in Table 2 and Figure 4. Figure 4a–c show frontal views of the Plane-, V-, and A-type test objects, respectively, and Figure 4d shows the Room-type test object.
The IOPs and camera specifications for the self-calibration simulation are given in Table 3. Each value in Table 3 was set based on a Sunex DSL315 fish-eye lens and a Chameleon3 5.0 MP camera body.
Table 4 shows the image setting configuration designed for the self-calibration simulation: (1) Twenty different cases, from four test object types and five projection models, were considered. (2) Eight self-calibration runs (from four set-A types and four set-B types) were carried out for each case; hence, a total of one hundred and sixty runs were implemented in this research. (3) The set-A type includes six images taken at κ = 0° and three images taken at κ = 90°, while the set-B type includes six images taken at κ = 0° and six images taken at κ = 90°. Figure 5 shows the configuration of image acquisition for the different test object types. The image shooting positions and looking angles were determined to produce convergent images and to view as many planes as possible. Also, both landscape (κ = 0°) and portrait (κ = 90°) images were taken at the same positions. Afterwards, different image-sets were made up of the images taken from different camera shooting positions and looking angles. A sketch of how such simulated observations can be generated follows.
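The following minimal sketch illustrates how one simulated image could be generated, reusing radial_distance from the earlier sketch. The ω-φ-κ rotation convention shown is one common photogrammetric choice and is our assumption; the paper does not spell out its convention:

```python
import numpy as np

def simulate_image_points(obj_pts, X0, Y0, Z0, omega, phi, kappa,
                          f, xp, yp, model, sigma_px=0.5, pixel=0.00345):
    """Project object points (N x 3 array) into one image and add Gaussian
    noise, following the flow of Figure 3. Image coordinates are in mm."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    # Rotation matrix M(omega, phi, kappa) of Equation (7), built as Rz*Ry*Rx
    Mx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    My = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Mz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    M = Mz @ My @ Mx

    d = obj_pts - np.array([X0, Y0, Z0])
    U, V, W = (d @ M.T).T                   # Equations (4)-(6)
    R = np.maximum(np.hypot(U, V), 1e-12)   # Equation (3), guarded at the axis
    theta = np.arctan2(R, W)                # Equation (13)
    r = radial_distance(theta, f, model)    # Equations (8)-(12), defined earlier
    x = xp - r * U / R                      # Equation (1)
    y = yp - r * V / R                      # Equation (2)
    noise = np.random.normal(0.0, sigma_px * pixel, size=(len(x), 2))
    return np.column_stack([x, y]) + noise  # noisy observations (1 sigma = 0.5 px)
```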

4. Analysis of Self-Calibration

The accuracy of self-calibration was evaluated for each of the four test object types and five projection models. The accuracy estimate for each case was obtained by averaging the eight results from the eight image-sets. Self-calibration was analyzed in terms of: (i) stability of self-calibration, (ii) correlation between orientation parameters, (iii) accuracy of the principal point coordinates and focal length, and (iv) accuracy of the distortion parameters.

4.1. Stability of Self-Calibration

The stability of self-calibration was rated at one of three levels: Stable, Unstable, or Divergent, based on whether the orientation parameters could be solved during the calibration process. A run was rated ‘Divergent’ when the correlation between the orientation parameters was very high and most of the orientation parameters could not be solved. Next, when the result was very sensitive to the EOPs, with divergence occurring for certain combinations of images, or when a local-optimum problem appeared, the run was rated ‘Unstable’. Finally, when the calibration was completed without divergence of the orientation parameters or a local-optimum solution, it was rated ‘Stable’.
Table 5 shows the stability evaluation results; the stability of the fish-eye lens camera varies greatly according to the type of test object. With the perspective projection, stable self-calibration was achieved irrespective of the test object type, but the fish-eye lens camera calibrations could not reach solutions with the A-type and Room-type test objects. Next, the Plane-type test object was found to be strongly influenced by the EOPs in the calibration of the fish-eye lens camera: divergence of the orientation parameters appeared depending on the photographing position, the local-optimum phenomenon appeared, and the results were unstable.

4.2. Correlation Analysis in Self-Calibration

Table 6, Table 7, Table 8 and Table 9 show the correlations between the orientation parameters in the self-calibration results for the different test object types. The correlations between the orientation parameters were derived after completion (convergence) of the self-calibration; when the self-calibration did not provide convergent solutions, the correlation coefficients were taken from the first iteration of the procedure.
For the perspective projection, the correlations for all test object types were generally lower than those of the fish-eye lens camera. The maximum correlation value for the perspective projection appeared at f-Z0 when the Plane-type test object was used (Table 6), but it was still lower than the f-Z0 correlation values of the fish-eye projection models.
For the fish-eye lens projection models, the f-Z0 correlation differed greatly between test objects, and this tendency explains the stability evaluation shown in Table 5. In the cases rated Divergent (fish-eye self-calibration using the A- and Room-type test objects), the correlation between f and Z0 reached a maximum of 0.99 (Table 8 and Table 9). For the Plane-type test object, which showed unstable results, the f-Z0 correlation was 0.96~0.97 (Table 6). For the V-type test object, which was rated Stable, the maximum f-Z0 correlation was only 0.45 (Table 7). That is, the lower the f-Z0 correlation, the more stable the self-calibration.
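For reference, correlation coefficients such as those in Table 6, Table 7, Table 8 and Table 9 can be derived from the parameter covariance matrix of the least-squares adjustment. A minimal sketch follows (the variable and index names are illustrative):

```python
import numpy as np

def correlation_matrix(cov):
    """Turn a parameter covariance matrix (e.g., the inverse of the
    normal-equation matrix scaled by the variance factor) into a
    correlation matrix with unit diagonal."""
    sigma = np.sqrt(np.diag(cov))
    return cov / np.outer(sigma, sigma)

# Example: with idx_f and idx_Z0 the positions of f and Z0 in the parameter
# vector, abs(correlation_matrix(cov)[idx_f, idx_Z0]) is the f-Z0 value
# of the kind reported in the tables.
```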

4.3. Accuracy of Principal Point Coordinates and Focal Length

The RMSEs of the estimated principal point coordinates and focal length were calculated from the self-calibration results of the eight image-sets for each case. The errors in the estimated principal point coordinates and focal length are shown in Table 10 and Table 11, respectively. The cases rated Divergent in the stability evaluation were excluded, because their orientation parameters diverged and the calibration did not complete normally. The principal point coordinates and the focal length showed maximum RMSE values of 0.63 and 1.67 pixels, respectively; in other words, both were estimated with high accuracy.

4.4. Accuracy of Distortion Parameters

The accuracy of the distortion parameters was determined from the difference between the true distortion at every pixel and the estimated distortion (calculated using the estimated distortion parameters). Table 12 shows the RMS-residuals of lens distortion, which indicate that the distortion parameters have the lowest accuracy for the V-type test object. Table 13 and Figure 6 show the reasons for this.
Table 13 shows the ratio of the image area covered by the image points of all images used in self-calibration to the total image area. It indicates that the lower the image-coverage ratio of the image points, the larger the distortion parameter error. The V-type test object has a coverage ratio lower than those of the other test objects, which explains its lower distortion accuracy. The A- and Room-type test objects show higher image coverage than the other test objects, but their distortion parameters could not be calculated because the orientation parameters diverged.
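As an illustration, a coverage ratio of this kind can be computed by rasterizing the observed image points onto a coarse occupancy grid. The sketch below assumes the 2448 × 2048 pixel format of Table 3 and an arbitrary 64-pixel cell size; the paper does not state its exact computation:

```python
import numpy as np

def coverage_ratio(img_pts, width=2448, height=2048, cell=64):
    """Percentage of image area covered by observed image points,
    computed on a coarse grid. img_pts is an (N, 2) array of pixel
    coordinates."""
    nx = int(np.ceil(width / cell))
    ny = int(np.ceil(height / cell))
    occupied = np.zeros((ny, nx), dtype=bool)
    cols = np.clip((img_pts[:, 0] // cell).astype(int), 0, nx - 1)
    rows = np.clip((img_pts[:, 1] // cell).astype(int), 0, ny - 1)
    occupied[rows, cols] = True
    return occupied.mean() * 100.0  # percent, comparable to Table 13
```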
Figure 6 shows an example of the image point distribution and the residuals of distortion for each projection model with the V-type test object. In the figure, ‘Distribution of image points’ shows the distribution of all image points of the 12 images (in one image-set) at the same time; the inside of the blue solid line is the region in which the image is recorded. In the ‘Residuals of distortion’ row, green pixels represent residuals of distortion below 0.5 pixels, and red pixels indicate the opposite case. The figure shows that the residuals of distortion are relatively high in regions where no image points are distributed.
The reason for the low image coverage of the image points with the V-type test object is illustrated in Figure 7, which shows images taken from the front of A- and V-type test objects formed of two 20 m × 5 m planes. Panels (a) and (c) are perspective projection images, and (b) and (d) are orthogonal projection images. In the case of the A-type test object, the image points are well distributed throughout the image, whereas for the V-type test object the image points are distributed only in the center of the image. As a special case, the V-type test object can improve the distribution of image points when the image is taken nearly orthogonally to one side of the test object. In that case, however, the geometric relationship is similar to that of an image taken orthogonally to the Plane-type test object, and the correlation between the parameters increases greatly.
Figure 7 provides additional information that should be considered for real self-calibration. In the case of the V-type test object, neighboring image points are very close to each other at the outer part of the image. When such image points appear, it is very difficult to obtain their image coordinates in a real calibration situation, which can negatively affect the self-calibration results.

4.5. Comprehensive Analysis of Self-Calibration

Table 14 shows the RMS-residuals of the IOPs (principal point coordinates, focal length, and distortion parameters). In the case of the V-type test object, the correlations between the orientation parameters were lower than for the other test object types, but the RMS-residuals of the IOPs were high, because the distribution of image points was not suitable for an accurate calculation of the distortion parameters. On the other hand, in the case of fish-eye lens calibration using the A- and Room-type test objects, divergence appeared due to the high correlations between the orientation parameters.
The analysis of self-calibration according to the test object types and projection models in Sections 4.1 to 4.4 is summarized as follows.
  • For the perspective projection model, the type of test object does not significantly affect the self-calibration accuracy. On the other hand, the fish-eye lens projection models are more influenced by the test object type than the perspective projection model.
  • The V-type test object is the most effective at resolving the correlations between the orientation parameters of the fish-eye lens camera. On the other hand, the A- and Room-type test objects are relatively inadequate for resolving the correlations, especially for f-Z0.
  • For ensuring the accuracy of the distortion parameters, the V-type test object is inadequate compared with the other test objects, because with the V-type test object it is difficult to acquire images with evenly distributed image points.
  • The V-type test object has the lowest coverage ratio of image points; the Plane-, A-, and Room-type test objects have relatively high ones.

5. Proposed Fish-Eye Lens Camera Self-Calibration Method

Based on the analysis in Section 4, additional experiments were performed for the four different fish-eye lens projection cameras in order to propose an accurate fish-eye lens camera self-calibration method. The simulation was carried out using the V-type test object and three image-sets. The accurate self-calibration method was then proposed based on a comparative analysis of the accuracy of the IOPs and the correlations between parameters.
The additional experiments were carried out as follows. Firstly, the image-sets included both landscape (κ = 0°) and portrait (κ = 90°) images to lower the correlations of x_p-X0 and y_p-Y0. Secondly, the V-type test object was used to reduce the f-Z0 correlation. Lastly, images viewing two planes of the V-type test object, and others viewing one of its planes, were utilized to estimate the distortion parameters accurately. Figure 8 shows the configuration of image acquisition for the additional experiments, and Table 15 explains the image groups shown in Figure 8. The images in Group A (locations #1 to 6) are used for resolving the correlations between parameters. Those in Group B (locations #7 to 10) and Group C (locations #11 to 14) are used for estimating accurate distortion parameters. In Groups B and C, images viewing one plane of the V-type test object were used to ensure adequate image coverage; this works similarly to the Plane-type test object case, since only one plane appears in the images. The images in Groups B and C were taken orthogonally and obliquely to the plane, respectively.
The image-sets utilized for the self-calibration were determined from combinations of the A, B, and C image groups, as seen in Table 16. All the image-sets include Group A because it reduces the correlations between IOPs and EOPs (especially x_p-X0, y_p-Y0, and f-Z0). Image-sets 1 and 2 include Groups B and C, respectively. Image-set 3 includes Groups A, B, and C, and hence all image groups. The experiments using these three image-sets were compared and analyzed to figure out which image-set is more effective for accurate self-calibration.
Table 17, Table 18 and Table 19 show the correlation results derived from the self-calibration using the different image-sets. The correlations between IOPs and EOPs were low regardless of the image-set, because all the image-sets included Group A’s images. Relatively high correlation values (i.e., 0.70–0.72) appeared at f-Z0 of image-set 1 for all projection models, whereas image-sets 2 and 3 showed low correlation values. Group B’s images in image-set 1, which were taken orthogonally to the plane, hindered the decoupling of the f-Z0 correlation. Contrarily, Group C’s images in image-sets 2 and 3, which were taken obliquely to the plane, contributed to its decoupling.
Table 20 and Table 21 show the absolute errors of the estimated principal point coordinates and focal length, respectively. The accuracies of the principal point coordinates were better than 1 pixel in all cases, as seen in Table 20. For the focal length, image-set 1 showed mostly higher errors than the other image-sets (Table 21).
Table 22 shows the RMS-residuals of lens distortion. The RMS values of image-set 1 were higher than those of the other image-sets regardless of the projection model; the worst case, 7.48 pixels, came from the equidistant model. For image-sets 2 and 3, the RMS values were significantly improved compared with image-set 1. Table 23 and Figure 9 show the coverage ratio and the distribution of all image points (in one image-set), respectively. As seen in the table and figure, the coverage ratio of image-set 1 was lower than those of image-sets 2 and 3, which is why the residuals of lens distortion were relatively high for image-set 1.
Table 24 shows the RMS-residuals of the IOPs (principal point coordinates, focal length, and distortion parameters). As seen in the table, image-sets 2 and 3 showed good results with less than 0.85 pixels in all projection models. Image-set 3 (using 14 images) provided slightly better results than image-set 2 (using 10 images).
Based on the analysis of the additional self-calibration experiments, we propose the following approach for deriving accurate fish-eye lens camera self-calibration results:
  • Images viewing the V-type test object frontally are used in the self-calibration. This reduces the correlations between parameters (especially f-Z0).
  • Both landscape (κ = 0°) and portrait (κ = 90°) images are used in the self-calibration. This reduces the correlations between parameters (especially x_p-X0 and y_p-Y0).
  • Images viewing one plane of the V-type test object are used in the self-calibration. This ensures adequate image coverage to accommodate lens distortion and hence increases the accuracy of the lens distortion parameters. The effect is greater with oblique images than with orthogonal ones.
  • All of the abovementioned images can be included in the self-calibration procedure at the same time, which yields reliable and accurate IOPs.

6. Conclusions

This study was conducted in three steps: (i) mathematical analysis of the fish-eye lens projection and perspective projection models, (ii) self-calibration analysis of the fish-eye lens camera and the perspective projection camera for each test-bed type, and (iii) proposal of a fish-eye lens camera self-calibration method based on the analysis in (ii) and additional experiments.
In the ‘mathematical analysis of the projection models’, the mathematical basis for each projection model showing a different correlation tendency was confirmed through the partial derivatives of the fish-eye lens projection models. In the ‘self-calibration accuracy analysis’ step, the V-type test-bed was advantageous in resolving the correlations but ineffective for the analysis of camera lens distortion. On the other hand, with the A- and Room-type test-beds, self-calibration has a high probability of divergence because of the very high correlation between the focal length and Z0.
In the ‘proposed fish-eye lens camera self-calibration method’, we proposed a fish-eye lens camera self-calibration method that derives high-accuracy IOPs. The proposed method conducts self-calibration by using the V-type test object both as a V-type and as a Plane-type target; in other words, it uses images viewing the V-type test object frontally together with images viewing one plane of the test object in the self-calibration adjustment. Hence, the method adopts the advantages of the V- and Plane-type test objects at the same time. Fish-eye lens camera self-calibration performed with the proposed method showed RMS-residuals of less than 1 pixel.
This study will contribute to the self-calibration of optical cameras in the following ways. First, the results provide a theoretical and empirical basis for a test-bed design that can improve the accuracy of fish-eye lens camera self-calibration. Second, higher-accuracy IOPs can be derived through the proposed self-calibration method.

Author Contributions

K.H.C. and C.K. designed the experiments and wrote the paper; K.H.C. performed the experiments; Y.K. made valuable suggestions to improve the quality of the paper.

Acknowledgments

The authors would like to acknowledge a grant from the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (No. NRF-2015R1C1A1A02037511).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marković, I.; Chaumette, F.; Petrović, I. Moving object detection, tracking and following using an omnidirectional camera on a mobile robot. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 5630–5635. [Google Scholar]
  2. Caruso, D.; Engel, J.; Cremers, D. Large-scale direct slam for omnidirectional cameras. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 141–148. [Google Scholar]
  3. Sánchez, J.S.; Gerhmann, A.; Thevenon, P.; Brocard, P.; Afia, A.B.; Julien, O. Use of a FishEye camera for GNSS NLOS exclusion and characterization in urban environments. In Proceedings of the 2016 International Technical Meeting of The Institute of Navigation, Monterey, CA, USA, 25–28 January 2016. [Google Scholar]
  4. Schöps, T.; Sattler, T.; Häne, C.; Pollefeys, M. 3D modeling on the go: Interactive 3D reconstruction of large-scale scenes on mobile devices. In Proceedings of the 2015 International Conference on 3D Vision (3DV), Lyon, France, 19–22 October 2015; pp. 291–299. [Google Scholar]
  5. Gao, Y.; Lin, C.; Zhao, Y.; Wang, X.; Wei, S.; Huang, Q. 3-D Surround View for Advanced Driver Assistance Systems. IEEE Trans. Intell. Transp. Syst. 2018, 19, 320–328. [Google Scholar] [CrossRef]
  6. Yang, Z.; Zhao, Y.; Hu, X.; Yin, Y.; Zhou, L.; Tao, D. A flexible vehicle surround view camera system by central-around coordinate mapping model. Multimedia Tools Appl. 2018, 1–24. [Google Scholar]
  7. Mei, C.; Rives, P. Single view point omnidirectional camera calibration from planar grids. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3945–3950. [Google Scholar]
  8. Marcato Junior, J.; Moraes, M.V.A.d.; Tommaselli, A.M.G. Experimental assessment of techniques for fisheye camera calibration. Boletim de Ciências Geodésicas 2015, 21, 637–651. [Google Scholar] [CrossRef]
  9. Hughes, C.; Denny, P.; Jones, E.; Glavin, M. Accuracy of fish-eye lens models. Appl. Opt. 2010, 49, 3338–3347. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Brown, D.C. Decentering distortion of lenses. Photogramm. Eng. 1966, 32, 444–462. [Google Scholar]
  11. Beyer, H.A. Accurate calibration of CCD-cameras. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, IL, USA, 15–18 June 1992; pp. 96–101. [Google Scholar]
  12. Fraser, C.S. Digital camera self-calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159. [Google Scholar] [CrossRef]
  13. Miyamoto, K. Fish eye lens. JOSA 1964, 54, 1060–1061. [Google Scholar] [CrossRef]
  14. Ray, S.F. Applied Photographic Optics: Lenses and Optical Systems for Photography, Film, Video, Electronic and Digital Imaging; Focal Press: Oxford, UK, 2002. [Google Scholar]
  15. Abraham, S.; Förstner, W. Fish-eye-stereo calibration and epipolar rectification. ISPRS J. Photogramm. Remote Sens. 2005, 59, 278–288. [Google Scholar] [CrossRef]
  16. Kannala, J.; Brandt, S.S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1335–1340. [Google Scholar] [CrossRef] [PubMed]
  17. Basu, A.; Licardie, S. Modeling fish-eye lenses. In Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’93), Yokohama, Japan, 26–30 July 1993; pp. 1822–1828. [Google Scholar]
  18. Basu, A.; Licardie, S. Alternative models for fish-eye lenses. Pattern Recognit. Lett. 1995, 16, 433–441. [Google Scholar] [CrossRef]
  19. Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vision Appl. 2001, 13, 14–24. [Google Scholar] [CrossRef]
  20. Brauer-Burchardt, C.; Voss, K. A new algorithm to correct fish-eye-and strong wide-angle-lens-distortion from single images. In Proceedings of the International Conference on Image Processing (Cat. No.01CH37205), Thessaloniki, Greece, 7–10 October 2001; pp. 225–228. [Google Scholar]
  21. Fitzgibbon, A.W. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; pp. I125–I132. [Google Scholar]
  22. Li, S. Full-view spherical image camera. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; pp. 386–390. [Google Scholar]
  23. Li, C.; Wang, L.; Lu, X.; Chen, J.; Fan, S. Researches on hazard avoidance cameras calibration of lunar rover. In Proceedings of the International Conference on Space Optics, Rhodes Island, Greece, 4–8 October 2010; p. 8. [Google Scholar]
  24. Hughes, C.; McFeely, R.; Denny, P.; Glavin, M.; Jones, E. Equidistant (fθ) fish-eye perspective with application in distortion centre estimation. Image Vision Comput. 2010, 28, 538–551. [Google Scholar] [CrossRef]
  25. Sahin, C. Comparison and calibration of mobile phone fisheye lens and regular fisheye lens via equidistant model. J. Sens. 2016, 2016, 9379203. [Google Scholar] [CrossRef]
  26. Schneider, D.; Schwalbe, E. Integrated processing of terrestrial laser scanner data and fisheye-camera image data. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2008, 37, 1037–1044. [Google Scholar]
  27. Perfetti, L.; Polari, C.; Fassi, F. Fisheye photogrammetry: tests and methodologies for the survey of narrow spaces. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, 42, 573–580. [Google Scholar] [CrossRef]
  28. Bakstein, H.; Pajdla, T. Panoramic mosaicing with a 180/spl deg/field of view lens. In Proceedings of the Third Workshop on Omnidirectional Vision, Copenhagen, Denmark, 2 June 2002; pp. 60–67. [Google Scholar]
  29. Xiong, X.; Choi, B.-J. Position estimation algorithm based on natural landmark and fish-eyes’ lens for indoor mobile robot. In Proceedings of the IEEE 3rd International Conference on Communication Software and Networks (ICCSN), Xi’an, China, 27–29 May 2011; pp. 596–600. [Google Scholar]
  30. Schneider, D.; Schwalbe, E.; Maas, H.-G. Validation of geometric models for fisheye lenses. ISPRS J. Photogramm. Remote Sens. 2009, 64, 259–266. [Google Scholar] [CrossRef]
  31. Urban, S.; Leitloff, J.; Hinz, S. Improved wide-angle, fisheye and omnidirectional camera calibration. ISPRS J. Photogramm. Remote Sens. 2015, 108, 72–79. [Google Scholar] [CrossRef]
  32. Remondino, F.; Fraser, C. Digital camera calibration methods: considerations and comparisons. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2006, 36, 266–272. [Google Scholar]
  33. Schiller, I.; Beder, C.; Koch, R. Calibration of a PMD-camera using a planar calibration pattern together with a multi-camera setup. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2008, 21, 297–302. [Google Scholar]
  34. Beyer, H.A. Geometric and Radiometric Analysis of a CCD-Camera Based Photogrammetric Close-Range System. Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 1992. [Google Scholar] [CrossRef]
  35. Atkinson, K.B. Close Range Photogrammetry and Machine Vision; Whittles Publ.: London, UK, 1996. [Google Scholar]
  36. Li, M.; Lavest, J.-M. Some aspects of zoom lens camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 1105–1110. [Google Scholar] [Green Version]
  37. Gruen, A.; Beyer, H.A. System calibration through self-calibration. In Calibration and Orientation of Cameras in Computer Vision; Springer: Berlin/Heidelberg, Germany, 2001; pp. 163–193. [Google Scholar]
  38. Habib, A.; Morgan, M.F. Automatic calibration of low-cost digital cameras. Opt. Eng. 2003, 42, 948–956. [Google Scholar]
  39. Fraser, C.S. Automatic camera calibration in close range photogrammetry. Photogramm. Eng. Remote Sens. 2013, 79, 381–388. [Google Scholar] [CrossRef]
Figure 1. Incident angle ( θ ) in imaging geometry.
Figure 2. Ground and image coordinate system when (a) κ = 0°, (b) κ = 90°.
Figure 3. Flow chart of self-calibration simulation.
Figure 4. Test object shapes: (a) Plane-, (b) V-, (c) A-, and (d) Room-type.
Figure 5. Configuration of image acquisition according to different test object types: (a) Plane-, (b) V-, (c) A-, and (d) Room-type.
Figure 6. Image point distribution and residuals of distortion.
Figure 7. Distribution of image points according to different test object type (A and V).
Figure 8. Configuration of image acquisition for additional experiments.
Figure 9. Distribution of image points.
Table 1. Partial derivatives considered (here $r' = r/f$, $\bar{X} = X - X_0$, and $\bar{Y} = Y - Y_0$; the terms marked "perspective projection = 0" vanish identically for the perspective model).

Case ω = φ = κ = 0°:

$$\frac{\partial x_p}{\partial X_0} = \frac{1}{R^3}\left(r'\bar{X}^2 - r'R^2 - \frac{\partial r'}{\partial\theta}\cos\theta\sin\theta\,\bar{X}^2\right) \quad (24)$$

$$\frac{\partial x_p}{\partial Y_0} = \frac{\bar{X}\bar{Y}}{R^3}\left(r' - \frac{\partial r'}{\partial\theta}\cos\theta\sin\theta\right) \text{ (perspective projection = 0)} \quad (25)$$

$$\frac{\partial y_p}{\partial X_0} = \frac{\bar{X}\bar{Y}}{R^3}\left(r' - \frac{\partial r'}{\partial\theta}\cos\theta\sin\theta\right) \text{ (perspective projection = 0)} \quad (26)$$

$$\frac{\partial y_p}{\partial Y_0} = \frac{1}{R^3}\left(r'\bar{Y}^2 - r'R^2 - \frac{\partial r'}{\partial\theta}\cos\theta\sin\theta\,\bar{Y}^2\right) \quad (27)$$

Case ω = φ = 0°, κ = 90°:

$$\frac{\partial x_p}{\partial X_0} = \frac{\bar{X}\bar{Y}}{R^3}\left(r' - \frac{\partial r'}{\partial\theta}\cos\theta\sin\theta\right) \text{ (perspective projection = 0)} \quad (28)$$

$$\frac{\partial x_p}{\partial Y_0} = \frac{1}{R^3}\left(r'\bar{Y}^2 - r'R^2 - \frac{\partial r'}{\partial\theta}\cos\theta\sin\theta\,\bar{Y}^2\right) \quad (29)$$

$$\frac{\partial y_p}{\partial X_0} = \frac{1}{R^3}\left(r'\bar{X}^2 - r'R^2 - \frac{\partial r'}{\partial\theta}\cos\theta\sin\theta\,\bar{X}^2\right) \quad (30)$$

$$\frac{\partial y_p}{\partial Y_0} = \frac{\bar{X}\bar{Y}}{R^3}\left(r' - \frac{\partial r'}{\partial\theta}\cos\theta\sin\theta\right) \text{ (perspective projection = 0)} \quad (31)$$

Both cases:

$$\frac{\partial f}{\partial Z_0} = \frac{f}{Z - Z_0}\left(\frac{dr'}{d\theta}\,\frac{\cos\theta\sin\theta}{r'}\right) \quad (32)$$
Table 2. Dimensions of the test objects considered.

Test Object Type | Dimensions
Plane | 8 m (width) × 3.5 m (height)
V | two planes, each 6 m (width) × 3.5 m (height)
A | two planes, each 6 m (width) × 3.5 m (height)
Room | 7 m (length) × 5 m (width) × 3.5 m (height)
Table 3. Specification of the camera in the self-calibration simulation.

f (mm) | x_p (mm) | y_p (mm)
2.9 | 0.004 | 0.002

Distortion parameters:
K1 = 1 × 10⁻⁵ | K2 = 1 × 10⁻⁷ | K3 = 3 × 10⁻⁹ | P1 = −1 × 10⁻⁵ | P2 = −2 × 10⁻⁷ | A1 = 1 × 10⁻⁵ | A2 = 2 × 10⁻⁷

Pixel size (mm) | Image size (pixel, x × y) | Random noise (1σ) (pixel)
0.00345 | 2448 × 2048 | 0.5
Table 4. Image setting configuration for the self-calibration simulation.

Number of test object types: 4 (a)
Number of projection models: 5 (b)
Total number of cases: 20 (= 4 (a) × 5 (b))
Number of self-calibration runs for each case: eight (from four set-A types and four set-B types)

Image-Set Type | Images at κ = 0° | Images at κ = 90° | Total
set-A | 6 | 3 | 9 (= 6 + 3)
set-B | 6 | 6 | 12 (= 6 + 6)
Table 5. Self-calibration stability.

Projection Model | Plane Type | V Type | A Type | Room Type
Perspective | Stable | Stable | Stable | Stable
Equidistant | Unstable | Stable | Divergent | Divergent
Equisolid-angle | Unstable | Stable | Divergent | Divergent
Orthogonal | Unstable | Stable | Divergent | Divergent
Stereographic | Unstable | Stable | Divergent | Divergent
Table 6. Correlation between the orientation parameters (Plane-type test object).

Projection Model | IOP | X0 | Y0 | Z0 | ω | φ | κ
Perspective | x_p | 0.64 | 0.13 | 0.10 | 0.12 | 0.52 | 0.04
 | y_p | 0.14 | 0.41 | 0.17 | 0.23 | 0.11 | 0.02
 | f | 0.10 | 0.33 | 0.93 | 0.27 | 0.01 | 0.01
Equidistant | x_p | 0.58 | 0.12 | 0.10 | 0.11 | 0.18 | 0.01
 | y_p | 0.14 | 0.48 | 0.14 | 0.13 | 0.01 | 0.02
 | f | 0.11 | 0.18 | 0.97 | 0.13 | 0.01 | 0.01
Equisolid-angle | x_p | 0.49 | 0.12 | 0.10 | 0.10 | 0.16 | 0.01
 | y_p | 0.14 | 0.33 | 0.19 | 0.19 | 0.03 | 0.02
 | f | 0.12 | 0.18 | 0.96 | 0.13 | 0.02 | 0.01
Orthogonal | x_p | 0.51 | 0.12 | 0.10 | 0.10 | 0.35 | 0.02
 | y_p | 0.13 | 0.29 | 0.05 | 0.17 | 0.01 | 0.04
 | f | 0.10 | 0.31 | 0.96 | 0.20 | 0.01 | 0.01
Stereographic | x_p | 0.54 | 0.13 | 0.10 | 0.11 | 0.51 | 0.02
 | y_p | 0.14 | 0.37 | 0.16 | 0.16 | 0.01 | 0.03
 | f | 0.11 | 0.31 | 0.97 | 0.24 | 0.02 | 0.01
Table 7. Correlation between the orientation parameters (V-type test object).

Projection Model | IOP | X0 | Y0 | Z0 | ω | φ | κ
Perspective | x_p | 0.28 | 0.12 | 0.16 | 0.12 | 0.66 | 0.09
 | y_p | 0.13 | 0.21 | 0.05 | 0.55 | 0.12 | 0.10
 | f | 0.17 | 0.12 | 0.49 | 0.11 | 0.03 | 0.01
Equidistant | x_p | 0.07 | 0.02 | 0.04 | 0.13 | 0.58 | 0.09
 | y_p | 0.02 | 0.13 | 0.03 | 0.47 | 0.13 | 0.10
 | f | 0.07 | 0.04 | 0.43 | 0.01 | 0.03 | 0.01
Equisolid-angle | x_p | 0.07 | 0.02 | 0.04 | 0.13 | 0.55 | 0.09
 | y_p | 0.02 | 0.13 | 0.02 | 0.46 | 0.12 | 0.09
 | f | 0.07 | 0.04 | 0.44 | 0.01 | 0.02 | 0.01
Orthogonal | x_p | 0.07 | 0.02 | 0.04 | 0.13 | 0.55 | 0.09
 | y_p | 0.02 | 0.14 | 0.02 | 0.56 | 0.12 | 0.09
 | f | 0.07 | 0.04 | 0.45 | 0.01 | 0.02 | 0.01
Stereographic | x_p | 0.08 | 0.02 | 0.05 | 0.13 | 0.58 | 0.09
 | y_p | 0.03 | 0.12 | 0.03 | 0.57 | 0.13 | 0.10
 | f | 0.07 | 0.03 | 0.43 | 0.01 | 0.03 | 0.00
Table 8. Correlation between the orientation parameters (A-type test object).

Projection Model | IOP | X0 | Y0 | Z0 | ω | φ | κ
Perspective | x_p | 0.29 | 0.10 | 0.13 | 0.11 | 0.55 | 0.02
 | y_p | 0.06 | 0.35 | 0.08 | 0.51 | 0.08 | 0.26
 | f | 0.14 | 0.06 | 0.51 | 0.02 | 0.06 | 0.01
Equidistant | x_p | 0.25 | 0.15 | 0.14 | 0.19 | 0.49 | 0.16
 | y_p | 0.11 | 0.30 | 0.16 | 0.32 | 0.20 | 0.21
 | f | 0.16 | 0.17 | 0.98 | 0.14 | 0.07 | 0.02
Equisolid-angle | x_p | 0.31 | 0.20 | 0.10 | 0.27 | 0.43 | 0.25
 | y_p | 0.16 | 0.32 | 0.16 | 0.34 | 0.16 | 0.21
 | f | 0.16 | 0.16 | 0.99 | 0.15 | 0.07 | 0.03
Orthogonal | x_p | 0.25 | 0.23 | 0.16 | 0.33 | 0.30 | 0.34
 | y_p | 0.12 | 0.27 | 0.11 | 0.32 | 0.13 | 0.19
 | f | 0.16 | 0.10 | 0.98 | 0.14 | 0.07 | 0.05
Stereographic | x_p | 0.30 | 0.13 | 0.19 | 0.13 | 0.31 | 0.15
 | y_p | 0.19 | 0.29 | 0.18 | 0.27 | 0.09 | 0.11
 | f | 0.17 | 0.18 | 0.99 | 0.17 | 0.15 | 0.03

The correlation coefficients of the fish-eye lens projections came from the first iteration.
Table 9. Correlation between the orientation parameters (Room-type test object).

Projection Model | IOP | X0 | Y0 | Z0 | ω | φ | κ
Perspective | x_p | 0.28 | 0.16 | 0.11 | 0.18 | 0.55 | 0.04
 | y_p | 0.06 | 0.27 | 0.08 | 0.30 | 0.10 | 0.04
 | f | 0.11 | 0.13 | 0.33 | 0.11 | 0.11 | 0.01
Equidistant | x_p | 0.28 | 0.12 | 0.10 | 0.19 | 0.42 | 0.12
 | y_p | 0.11 | 0.29 | 0.11 | 0.28 | 0.18 | 0.07
 | f | 0.11 | 0.12 | 0.98 | 0.11 | 0.12 | 0.01
Equisolid-angle | x_p | 0.27 | 0.11 | 0.13 | 0.16 | 0.45 | 0.19
 | y_p | 0.16 | 0.26 | 0.15 | 0.26 | 0.12 | 0.11
 | f | 0.14 | 0.13 | 0.99 | 0.15 | 0.08 | 0.16
Orthogonal | x_p | 0.20 | 0.11 | 0.15 | 0.15 | 0.72 | 0.09
 | y_p | 0.07 | 0.23 | 0.13 | 0.52 | 0.15 | 0.30
 | f | 0.09 | 0.06 | 0.98 | 0.14 | 0.14 | 0.09
Stereographic | x_p | 0.24 | 0.11 | 0.13 | 0.06 | 0.45 | 0.09
 | y_p | 0.06 | 0.25 | 0.05 | 0.26 | 0.12 | 0.11
 | f | 0.14 | 0.13 | 0.98 | 0.15 | 0.08 | 0.16

The correlation coefficients of the fish-eye lens projections came from the first iteration.
Table 10. RMSE of principal point coordinates (pixel).

Projection Model | Plane | V | A | Room
Perspective | 0.38 | 0.27 | 0.17 | 0.20
Equidistant | 0.27 | 0.44 | N/A | N/A
Equisolid-angle | 0.23 | 0.63 | N/A | N/A
Orthogonal | 0.24 | 0.19 | N/A | N/A
Stereographic | 0.26 | 0.29 | N/A | N/A
Table 11. RMSE of focal length (pixel).

Projection Model | Plane | V | A | Room
Perspective | 0.39 | 0.15 | 0.05 | 0.08
Equidistant | 1.17 | 0.47 | N/A | N/A
Equisolid-angle | 1.22 | 0.23 | N/A | N/A
Orthogonal | 1.67 | 0.37 | N/A | N/A
Stereographic | 0.35 | 0.30 | N/A | N/A
Table 12. RMS-residuals of lens distortion (pixel).

Projection Model | Plane | V | A | Room
Perspective | 0.65 | 1.89 | 0.24 | 0.26
Equidistant | 0.75 | 4.22 | N/A | N/A
Equisolid-angle | 0.90 | 9.91 | N/A | N/A
Orthogonal | 0.72 | 0.73 | N/A | N/A
Stereographic | 0.29 | 4.47 | N/A | N/A
Table 13. Coverage ratio of image points (mean, %).

Projection Model | Plane | V | A | Room
Perspective | 85 | 63 | 93 | 91
Equidistant | 73 | 39 | 88 | 95
Equisolid-angle | 70 | 40 | 86 | 93
Orthogonal | 81 | 56 | 90 | 93
Stereographic | 80 | 44 | 90 | 96
Table 14. RMS-residuals of IOPs (pixel).

Projection Model | Plane | V | A | Room
Perspective | 0.87 | 1.99 | 0.34 | 0.39
Equidistant | 0.48 | 4.28 | N/A | N/A
Equisolid-angle | 0.44 | 10.15 | N/A | N/A
Orthogonal | 0.40 | 0.75 | N/A | N/A
Stereographic | 0.35 | 4.55 | N/A | N/A
Table 15. Explanation of image groups shown in Figure 8.

Group | Location Numbers and κ | Purpose
A | 1, 2, 3 (κ = 0°); 4, 5, 6 (κ = 90°) | To reduce the correlations of x_p-X0, y_p-Y0, and f-Z0
B | 7, 8 (κ = 0°); 9, 10 (κ = 90°) | To reduce the correlations of x_p-X0 and y_p-Y0, and to increase image coverage
C | 11, 12 (κ = 0°); 13, 14 (κ = 90°) | Same purpose as Group B
Table 16. Image-set configuration for the additional experiments.

Image-Set | Included Image Groups | Number of Images
1 | A, B | 10
2 | A, C | 10
3 | A, B, C (all image groups) | 14
Table 17. Correlation between the orientation parameters (image-set 1).

Projection Model | IOP | X0 | Y0 | Z0 | ω | φ | κ
Equidistant | x_p | 0.14 | 0.08 | 0.07 | 0.35 | 0.49 | 0.01
 | y_p | 0.12 | 0.17 | 0.01 | 0.41 | 0.32 | 0.08
 | f | 0.07 | 0.01 | 0.71 | 0.01 | 0.01 | 0.01
Equisolid-angle | x_p | 0.13 | 0.08 | 0.06 | 0.35 | 0.49 | 0.00
 | y_p | 0.11 | 0.17 | 0.01 | 0.41 | 0.32 | 0.07
 | f | 0.08 | 0.01 | 0.71 | 0.01 | 0.01 | 0.01
Orthogonal | x_p | 0.13 | 0.07 | 0.05 | 0.34 | 0.47 | 0.00
 | y_p | 0.10 | 0.15 | 0.01 | 0.38 | 0.31 | 0.05
 | f | 0.09 | 0.01 | 0.72 | 0.01 | 0.01 | 0.01
Stereographic | x_p | 0.16 | 0.07 | 0.08 | 0.35 | 0.49 | 0.01
 | y_p | 0.14 | 0.18 | 0.02 | 0.42 | 0.33 | 0.09
 | f | 0.07 | 0.01 | 0.70 | 0.01 | 0.01 | 0.01
Table 18. Correlation between the orientation parameters (image-set 2).

Projection Model | IOP | X0 | Y0 | Z0 | ω | φ | κ
Equidistant | x_p | 0.09 | 0.12 | 0.11 | 0.38 | 0.53 | 0.10
 | y_p | 0.09 | 0.13 | 0.09 | 0.51 | 0.36 | 0.23
 | f | 0.14 | 0.01 | 0.47 | 0.03 | 0.02 | 0.03
Equisolid-angle | x_p | 0.09 | 0.12 | 0.11 | 0.46 | 0.45 | 0.17
 | y_p | 0.09 | 0.13 | 0.10 | 0.44 | 0.43 | 0.16
 | f | 0.15 | 0.01 | 0.46 | 0.03 | 0.02 | 0.03
Orthogonal | x_p | 0.12 | 0.10 | 0.08 | 0.41 | 0.36 | 0.14
 | y_p | 0.12 | 0.10 | 0.08 | 0.36 | 0.41 | 0.08
 | f | 0.22 | 0.01 | 0.46 | 0.02 | 0.02 | 0.02
Stereographic | x_p | 0.09 | 0.11 | 0.09 | 0.52 | 0.40 | 0.23
 | y_p | 0.09 | 0.12 | 0.07 | 0.39 | 0.51 | 0.11
 | f | 0.16 | 0.01 | 0.47 | 0.01 | 0.03 | 0.01
Table 19. Correlation between the orientation parameters (image-set 3).

Projection Model | IOP | X0 | Y0 | Z0 | ω | φ | κ
Equidistant | x_p | 0.14 | 0.09 | 0.15 | 0.34 | 0.52 | 0.11
 | y_p | 0.08 | 0.17 | 0.06 | 0.43 | 0.31 | 0.20
 | f | 0.19 | 0.00 | 0.45 | 0.02 | 0.03 | 0.02
Equisolid-angle | x_p | 0.14 | 0.09 | 0.15 | 0.40 | 0.46 | 0.16
 | y_p | 0.08 | 0.17 | 0.07 | 0.39 | 0.36 | 0.15
 | f | 0.18 | 0.00 | 0.45 | 0.02 | 0.02 | 0.02
Orthogonal | x_p | 0.16 | 0.08 | 0.13 | 0.34 | 0.37 | 0.12
 | y_p | 0.10 | 0.15 | 0.07 | 0.33 | 0.31 | 0.11
 | f | 0.20 | 0.00 | 0.44 | 0.02 | 0.03 | 0.02
Stereographic | x_p | 0.15 | 0.08 | 0.13 | 0.36 | 0.51 | 0.13
 | y_p | 0.09 | 0.16 | 0.05 | 0.43 | 0.32 | 0.21
 | f | 0.15 | 0.00 | 0.45 | 0.02 | 0.04 | 0.02
Table 20. Absolute error of principal point coordinates (pixel).

Projection Model | Image-Set 1 (x_p / y_p) | Image-Set 2 (x_p / y_p) | Image-Set 3 (x_p / y_p)
Equidistant | 0.27 / 0.30 | 0.77 / 0.13 | 0.71 / 0.02
Equisolid-angle | 0.56 / 0.06 | 0.28 / 0.15 | 0.44 / 0.14
Orthogonal | 0.25 / 0.08 | 0.09 / 0.40 | 0.04 / 0.10
Stereographic | 0.39 / 0.12 | 0.29 / 0.23 | 0.01 / 0.18
Table 21. Absolute error of focal length (pixel).

Projection Model | Image-Set 1 | Image-Set 2 | Image-Set 3
Equidistant | 1.97 | 0.04 | 0.04
Equisolid-angle | 1.25 | 0.22 | 0.46
Orthogonal | 1.91 | 0.58 | 0.16
Stereographic | 0.61 | 0.96 | 0.70
Table 22. RMS-residuals of lens distortion (pixel).

Projection Model | Image-Set 1 | Image-Set 2 | Image-Set 3
Equidistant | 7.48 | 0.18 | 0.15
Equisolid-angle | 2.47 | 0.34 | 0.39
Orthogonal | 1.21 | 0.32 | 0.08
Stereographic | 3.12 | 1.50 | 0.75
Table 23. Coverage ratio of image points (%).

Projection Model | Image-Set 1 | Image-Set 2 | Image-Set 3
Equidistant | 49 | 85 | 85
Equisolid-angle | 49 | 86 | 86
Orthogonal | 69 | 90 | 90
Stereographic | 58 | 77 | 77
Table 24. RMS-residuals of IOPs (pixel).

Projection Model | Image-Set 1 | Image-Set 2 | Image-Set 3
Equidistant | 6.12 | 0.85 | 0.70
Equisolid-angle | 1.63 | 0.28 | 0.39
Orthogonal | 0.38 | 0.35 | 0.11
Stereographic | 2.45 | 0.78 | 0.61
