Article

Research on the Optimization Method of Visual Sensor Calibration Combining Convex Lens Imaging with the Bionic Algorithm of Wolf Pack Predation

Qingdong Wu, Jijun Miao, Zhaohui Liu and Jiaxiu Chang
1 School of Civil Engineering, Qingdao University of Technology, Qingdao 266520, China
2 Shandong Luqiao Group Co., Ltd., Jinan 250014, China
3 School of Transportation, Shandong University of Science and Technology, Qingdao 266590, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(18), 5926; https://doi.org/10.3390/s24185926
Submission received: 23 July 2024 / Revised: 29 August 2024 / Accepted: 10 September 2024 / Published: 12 September 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

To improve the accuracy of camera calibration, a novel optimization method is proposed in this paper, which combines convex lens imaging with the bionic algorithm of Wolf Pack Predation (CLI-WPP). During the optimization process, the internal parameters and radial distortion parameters of the camera are regarded as the search targets of the bionic algorithm of Wolf Pack Predation, and the reprojection error of the calibration results is used as its fitness evaluation criterion. The goal of optimizing the camera calibration parameters is achieved by iteratively searching for a solution that minimizes the fitness value. To overcome the drawback that the bionic algorithm of Wolf Pack Predation is prone to falling into local optima, a reverse learning strategy based on convex lens imaging is introduced to transform the current optimal individual and generate a series of new individuals with potentially better solutions that differ from the original individual, helping the algorithm escape the local optimum dilemma. The comparative experimental results show that the average reprojection errors of the simulated annealing algorithm, Zhang’s calibration method, the sparrow search algorithm, the particle swarm optimization algorithm, the bionic algorithm of Wolf Pack Predation, and the algorithm proposed in this paper (CLI-WPP) are 0.42986500, 0.28847656, 0.23543161, 0.219342495, 0.10637477, and 0.06615037, respectively. The results indicate that calibration accuracy, stability, and robustness are significantly improved with the optimization method based on the CLI-WPP, in comparison to the existing commonly used optimization algorithms.

1. Introduction

Machine vision technology has widely penetrated many fields, such as industry, agriculture, medicine, and scientific research. In the image processing stage of machine vision, it is often necessary to study the relationship between the 3D geometric position of a point on the surface of a space object and its corresponding point in the image [1] and then build the geometric model of camera imaging. The process of obtaining the critical parameters of this model through experiments and calculation is called camera calibration. In short, camera calibration aims to find a suitable mathematical model for converting from 3D to 2D, i.e., converting real-world scenes into a form amenable to mathematical computation and digital processing.
Currently, the main camera calibration techniques include the direct linear transformation method, the Tsai two-step calibration method, and Zhang Zhengyou’s calibration method [2,3,4,5], among which Zhang’s calibration method has been widely applied in practice. However, it relies on a nonlinear optimization algorithm, which has certain limitations and is prone to falling into local optima, resulting in unsatisfactory accuracy of the calibration results. Therefore, domestic and foreign researchers have introduced intelligent optimization algorithms into camera calibration to eliminate the influence of the initial values and enhance global search capability.
When exploring the application of traditional optimization algorithms in camera calibration techniques, a universal challenge lies in the instability of convergence performance, which often limits the efficiency and accuracy of the calibration process. To address the potential issue of poor convergence in traditional optimization algorithms during camera calibration, several studies [6,7] introduced genetic algorithms into the field of camera parameter calibration. Through complex operations such as gene encoding, decoding, crossover, and mutation, these algorithms aimed to enhance global search capabilities. However, while effective, the complexity of implementation and computational cost of this method cannot be ignored. To simplify the calibration process and accelerate it, the authors of [8,9] proposed a camera intrinsic parameter optimization strategy based on particle swarm optimization (PSO). This algorithm leverages the synergy between particles to approach the optimal solution during iterations. Nevertheless, a potential issue with PSO is the phenomenon of particle convergence, which can significantly reduce the diversity of the solution space, thereby affecting the accuracy and reliability of the calibration results. To tackle the limitation of calibration accuracy when the number of calibration images is limited, the authors of [10] ingeniously combined the advantages of simulated annealing and PSO algorithms. By simulating the temperature drop and equilibrium state search mechanisms in the physical annealing process, this approach aimed to improve the robustness and accuracy of the calibration process. Although this method improved calibration results to some extent, its gradual cooling and multiple iteration strategies also led to a slowdown in the overall convergence speed of the algorithm, increasing execution time. To further overcome the limitations in Zhang’s calibration method, the authors of [11] introduced the sparrow search algorithm (SSA) for camera calibration. The SSA enables information exchange and optimization iteration among individuals by simulating sparrow populations’ foraging and escape behaviors. However, the algorithm’s performance in adaptive adjustment mechanisms and global search capabilities remains to be enhanced, particularly for calibration tasks in complex scenarios.
To enhance the accuracy and efficiency of camera calibration, this paper innovatively introduces the Wolf Pack Predation (WPP) biomimetic algorithm as a calibration method. With its straightforward structure and minimal parameter adjustment requirements, the WPP algorithm significantly simplifies the implementation process and fundamentally reduces the risk of performance fluctuations caused by parameter misconfiguration. The core highlights of the WPP algorithm lie in its built-in adaptive adjustment mechanism and efficient information feedback system, which work in synergy to skillfully balance the algorithm’s local fine optimization and global extensive search capabilities, achieving a leap in calibration accuracy and a notable increase in convergence speed. Furthermore, the WPP algorithm deeply simulates the natural predation behavior of wolf packs, emphasizing the cooperative hunting and strategic guidance of alpha, beta, and delta wolves during the hunting process. This biomimetic design significantly strengthens the algorithm’s dual capabilities of global search and local optimization. When faced with complex camera calibration problems, the WPP algorithm demonstrates superior convergence performance compared to traditional methods, presenting remarkable performance in practical applications and revealing new possibilities in the field of camera calibration.
Additionally, considering the structural characteristics of optical lenses, this paper combines the advantages of the WPP algorithm with the imaging properties of convex lenses, aiming to obtain a more efficient and reliable new approach for optimizing camera calibration parameters. This paper innovatively introduces a reverse learning mechanism based on lens imaging. Specifically, as the WPP algorithm gradually approaches the optimal solution at a certain stage during iterations, we do not stop there but use this “optimal solution” as the starting point for lens imaging simulation. By simulating the refraction and focusing effects of convex lenses on light rays, we construct a virtual imaging process. In this process, the optimal solution is regarded as a light source or object point, and after the transformation effect of the lens, a series of new candidate solutions with expected better performance are formed on the image plane. These candidate solutions are not merely simple variations of the current optimal solution but, shaped by the principles of optical imaging, contain more global information and potential optimization directions. The introduction of these new solutions significantly enriches the search space, providing more exploration paths for the algorithm and enabling the optimization of vision sensor calibration parameters to more effectively escape from local optimal solutions and obtain more accurate and reliable optimization results. A logical diagram illustrating the novelty and advantages of the proposed method is shown in Figure 1.
In summary, the main contributions of this paper are as follows: We innovatively propose the CLI-WPP algorithm, which combines the Wolf Pack Predation biomimetic algorithm with the principles of convex lens imaging. By introducing a reverse learning strategy based on convex lens imaging to improve the algorithm of Wolf Pack Predation, the algorithm’s global optimal solution search capability is further enhanced. In addition, we introduce the CLI-WPP algorithm into the camera calibration algorithm, thus achieving the efficient optimization of camera calibration parameters and improving calibration accuracy. Lastly, the significant effects of the CLI-WPP camera calibration algorithm in enhancing calibration accuracy and efficiency are verified through comparative experiments. This research not only enriches the theoretical system of camera calibration techniques but also provides strong technical support for practical applications.

2. Nonlinear Imaging Model

The basic principle of camera calibration is to accurately convert the coordinates of an object or feature point in 3D space to the corresponding coordinates on a 2D image to obtain the various camera parameters. This process involves four critical coordinate systems: the world coordinate system, the camera coordinate system, the image coordinate system, and the pixel coordinate system [12]. The relationship between the four coordinate systems during the mapping of points in space is shown in Figure 2. In the figure, the world coordinate system is denoted by O_w X_w Y_w Z_w, the camera coordinate system is denoted by O_C X_C Y_C Z_C, the image coordinate system is denoted by O_1 xy, and the image pixel coordinate system is denoted by O_0 uv.
The first transformation converts world coordinates to camera coordinates. Let the coordinates of a point P in the scene be (X_w, Y_w, Z_w) in the world coordinate system and (X_c, Y_c, Z_c) in the camera coordinate system; the relationship between them can be expressed in terms of homogeneous coordinates and matrices, with the rotation matrix R and the translation vector T performing the corresponding transformation, as shown in Equation (1).
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ O^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (1)$$
In the above formula, R is a 3 × 3 orthogonal rotation matrix; T is a 3 × 1 translation vector; O = (0, 0, 0)^T; and the parameters R and T constitute the extrinsic parameters of camera calibration, with $\begin{bmatrix} R & T \\ O^T & 1 \end{bmatrix}$ forming the 4 × 4 extrinsic matrix M_2.
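As a concrete illustration of Equation (1), the following minimal NumPy sketch applies a hypothetical rotation R and translation T (placeholder values, not the calibration results reported in this paper) to a homogeneous world point:

```python
import numpy as np

# Minimal sketch of Equation (1): mapping a homogeneous world point to camera
# coordinates with the 4x4 extrinsic matrix M2 = [[R, T], [O^T, 1]].
# R and T below are placeholder values, not this paper's calibration results.
R = np.eye(3)                                   # hypothetical 3x3 rotation matrix
T = np.array([[10.0], [-5.0], [1000.0]])        # hypothetical 3x1 translation vector (mm)

M2 = np.block([[R, T],
               [np.zeros((1, 3)), np.ones((1, 1))]])   # 4x4 extrinsic matrix

P_w = np.array([100.0, 50.0, 0.0, 1.0])         # homogeneous world point (X_w, Y_w, Z_w, 1)
P_c = M2 @ P_w                                  # homogeneous camera point (X_c, Y_c, Z_c, 1)
print(P_c)                                      # -> [ 110.   45. 1000.    1.]
```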
The second transformation converts camera coordinates to image coordinates [13]. The line between the point P in the scene and the optical center O_c of the camera is O_c P. The intersection of O_c P and the image plane, denoted as p, is the projection of the spatial point P on the image plane. The coordinates of P in the camera coordinate system are (X_c, Y_c, Z_c), the coordinates of the point p in the image coordinate system are (x, y), and the distance O_c O_1 between the optical center and the principal point is the focal length f of the camera. According to Figure 2, the following Equation (2) can be obtained using the principle of similar triangles:
$$\frac{x}{X_c} = \frac{f}{Z_c}, \qquad \frac{y}{Y_c} = \frac{f}{Z_c} \quad (2)$$
The camera coordinates are converted to image coordinates in the form of homogeneous coordinates and matrices, as shown in Equation (3):
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \quad (3)$$
The third transformation converts image coordinates to pixel coordinates. The pixel coordinate system [14] uO_0v is a two-dimensional rectangular coordinate system whose origin O_0 is located in the upper left corner of the image, with the u-axis and v-axis meeting at the origin. The origin of the image coordinate system is located at the intersection of the camera’s optical axis and the image plane (called the principal point), i.e., the center of the image O_1. The x-axis and y-axis are parallel to the u-axis and v-axis, respectively. If the coordinates of O_1 in the pixel coordinate system are (u_0, v_0), and the physical dimensions of each pixel in the x and y directions are d_x and d_y, respectively, then the relationship between the pixel coordinates (u, v) and the image coordinates (x, y) of a point on the image is given by Equation (4):
$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0 \quad (4)$$
The image coordinates are converted to pixel coordinates in the form of homogeneous coordinates and matrices, as shown in Equation (5).
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (5)$$
In this equation, d_x and d_y represent the physical sizes of pixels in the x-axis and y-axis directions, respectively, while (u_0, v_0) is the coordinate of the principal point, which is also known as the image origin.
By combining Equations (1), (3), and (5), the relationship between the point p in the image pixel coordinate system and the world coordinate point P can be calculated:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ O^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 & 0 \\ 0 & a_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ O^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_1 M_2 X_W = M X_W \quad (6)$$
where a_x = f/d_x and a_y = f/d_y are called the scale factors of the u-axis and v-axis; they express the focal length in pixel units along the x-direction and y-direction, respectively. M_1 is the internal parameter matrix of the camera, M_2 is the external parameter matrix of the camera, and M is the projection matrix.
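The full projection chain of Equation (6) can be sketched in a few lines of NumPy; the intrinsic and extrinsic values used here are illustrative placeholders rather than the parameters obtained later in the experiments:

```python
import numpy as np

# Minimal sketch of Equation (6): projecting a homogeneous world point to pixel
# coordinates via the intrinsic matrix M1 and the extrinsic matrix M2.
# All parameter values below are illustrative placeholders.
ax, ay = 3300.0, 3320.0            # scale factors a_x = f/d_x, a_y = f/d_y (pixels)
u0, v0 = 580.0, 430.0              # principal point (pixels)

M1 = np.array([[ax, 0.0, u0, 0.0],
               [0.0, ay, v0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])              # 3x4 intrinsic (projection) matrix

R = np.eye(3)                                      # hypothetical extrinsics
T = np.array([[0.0], [0.0], [1000.0]])
M2 = np.block([[R, T], [np.zeros((1, 3)), np.ones((1, 1))]])

X_W = np.array([30.0, 60.0, 0.0, 1.0])             # homogeneous world point (mm)
s_uv = M1 @ M2 @ X_W                               # = Z_c * (u, v, 1)
u, v = s_uv[0] / s_uv[2], s_uv[1] / s_uv[2]        # divide by Z_c to get pixels
print(u, v)                                        # -> 679.0 629.2
```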
The above formula describes the imaging result without any interference. In practice, there is a certain degree of deviation between the actual imaging position of an object or feature point and the ideal imaging position, and this deviation is caused by the camera’s nonlinear optical distortion. Figure 3 shows the positional shift due to distortion.
As can be seen from Figure 3, point P in the world coordinate system has an ideal imaging point p(x, y) in the image coordinate system. However, because of camera distortion, the actual imaging position is shifted to p̄(x̄, ȳ). To bring the point shifted by camera distortion as close as possible to its ideal mapped position, the distortion coefficients of the camera must be corrected. Zhang Zhengyou’s calibration method only considers radial distortion, which has the greatest influence on the model, and does not consider tangential distortion. The radial distortion coefficients can be derived from the numerical relationship between the real and ideal coordinates; see Equation (7).
The real image coordinates (x̄, ȳ) are related to the ideal image coordinates (x, y) by the following equation (a Taylor expansion truncated after the first two radial terms):
$$\bar{x} = x + x\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right], \qquad \bar{y} = y + y\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right] \quad (7)$$
By substituting Equation (4) into Equation (7), we obtain the relationship between the actual pixel coordinates (ū, v̄) and the ideal pixel coordinates (u, v) as follows:
$$\bar{u} = u + (u - u_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right], \qquad \bar{v} = v + (v - v_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right] \quad (8)$$
Rearranged into matrix form, the radial distortion coefficients satisfy
$$\begin{bmatrix} (u - u_0)(x^2 + y^2) & (u - u_0)(x^2 + y^2)^2 \\ (v - v_0)(x^2 + y^2) & (v - v_0)(x^2 + y^2)^2 \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} = \begin{bmatrix} \bar{u} - u \\ \bar{v} - v \end{bmatrix} \quad (9)$$
where (x̄, ȳ) denotes the actual image coordinates after distortion, and (ū, v̄) denotes the actual pixel coordinates after distortion; (x, y) are the image coordinates in the ideal case without distortion, and (u, v) are the corresponding pixel coordinates in the ideal case without distortion; k_1 and k_2 represent the radial distortion coefficients.
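For illustration, Equation (7) can be applied directly to a pair of ideal image coordinates; the coordinates and coefficients below are arbitrary placeholder values:

```python
# Minimal sketch of Equation (7): applying the two radial distortion terms
# k1 and k2 to ideal (undistorted) image coordinates.  The coordinate and
# coefficient values are illustrative only.
def apply_radial_distortion(x, y, k1, k2):
    r2 = x ** 2 + y ** 2
    factor = k1 * r2 + k2 * r2 ** 2
    return x + x * factor, y + y * factor

x_bar, y_bar = apply_radial_distortion(0.05, -0.03, k1=-0.017, k2=0.38)
print(x_bar, y_bar)
```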
In summary, the coordinate transformations in camera calibration can be summarized as follows: The world coordinates are converted to camera coordinates to determine the external parameters of camera calibration, i.e., the translation vector T and the rotation matrix R. Then, the camera coordinates are converted to image coordinates to obtain the focal length f of the camera, and the image coordinates are converted to pixel coordinates to determine the coordinates of the camera’s principal point (u_0, v_0). Lastly, the ideal coordinates are converted to actual coordinates to determine the camera’s distortion coefficients k_1 and k_2; thus, camera calibration is completed. The coordinate transformation relationship is shown in Figure 4.

3. Principle of the Bionic Algorithm of Wolf Pack Predation (WPP)

The bionic algorithm of Wolf Pack Predation (WPP) is a pack intelligence optimization algorithm inspired by the social hierarchical stratification and predation behavior in wolf packs [15]. In wolf society, there is a strict hierarchy in which the α wolf is at the highest rank and is responsible for overall decision-making; the β wolf follows closely behind, assisting the α wolf and directing other members; the δ wolf is at the third rank, carrying out orders and accountable for scouting; and the ω wolf is at the lowest rank and is responsible for balancing the intra-population relationships. In the WPP algorithm, each wolf is regarded as a candidate solution, representing possible solutions to the optimization problem. Among them, the α wolf corresponds to the optimal solution, the β wolf corresponds to the second-best solution, and the δ wolf corresponds to the third-best solution. In contrast, the other wolves are regarded as ω solutions. These groups of wolves collaborate to search for and track their prey, usually with the α wolf leading the β wolf and the δ wolf in a siege. If the prey escapes, the other wolves continue the pursuit until the prey is captured. The WPP algorithm simulates this behavioral pattern of wolf packs and uses the collaborative strategy of wolf packs to solve various optimization problems.
The mathematical model of a wolf pack encircling its prey is as follows:
$$D = \left| C \cdot X_p(t) - X(t) \right| \quad (10)$$
$$X(t + 1) = X_p(t) - A \cdot D \quad (11)$$
Equation (10) determines the relative displacement D between the individual and the prey, and Equation (11) is used to update the wolf’s position so as to track the prey. In the above equations, X and X_p are the position vectors of the wolf individual and the prey, respectively; t is the current iteration number; and A and C are coefficient vectors, which are determined as follows:
$$A = 2a \cdot r_1 - a \quad (12)$$
$$C = 2 r_3 - a \quad (13)$$
where r_1 is a random number in the range [0, 1] [16]; r_3 is a random number in the range [0.5, 1.5]; and a is a convergence factor that decreases linearly from 2 to 0 as the number of iterations increases.
The mathematical model of a wolf attacking its prey is as follows:
$$X_1 = X_\alpha - A_1 \cdot \left| C_1 \cdot X_\alpha - X \right| \quad (14)$$
$$X_2 = X_\beta - A_2 \cdot \left| C_2 \cdot X_\beta - X \right| \quad (15)$$
$$X_3 = X_\delta - A_3 \cdot \left| C_3 \cdot X_\delta - X \right| \quad (16)$$
$$X(t + 1) = \frac{X_1(t) + X_2(t) + X_3(t)}{3} \quad (17)$$
where X_α, X_β, and X_δ are the position vectors of the α, β, and δ wolves, respectively; A_1, A_2, and A_3 are coefficient vectors of the form given in Equation (12); and C_1, C_2, and C_3 are coefficient vectors of the form given in Equation (13).
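The following sketch condenses Equations (10)–(17) into a single position update, assuming that the convergence factor a decreases linearly from 2 to 0 over the iterations and that Equation (13) takes the form reconstructed above (with r_3 drawn from [0.5, 1.5]); it is an illustrative reading of the model rather than the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def leader_pull(X, X_lead, a):
    """Candidate move toward one leader wolf, Equations (10)-(16)."""
    dim = X.shape[0]
    r1 = rng.uniform(0.0, 1.0, dim)     # r1 in [0, 1]
    r3 = rng.uniform(0.5, 1.5, dim)     # r3 in [0.5, 1.5]
    A = 2.0 * a * r1 - a                # Equation (12)
    C = 2.0 * r3 - a                    # Equation (13), as reconstructed above
    D = np.abs(C * X_lead - X)          # Equation (10)
    return X_lead - A * D               # Equations (14)-(16)

def wpp_update(X, X_alpha, X_beta, X_delta, t, max_iter):
    """Equation (17): the new position is the mean of the three leader-guided moves."""
    a = 2.0 * (1.0 - t / max_iter)      # convergence factor, decreasing from 2 to 0
    return (leader_pull(X, X_alpha, a) +
            leader_pull(X, X_beta, a) +
            leader_pull(X, X_delta, a)) / 3.0

# Example: one update of a wolf in a 6-dimensional search space
X_new = wpp_update(np.zeros(6), np.ones(6), np.ones(6), np.ones(6), t=10, max_iter=200)
```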

4. CLI-WPP

Addressing the limitation of the bionic algorithm of Wolf Pack Predation in escaping from local optima, we adopted an improved approach. Firstly, a reverse learning strategy based on convex lens imaging technology is introduced to enhance performance. Then, utilizing this strategy, we transform the current optimal individual obtained by the bionic algorithm of Wolf Pack Predation, generating a series of new individuals that are different from the original ones but possess potentially better solutions. These new individuals not only expand the search space but also assist the algorithm in escaping from the confinement of local optima. Ultimately, this strategy enables the algorithm to explore global optimal solutions further.

4.1. The Law of Imaging by a Convex Lens

The law of imaging by a convex lens [17] is a fundamental principle in optics, which states that when an object is placed beyond its focal length, an inverted image will be formed on the other side. This principle is illustrated in Figure 5.
The formula for the imaging of a convex lens can be derived from Figure 5 as follows:
$$\frac{1}{m} + \frac{1}{n} = \frac{1}{f} \quad (18)$$
where m is the object distance, n is the image distance, and f is the focal length of the lens.
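For example, if the object is placed at twice the focal length, m = 2f, then 1/(2f) + 1/n = 1/f gives n = 2f; that is, a real, inverted image of the same size is formed at twice the focal length on the other side of the lens.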

4.2. Reverse Learning Strategies for Convex Lens Imaging Techniques

As shown in Figure 6, taking one-dimensional space as an example, the search range of the solution on the x-axis is [a, b], and the convex lens is placed on the y-axis at the center of the search range. Suppose there is an object P whose projection on the x-axis is x* and whose height is h. A real image P′ can be obtained by imaging through the convex lens; P′ is projected on the x-axis as x*′, with height h′. The point x*′ can thus be regarded as the inverse of the individual x*.
In Figure 6, the globally optimal individual x* is used to determine its corresponding inverse point x*′ with O as the base point, which can be derived from the convex lens imaging principle as follows:
$$\frac{\dfrac{a + b}{2} - x^*}{x^{*\prime} - \dfrac{a + b}{2}} = \frac{h}{h'} \quad (19)$$
Let h/h′ = k. Equation (19) can then be rearranged to obtain the formula for the reverse learning solution x*′ based on convex lens imaging:
$$x^{*\prime} = \frac{a + b}{2} + \frac{a + b}{2k} - \frac{x^*}{k} \quad (20)$$
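Equation (20) reduces to a one-line transform; the following minimal sketch (with arbitrary example values) shows that, for k = 1, the reverse solution is simply the reflection of x* about the center of the search range:

```python
# Minimal sketch of Equation (20): the convex-lens-imaging reverse solution of a
# one-dimensional individual x_star within the search range [a, b], with k = h/h'.
def lens_reverse(x_star, a, b, k):
    mid = (a + b) / 2.0
    return mid + mid / k - x_star / k

# With k = 1, the reverse solution is the reflection about the midpoint of [a, b]:
print(lens_reverse(8.0, 0.0, 10.0, 1.0))   # -> 2.0
```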

4.3. Camera Calibration Using CLI-WPP

The bionic algorithm of WPP demonstrates high speed and high solution quality when dealing with low-dimensional functions. However, when the α, β, and δ wolves fall into a local optimum, a large number of wolves gather at that location, which makes it difficult for the WPP algorithm to move away from the local optimum. For four-dimensional or more complex search tasks, such as camera calibration, the search efficiency of the WPP algorithm is severely weakened, and its accuracy can also be drastically reduced.
By leveraging convex lens imaging techniques and adopting a reverse learning strategy, we create a series of new individuals that differ from the original but may outperform it. These new individuals not only broaden the search horizon but also empower the algorithm to escape the confines of local optima and delve deeper into the prospects of a globally optimal solution. As a result, we integrate the inverse learning strategy of convex lens imaging with the bionic algorithm of Wolf Pack Predation (CLI-WPP) to streamline and enhance the calibration process. The implementation steps of this improved WPP algorithm are outlined in Figure 7.
The specific steps are as follows:
Step 1. Read the image.
Step 2. Based on Zhang Zhengyou’s calibration technique, using MATLAB R2016b software, calculate the initial parameters of the camera to be tested, including the internal parameters (a_x, a_y, u_0, v_0) and the distortion parameters (k_1, k_2), as well as the external parameters (R, T). The pre-calibrated internal parameters and distortion parameters are used as the initial values for the convex lens imaging reverse learning strategy and the bionic algorithm of Wolf Pack Predation.
Step 3. Initialize the scaling factor k; the iteration count t; the maximum iteration count MaxIter; and the values of the parameters A, a, and C.
Step 4. Construct the objective function using the average reprojection error as follows:
$$f(X_i) = F(a_x, a_y, u_0, v_0, k_1, k_2) = \frac{1}{m}\sum_{i=1}^{m}\sqrt{(\bar{u}_i - u_i)^2 + (\bar{v}_i - v_i)^2} \quad (21)$$
In the above formula, m is the number of checkerboard corner points, (ū, v̄) are the image pixel coordinates extracted by the corner detection algorithm, and (u, v) are the image pixel coordinates obtained after projecting the 3D points using the camera model. The goal of the optimization is to make the difference (i.e., the reprojection error) between the image pixel coordinates (ū, v̄) extracted by the corner detection algorithm and the image pixel coordinates (u, v) obtained by projecting the 3D points through the camera model as small as possible. A code sketch of this objective, together with the overall search loop, is given after the step list below.
Step 5. Record the current iteration count; calculate the fitness value f ( X i ) of each individual; record the fitness values and individual positions of the α wolf, the β wolf, and the δ wolf for this iteration; and update the positions of the current wolves.
Step 6. Apply the convex lens imaging reverse learning strategy to X_α based on Equation (20) to generate the reverse solution X_α′, as shown in Equations (22) and (23).
$$X_\alpha = (x_1, x_2, \ldots, x_j), \quad j = 1, 2, \ldots, D \quad (22)$$
$$x_j' = \frac{a_j + b_j}{2} + \frac{a_j + b_j}{2k} - \frac{x_j}{k} \quad (23)$$
In these equations, x_j and x_j′ represent the jth dimensional components of X_α and X_α′, respectively, while a_j and b_j represent the jth dimensional components of the lower and upper bounds of the decision variables. If f(X_α′) < f(X_α), then the reverse solution X_α′ replaces X_α; otherwise, proceed directly to Step 7.
Step 7. Update the values of the parameters A, a, and C.
Step 8. Check whether the maximum number of iterations has been reached. If so, output the optimal solution X_α; the result corresponding to X_α gives the required camera calibration parameters. Otherwise, increase the iteration count and return to Step 5.
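The following condensed sketch strings Steps 3–8 together. Everything in it is illustrative: the reproject() callable (assumed to map a candidate parameter vector to predicted corner pixel coordinates through the camera model of Section 2), the scaling factor k, and the boundary handling are placeholders rather than the authors' exact implementation, and lower/upper are NumPy arrays holding the per-parameter search boundaries of Section 5.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_reprojection_error(params, detected_uv, reproject):
    """Equation (21): mean distance between detected and reprojected corners.
    `reproject(params)` is assumed to return an (m, 2) array of predicted pixels."""
    return np.mean(np.linalg.norm(detected_uv - reproject(params), axis=1))

def cli_wpp(fitness, lower, upper, n_wolves=50, max_iter=200, k=1e4):
    """Condensed sketch of Steps 3-8 of the CLI-WPP search."""
    dim = lower.size
    wolves = rng.uniform(lower, upper, size=(n_wolves, dim))
    best, best_fit = None, np.inf
    for t in range(max_iter):
        fit = np.array([fitness(w) for w in wolves])
        order = np.argsort(fit)
        x_a = wolves[order[0]].copy()           # alpha wolf (current best)
        x_b = wolves[order[1]].copy()           # beta wolf
        x_d = wolves[order[2]].copy()           # delta wolf
        # Step 6: convex-lens-imaging reverse learning on the alpha wolf, Eqs. (22)-(23).
        mid = (lower + upper) / 2.0
        x_rev = np.clip(mid + mid / k - x_a / k, lower, upper)
        if fitness(x_rev) < fit[order[0]]:
            x_a = x_rev
        if fitness(x_a) < best_fit:
            best, best_fit = x_a.copy(), fitness(x_a)
        # Steps 5 and 7: wolf-pack position update, Eqs. (10)-(17).
        a = 2.0 * (1.0 - t / max_iter)
        for i in range(n_wolves):
            new = np.zeros(dim)
            for lead in (x_a, x_b, x_d):
                A = 2.0 * a * rng.uniform(0.0, 1.0, dim) - a
                C = 2.0 * rng.uniform(0.5, 1.5, dim) - a
                new += lead - A * np.abs(C * lead - wolves[i])
            wolves[i] = np.clip(new / 3.0, lower, upper)
    return best, best_fit
```

In practice, fitness would be a closure over mean_reprojection_error built from the detected checkerboard corners and the projection model of Section 2.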

5. Experiment and Analysis

To validate the performance of the improved algorithm, a camera calibration experiment was conducted using the MATLAB calibration toolbox. The experiment utilized a binocular camera (Brand: Pixel XYZ, model: D-455B, Pixel Leap (Wuhan) Technology Co., Ltd., Wuhan, China) to capture images. During the experiment, the camera was securely mounted on a stable stand to minimize the impact of vibrations on image quality. The chessboard calibration plate employed a standard 9 × 12 black-and-white grid design, with each grid measuring 30 × 30 mm, to ensure adequate feature points for algorithm recognition. During the photography process, we captured thirty photos featuring the chessboard grid in the same scene, covering various angles and camera positions to test the robustness of our method thoroughly. Although all pictures were taken indoors, the indoor calibration environment did not prevent the calibrated camera from being applied to outdoor real-world scenes, as illustrated in Figure 8. Pre-calibration was performed using the MATLAB calibration toolbox, and the calibration results for internal and distortion parameters are listed in Table 1. The calibration results for external parameters are presented in Table 2. The spatial positional relationship between the left and right cameras and each checkerboard image is illustrated in Figure 9.
The internal parameters and distortion parameters of the left and right cameras are a_x, a_y, u_0, v_0, k_1, and k_2, and these results were used as the initial values for the CLI-WPP. The histogram of the reprojection errors of the left and right cameras after preliminary calibration is shown in Figure 10. The preliminary calibration was accomplished using Zhang’s calibration method, and its reprojection error was relatively high, at 0.42986500. This relatively high value was primarily attributed to the use of all images captured during the experiment, which inevitably included some images of poor quality due to factors such as occlusion and blur. These images introduced additional errors into the calibration process, thereby affecting the reprojection error. To obtain more accurate camera parameters and reduce the reprojection error while ensuring that the number of images still meets the requirements, we implemented a rigorous screening of the images. The screening was based on Figure 10: images with large reprojection errors in the histogram were eliminated, aiming to achieve more precise camera parameters and reprojection errors with Zhang’s calibration method. As listed in Table 5, after screening and recalibration, we achieved a lower reprojection error of 0.28847656, indicating that the camera parameters were estimated with greater accuracy.
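A minimal sketch of this screening step is given below; the per-image error values and the threshold are illustrative placeholders, not the values used in the experiment:

```python
import numpy as np

# Minimal sketch of the screening step described above: drop the calibration images
# whose per-image mean reprojection error (from the preliminary calibration) exceeds
# a chosen threshold, then recalibrate with the remaining images.
per_image_error = np.array([0.21, 0.35, 0.88, 0.27, 1.12, 0.30])   # placeholder values
threshold = 0.5                                                     # placeholder threshold
kept = np.where(per_image_error < threshold)[0]
print("images kept for recalibration:", kept)                       # -> [0 1 3 5]
```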
Through several experiments, the smallest average reprojection error values were obtained when the parameters of the left and right cameras were optimized and calibrated based on the CLI-WPP algorithm, as shown by the data listed in Table 3 and Table 4, respectively.

6. Result Analysis and Discussion

To evaluate the accuracy of the algorithm proposed in this paper, the calibration optimization method (CLI-WPP) was compared and analyzed against camera calibration methods based on simulated annealing (SA), Zhang’s calibration method, the sparrow search algorithm (SSA), particle swarm optimization (PSO), and the bionic algorithm of Wolf Pack Predation (WPP). A controlled variable method was adopted to avoid the uncertainty caused by optimizing too many parameters simultaneously: the camera’s external parameters were fixed, while the internal and distortion parameters were optimized. After obtaining the initial values of the internal parameters and distortion parameters, they were extended in the following manner to create search boundaries for each parameter [18]: [a_x, a_y, u_0, v_0] ± 100 and [k_1, k_2] ∈ [−1, 1]. In the experiment, the population size was set to 50, and the maximum number of iterations was 200.
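For reference, the search boundaries described above can be assembled as follows, using the left-camera pre-calibration values of Table 1 as the starting point:

```python
import numpy as np

# Search boundaries as described above: the pre-calibrated intrinsics may vary by
# +/-100, and the radial distortion terms are searched within [-1, 1].
# The initial values are the left-camera pre-calibration results of Table 1.
init = np.array([3322.09, 3325.41, 610.02, 443.97])     # a_x, a_y, u_0, v_0
lower = np.concatenate([init - 100.0, [-1.0, -1.0]])    # lower bounds incl. k_1, k_2
upper = np.concatenate([init + 100.0, [1.0, 1.0]])      # upper bounds incl. k_1, k_2
```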
After optimization with the algorithms above, the calibrated parameters for the left camera were determined, which are listed in Table 5. Based on these parameters, the minimum average reprojection error value f_err was obtained.
Table 5. Optimization results of various algorithms for the left camera’s parameters.
Parameters    SA            Zhang's       SSA           PSO           WPP           CLI-WPP
a_x           3322.09000    3305.65809    3363.40540    3303.18672    3300.95863    3297.40374
a_y           3325.41000    3323.02436    3400.48806    3308.39923    3318.14506    3321.59384
u_0           610.020000    601.474397    578.705639    576.638947    580.395389    579.399989
v_0           443.970000    434.541164    432.936919    432.069792    429.931569    430.634941
k_1           0.11770000    −0.09525949   −0.99725691   0.41377230    −0.05267407   −0.01744174
k_2           1.12440000    0.32817828    −0.29592964   −1.00000000   0.56781038    0.38125975
f_err         0.42986500    0.28847656    0.23543161    0.21934249    0.10637477    0.06615037
As seen from the data in Table 5, the mean reprojection error of SA is 0.42986500, the same as the pre-calibration value in MATLAB. The mean reprojection errors of Zhang’s calibration method, the SSA, and PSO are 0.28847656, 0.23543161, and 0.219342495, respectively, all showing a decrease compared to the pre-calibration values. The mean reprojection error of the WPP algorithm is reduced to 0.10637477 after optimization, demonstrating the effectiveness of the WPP algorithm in improving calibration accuracy. When using the CLI-WPP algorithm, the mean reprojection error is significantly reduced to 0.06615037, indicating the superior performance of the CLI-WPP algorithm in camera calibration.
The performance differences between different methods can be visually observed by comparing the reprojection error distributions shown in Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16. Figure 11 represents the calibration results of the SA, which has a mean reprojection error of 0.42986500, the same as the pre-calibration value in MATLAB. By comparing Figure 12 with Figure 11, it can be seen that after removing images with larger reprojection errors, the reprojection error obtained by Zhang’s calibration method is significantly reduced. Comparing Figure 12 with Figure 13 and Figure 14, it can be observed that PSO and the SSA tend to get stuck in local optimal solutions during the optimization process, and the deviations between local optimal solutions are relatively large. When comparing Figure 15 with Figure 11 and Figure 12, it can be seen that the reprojection error distribution after WPP optimization is more concentrated than that of SA and Zhang’s calibration method, indicating a further improvement in calibration accuracy. By comparing Figure 16 with Figure 15, it can be seen that the CLI-WPP algorithm significantly improves optimization performance compared to the WPP algorithm. As shown in Figure 16, the reprojection error distribution density after CLI-WPP optimization is the smallest, demonstrating the high accuracy of its calibrated parameters.
Figure 17 depicts the changes in the objective value during the optimization process of camera calibration for the SA, SSA, PSO, WPP, and CLI-WPP. It can be observed that the camera calibration method based on SA (represented by the green line) has poor iterative ability and remains in its initial state. The camera calibration method based on the SSA (blue line) has a slow iterative speed and falls into a local optimum after the 100th iteration, with no further improvement. The camera calibration method based on PSO (red line) performs well initially but stabilizes after 65 iterations. The calibration method based on the WPP algorithm (black line) stabilizes after approximately 60 iterations. By contrast, the camera calibration method based on the CLI-WPP (dashed line) stabilizes after only 50 iterations and achieves the smallest reprojection error. The CLI-WPP algorithm converges faster, with a more pronounced step-down in the objective value, indicating that it is more effective in escaping local optima during the search process and finds the global optimum faster.
In contrast, the WPP algorithm requires more iterations to reach a stable state, which may suggest that it encounters local optima during the search process. It also requires more iterations to escape these local optimal regions. Through comparative experiments, it is evident that the CLI-WPP algorithm can effectively handle local optima issues, escape local optima, find the global optimum, and achieve significantly improved convergence efficiency while being more robust and reliable. Furthermore, as shown in Figure 17, the average reprojection error obtained with the CLI-WPP algorithm is 0.034 smaller than that obtained with the WPP algorithm, indicating that the CLI-WPP calibration method has superior accuracy.
To evaluate the robustness of the proposed algorithm in this paper, Gaussian noise with a mean of 0 and variances of 0.02, 0.04, 0.06, 0.08, and 0.10 was added to the 30 captured images of a checkerboard calibration board. The simulated annealing (SA) algorithm, Zhang’s calibration method, sparrow search algorithm (SSA), particle swarm optimization (PSO), the bionic algorithm of Wolf Pack Predation (WPP), and the proposed method in this paper were used to conduct camera calibration experiments in turn. The calibration results were averaged over 50 independent experiments for each specified variance value. As shown in Figure 18, a coordinate system was established with different Gaussian noise values on the x-axis and the average reprojection error on the y-axis. It can be intuitively observed that as the noise increases, the calibration accuracy of all six methods decreases. Still, the decline using the proposed method is less than that of the other methods. This indicates that the camera calibration method based on the CLI-WPP exhibits excellent robustness within a specific range of noise variations.
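A minimal sketch of this noise-injection step is given below, assuming grayscale images scaled to [0, 1]; the dummy image and the commented calibration step are illustrative placeholders only:

```python
import numpy as np

# Minimal sketch of the robustness test: add zero-mean Gaussian noise of a given
# variance to each calibration image (assumed grayscale, scaled to [0, 1]) before
# re-running each calibration method.  The image below is a dummy placeholder.
rng = np.random.default_rng(0)

def add_gaussian_noise(image, variance):
    noisy = image + rng.normal(0.0, np.sqrt(variance), size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

image = np.full((480, 640), 0.5)                 # placeholder for a calibration image
for variance in (0.02, 0.04, 0.06, 0.08, 0.10):
    noisy = add_gaussian_noise(image, variance)
    # ... run each calibration method on the noisy image set and average the
    #     reprojection error over 50 independent runs, as described above.
```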

7. Conclusions

Addressing the issues of insufficient accuracy and poor robustness in current standard camera calibration methods, we proposed the CLI-WPP for camera calibration. Firstly, the basic principles of camera calibration, the working principle of WPP, and the CLI-WPP algorithm were elaborated. By applying the CLI-WPP algorithm to optimize the internal parameters of the camera, high-precision camera calibration was achieved.
To verify the effectiveness of the CLI-WPP algorithm, we conducted a series of experimental comparisons with the SA, SSA, PSO, and WPP methods. The experimental results show that the CLI-WPP-based camera calibration method significantly improves accuracy and robustness compared to other optimization algorithms. By enhancing the accuracy and robustness of camera calibration, the proposed method in this paper provides more effective means and approaches for camera calibration research and practical applications.
While this study has achieved remarkable results in demonstrating the effectiveness of the CLI-WPP algorithm through the critical indicator of reprojection error, we also acknowledge the limitations and room for improvement of this method. Firstly, the current evaluation system primarily relies on the single metric of reprojection error, which may not fully capture the precision and performance of the calibration experiments. To address this limitation, we plan to incorporate more diversified evaluation metrics in our future research, aiming to establish a more comprehensive and robust framework for assessing the precision of calibration experiments.
Furthermore, we recognize that the complexity of application scenes often influences the precision and performance of calibration experiments. Therefore, exploring evaluation methods for calibration experiment precision in different application scenes will be an essential direction for future research. This will help us gain a deeper understanding of the performance of calibration algorithms in practical applications and provide strong support for their optimization.
In summary, we anticipate that future research will introduce more diversified evaluation metrics; explore assessment methods for various application scenes; and continuously optimize the CLI-WPP algorithm to establish a more comprehensive, accurate, and adaptable calibration experiment precision evaluation system. This will further develop camera calibration technology and provide more solid technical support for practical applications in related fields.

Author Contributions

Conceptualization, Q.W., J.M., Z.L. and J.C.; methodology, Q.W., J.M. and Z.L.; software, Z.L.; validation, Q.W. and Z.L.; formal analysis, J.M.; investigation, Q.W.; resources, Q.W.; data curation, Q.W. and Z.L.; writing—original draft preparation, Q.W. and Z.L.; writing—review and editing, J.M. and Z.L.; visualization, Z.L. and J.C.; supervision, J.M.; project administration, Q.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author due to privacy concerns.

Acknowledgments

We would like to thank all the reviewers for their help in improving and refining the paper.

Conflicts of Interest

Author Qingdong Wu was employed by Shandong Luqiao Group Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Zhu, D.; Cheng, W.; Zhang, Y.; Liu, H. Multi-camera joint calibration algorithm for guiding machining robot positioning considering ambient light and error uncertainty. Opt. Lasers Eng. 2024, 178, 108251. [Google Scholar] [CrossRef]
  2. Liu, J.; He, P.; Hou, X.; Xiao, J.; Wen, L.; Lv, M. A Target-free Vision-based Method for Out-of-plane Vibration Measurement Using Projection Speckle and Camera Self-calibration Technology. Eng. Struct. 2024, 303, 117416. [Google Scholar]
  3. Ge, J.; Deng, Z.; Li, Z.; Liu, T.; Zhuo, R.; Chen, X. Adaptive parameter optimization approach for robotic grinding of weld seam based on laser vision sensor. Robot. Comput.-Integr. Manuf. 2023, 82, 102540. [Google Scholar] [CrossRef]
  4. Che, M.; Duan, Z.; Lan, Z.; Yi, S. Scene Reconstruction Algorithm for Unstructured Weak-Texture Regions Based on Stereo Vision. Appl. Sci. 2023, 13, 6407. [Google Scholar] [CrossRef]
  5. Li, S.; Yoon, H.S. Enhancing Camera Calibration for Traffic Surveillance with an Integrated Approach of Genetic Algorithm and Particle Swarm Optimization. Sensors 2024, 24, 1456. [Google Scholar] [CrossRef]
  6. Fu, W.; Wu, L. Camera Calibration Method Based on Improved Differential Evolution Particle Swarm Optimization. Meas. Control. Technol. 2023, 56, 27–33. [Google Scholar] [CrossRef]
  7. Lu, X.; Tang, X. Visual Localization of Workpieces Based on Evolutionary Algorithm and Its Application in Industrial Robot. In Proceedings of the 2023 3rd International Conference on Intelligent Technologies (CONIT), Hubli, India, 23–25 June 2023; pp. 1–5. [Google Scholar]
  8. Ajjah, H.A.; Alboabidallah, A.H.; Mohammed, M.U.; Hussain, A.A. Modelling and Estimation of Interior Orientation of Non-Metric Cameras using Artificial Intelligence. J. Tech. 2023, 5, 39–45. [Google Scholar] [CrossRef]
  9. Xu, C.; Liu, Y.; Xiao, Y.; Cao, J. Camera Intrinsics Optimization Method Based on Improved Particle Swarm Optimization Algorithm. Laser Optoelectron. Prog. 2020, 57, 041514. [Google Scholar]
  10. To.Sriwong, K.; Paripurana, S.; Vanichchanunt, P. Study of Particle Swarm Optimization Parameter Tuning for Camera Calibration. In Proceedings of the 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), Jeju, Republic of Korea, 25–28 June 2023; pp. 1–6. [Google Scholar]
  11. Shang, H.; Ni, S.; Su, Z. An Optimization Method for Visual Sensor Calibration Based on an Improved Sparrow Search Algorithm. Comput. Technol. Dev. 2023, 3, 146–151, 160. [Google Scholar]
  12. Daviran, M.; Ghezelbash, R.; Maghsoudi, A. WPPKM: A novel hybrid optimization algorithm for geochemical anomaly detection based on grey wolf optimizer and K-means clustering. Geochemistry 2024, 84, 126036. [Google Scholar] [CrossRef]
  13. Zhu, H. Research on Intelligent Driving Vehicle Recognition and Distance Measurement Based on Machine Vision. Electron. Prod. 2021, 8, 49–50, 75. [Google Scholar]
  14. Greer, R.; Gopalakrishnan, A.; Keskar, M.; Trivedi, M. Patterns of vehicle lights: Addressing complexities of camera-based vehicle light datasets and metrics. Pattern Recognit. Lett. 2024, 178, 209–215. [Google Scholar] [CrossRef]
  15. Wang, Y.; Qin, X.; Jia, W.; Lei, J.; Wang, D.; Feng, T.; Zeng, Y.; Song, J. Multiobjective Energy Consumption Optimization of a Flying–Walking Power Transmission Line Inspection Robot during Flight Missions Using Improved NSGA-II. Appl. Sci. 2024, 14, 1637. [Google Scholar] [CrossRef]
  16. Khishe, M.; Azar, O.P.; Hashemzadeh, E. Variable-length CNNs evolved by digitized chimp optimization algorithm for deep learning applications. Multimed. Tools Appl. 2024, 83, 2589–2607. [Google Scholar] [CrossRef]
  17. Yu, F.; Guan, J.; Wu, H.; Chen, Y.; Xia, X. Lens imaging opposition-based learning for differential evolution with Cauchy perturbation. Appl. Soft Comput. 2024, 152, 111211. [Google Scholar] [CrossRef]
  18. Ren, J.; Cao, Z. Camera Calibration Optimization Method Based on Improved Wind-Driven Optimization Algorithm. Comput. Eng. Des. 2021, 42, 942–948. [Google Scholar]
Figure 1. The logic diagram showing the novelty and advantages of the proposed method.
Figure 2. The relationship between the four coordinate systems.
Figure 3. Position shift phenomenon due to distortion.
Figure 4. Relationship of coordinate transformation in camera calibration.
Figure 5. Schematic diagram of convex lens imaging of light.
Figure 6. The reverse learning strategy for the convex lens imaging technique.
Figure 7. Flowchart of the optimized camera calibration method using the CLI-WPP.
Figure 8. Calibration plate image acquisition.
Figure 9. The 3D model diagram of the external parameters of the camera.
Figure 10. Binocular camera reprojection error graph. The horizontal coordinates indicate the 30 images used for calibrating the left and right cameras, and the vertical coordinates indicate the value of the reprojection error.
Figure 11. The reprojection error distribution of SA.
Figure 12. The reprojection error distribution of Zhang’s calibration method.
Figure 13. The reprojection error distribution of PSO.
Figure 14. The reprojection error distribution of the SSA.
Figure 15. The reprojection error distribution of WPP.
Figure 16. The reprojection error distribution of the CLI-WPP.
Figure 17. The objective value optimization process.
Figure 18. Average calibration results under different levels of noise intensity.
Table 1. Calibration results for internal parameters and distortion parameters of the left and right cameras.
Parameter       Left Camera    Right Camera
a_x (pixel)     3322.09000     3314.01000
a_y (pixel)     3325.41000     3314.72000
u_0 (pixel)     610.020000     619.060000
v_0 (pixel)     443.970000     405.520000
k_1             0.11770000     0.04220000
k_2             1.12440000     0.41570000
Table 2. Calibration results of the external parameters of the left and right cameras.
Parameter            Result
Translation Matrix   [−203.505423, −1.12304825, 8.51035436]
Rotation Matrix      [0.99975432, 0.00493237, 0.01674298]
                     [0.00513486, 0.99989998, 0.01275354]
                     [0.01664542, 0.01294541, 0.99984561]
Table 3. The optimized calibration results of the internal parameters of the left camera based on the CLI-WPP algorithm.
Calibration Parameters               Calibration Results
Focal Length [a_x, a_y]              [3297.40374, 3321.59384]
Principal Point [u_0, v_0]           [579.399989, 430.634941]
Distortion Coefficient [k_1, k_2]    [−0.01744174, 0.38125975]
Reprojection Error                   0.06615037
Table 4. The optimized calibration results of the right camera’s internal parameters based on the CLI-WPP algorithm.
Calibration Parameters               Calibration Results
Focal Length [a_x, a_y]              [3346.80389, 3344.06814]
Principal Point [u_0, v_0]           [637.192839, 481.437970]
Distortion Coefficient [k_1, k_2]    [−0.03579978, 0.50400146]
Reprojection Error                   0.08240601

