Article

Analysis of the Influence of Refraction-Parameter Deviation on Underwater Stereo-Vision Measurement with Flat Refraction Interface

by Guanqing Li 1,*, Shengxiang Huang 2, Zhi Yin 1,3, Nanshan Zheng 1 and Kefei Zhang 1

1 School of Environment and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
2 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
3 School of Marine Technology and Geomatics, Jiangsu Ocean University, Lianyungang 222005, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(17), 3286; https://doi.org/10.3390/rs16173286
Submission received: 14 June 2024 / Revised: 1 September 2024 / Accepted: 2 September 2024 / Published: 4 September 2024

Abstract
There has been substantial research on multi-medium visual measurement in fields such as underwater three-dimensional reconstruction and underwater structure monitoring. Addressing the issue where traditional air-based visual-measurement models fail due to refraction when light passes through different media, numerous studies have established refraction-imaging models based on the actual geometry of light refraction to compensate for the effects of refraction on cross-media imaging. However, the calibration of refraction parameters inevitably contains errors, leading to deviations in these parameters. To analyze the impact of refraction-parameter deviations on measurements in underwater structure visual navigation, this paper develops a dual-media stereo-vision measurement simulation model and conducts comprehensive simulation experiments. The results indicate that to achieve high-precision underwater-measurement outcomes, the calibration method for refraction parameters, the distribution of the targets in the field of view, and the distance of the target from the camera must all be meticulously designed. These findings provide guidance for the construction of underwater stereo-vision measurement systems, the calibration of refraction parameters, underwater experiments, and practical applications.

1. Introduction

Multi-medium visual measurement is a crucial branch of the visual-measurement field. Common multi-medium visual-measurement scenarios include situations where light traverses two media, such as air–water, or three media, such as air–glass–air and air–glass–water [1,2,3]. Examples of such scenarios include the visual measurement of the thermal deformation of alumina-ceramic and stainless-steel plates under radiant heating [4] as well as the three-dimensional shape measurement of underwater bridge piers [5].
Multi-medium visual measurement has garnered extensive attention in areas such as underwater deformation monitoring and three-dimensional reconstruction [1,5,6,7,8]. In addition, it can also be applied to underwater navigation and positioning. In underwater engineering, such as immersed tunnels, the precise alignment of prefabricated elements requires high-precision underwater navigation and positioning [9]. Similarly, the autonomous recovery of autonomous underwater vehicles (AUVs) necessitates accurate navigation [10]. For navigation scenarios involving the docking of underwater structures at close range, visual measurement becomes a crucial method due to its rich information and high accuracy [11,12].
In underwater visual measurement, cameras are either placed in waterproof housings or positioned above the water surface. Consequently, a significant characteristic is that light propagation undergoes refraction at the interfaces of different media, causing the light rays to deviate from a straight-line path. Therefore, traditional air-based stereo-vision measurement models are no longer applicable under multi-media conditions [13]. When the camera images an object through a sealed housing’s transparent window, the interface typically comes in shapes such as planar, hemispherical, and cylindrical [14,15,16]. Hemispherical ports counteract the refraction effect through their specific shapes, do not reduce the field of view, and can withstand high pressure in deep water, but they may induce image blurring and have higher manufacturing requirements [17,18]. Cylindrical interfaces have the advantage of allowing a larger field of view (in one direction) and are relatively simple and cost-effective to manufacture [19]. Although flat interfaces can significantly reduce the field of view and may introduce chromatic aberration, they are well-studied and have lower manufacturing costs [17]. Therefore, the focus of this paper is on underwater visual measurement with a flat interface.
The basic problems of underwater photogrammetry with a flat interface, including the basic formulas and the methods of automatic reduction to one perspective, were explored decades ago [20,21,22]. Since at least the 1960s, researchers have proposed numerous methods for refractive compensation in planar underwater visual measurement. Broadly, these methods fall into three categories. The first approach places two auxiliary calibration grids or a calibration frame in the object space, using the calibration structure to determine the direction of the light prior to its incidence [23]. However, this method requires the customization of specific calibration grids or frames, making implementation relatively complex. The second approach entails focal-length compensation or refraction absorption, wherein the pixel offset induced by refraction is approximated as an error stemming from changes in the lens focal length or distortion. By calibrating the camera's lens or distortion parameters, the influence of refraction can be mitigated [24,25,26]. Nevertheless, because the refractive effect is nonlinear and the magnitude of the refraction error depends on the object's position and the angle of incidence, these methods only approximately eliminate refraction effects [13,27]. The final approach is geometric refraction correction, which establishes an underwater refraction-imaging model through geometric analysis to explicitly account for refractive effects, thereby theoretically ensuring the accuracy of the measurement results [8,28,29,30,31,32,33,34,35,36,37].
Research on geometric refraction correction primarily focuses on the development of calibration methods and measurement models. The two-stage underwater camera calibration represents a groundbreaking advancement in the field [29]. The optimization method can also be employed to calibrate parameters such as the normal vector of the refraction interface and the distance from the interface to the camera. However, it is essential to assign appropriate initial values in advance [30]. Leveraging the geometric property that two incident light rays from the same object entering the stereo camera lie on the same plane, the calibration parameters can also be optimized through 3D point remapping [33]. The refractive index varies with different light frequencies. The parameters of the underwater camera can be calibrated by calculating the offset in the imaging position of different light frequencies emitted by a specially designed calibration plate placed at the same location [31]. To eliminate measurement errors caused by the spherical refraction interface, an underwater calibration algorithm based on an advanced non-dominated sorting genetic algorithm is proposed. This approach, utilizing an integrated geometric refraction model, significantly enhances the performance of underwater visual measurement [35].
Regarding the influence of refraction on visual measurements, for multi-view (more than two views) underwater 3D reconstruction, the influence on the accuracy of the 3D reconstruction is evaluated quantitatively and systematically in [25]. Tong [38] used simulation and real experiments to analyze the influence of different refraction parameters and proposed measures to reduce the influence. However, due to the inevitability of measurement errors, the calibrated refraction parameters must contain deviations. When using the refraction-measurement model for refraction compensation, the impact of the parameter deviation on the visual measurement results is also worth studying. In addition, conducting underwater experiments is relatively challenging, so performing relevant analyses through simulation experiments is a preferable option. To the best of our knowledge, current research does not provide specific algorithms for simulation analysis. To investigate the impact of refraction-parameter deviations on stereo-visual measurement in dual-media conditions, this paper first established a simulation model for stereo-visual measurement in the air–water scenario. Then, a thorough analysis of the relationship between the stereo-visual measurement model and refraction-parameter deviations using simulation experiments was conducted. The conclusions can provide guidance for the construction of underwater stereo-vision measurement systems, refraction parameter calibration, underwater experiments, and practical applications.
The subsequent structure of the paper is arranged as follows. Section 2 introduces the light refraction geometry and measurement model for multi-media stereo vision, establishes a simulation algorithm for dual-media stereo-visual measurement and provides a simulation experimental design. The experimental results are presented and discussed in Section 3 and Section 4, respectively. Section 5 presents the conclusions.

2. Methods

2.1. Measurement Model

The stereo-vision measurement system includes left and right cameras. It is assumed that the cameras image the target through a transparent window, generally made of glass, in a watertight housing. The light reflected or emitted by the underwater target reaches the cameras through three media and two refraction interfaces. Both interfaces are assumed to be planar and parallel to each other. The refraction geometry is shown in Figure 1.
The camera coordinate system of the left camera, $O\text{-}XYZ$, is used as the reference coordinate system. $P$ is the underwater target point, with coordinates $(X, Y, Z)$. The two refraction interfaces are denoted as $\pi_1$ and $\pi_2$. The distance from the left camera optical center to the interface $\pi_1$ is $D$, and the thickness of the glass layer is $d$. Due to refraction, the optical paths between $P$ and the cameras are not straight lines but the broken lines $P$–$B$–$A$–$O$ and $P$–$B_R$–$A_R$–$O_R$, according to Snell's law. The refraction angles of the two rays are denoted by $\alpha_i$ and $\beta_i$. It is assumed that both cameras have been carefully calibrated. The camera coordinate system of the right camera is $O_R\text{-}X_R Y_R Z_R$, and the coordinates of the origin $O_R$ in the reference coordinate system are denoted as $t = (t_x, t_y, t_z)$, the translation vector of the right camera with respect to the left camera.
The target point $P$ is the intersection of the lines $BP$ and $B_R P$. If the equations of the two lines are found, their intersection gives the coordinates of $P$. The derivation of the equations of the lines $BP$ and $B_R P$ is detailed below [1].
After obtaining two images from the left and right cameras and completing target recognition and matching, the point $P_v$ is obtained using conventional triangulation. The points $O$, $A$, and $P_v$ are collinear, as are the points $O_R$, $A_R$, and $P_v$. The coordinates of the points $O$, $O_R$, and $P_v$ are known. Therefore, the unit direction vectors of the lines $OA$ and $O_R A_R$ can be obtained as

$$\zeta_1 = (\zeta_{x1},\ \zeta_{y1},\ \zeta_{z1}) = \frac{P_v - O}{\lVert P_v - O \rVert}, \qquad \xi_1 = (\xi_{x1},\ \xi_{y1},\ \xi_{z1}) = \frac{P_v - O_R}{\lVert P_v - O_R \rVert} \tag{1}$$
In terms of the direction vector of a line and a point on it, the equations of the lines $OA$ and $O_R A_R$ are

$$\frac{X}{\zeta_{x1}} = \frac{Y}{\zeta_{y1}} = \frac{Z}{\zeta_{z1}}, \qquad \frac{X - t_x}{\xi_{x1}} = \frac{Y - t_y}{\xi_{y1}} = \frac{Z - t_z}{\xi_{z1}} \tag{2}$$
The points $A$ and $A_R$ are the intersections of the lines $OA$ and $O_R A_R$ with the interface $\pi_1$. Assuming that the normal vector $N = (N_x,\ N_y,\ N_z)$ of $\pi_1$ is known, the equation of $\pi_1$ can be expressed as

$$N X^{\mathrm{T}} + D = 0 \tag{3}$$

Combining Equations (2) and (3), the coordinates of the points $A$ and $A_R$ are obtained as

$$A = (A_x,\ A_y,\ A_z) = -\frac{D}{N \zeta_1^{\mathrm{T}}}\,\zeta_1, \qquad A_R = -\frac{N t^{\mathrm{T}} + D}{N \xi_1^{\mathrm{T}}}\,\xi_1 + t \tag{4}$$
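As an illustrative sketch (not the authors' code), the ray–plane intersection of Equation (4) can be computed directly; the normal orientation and distance below are assumed example values placing the interface at $z = 1$ in front of the left camera:

```python
import numpy as np

def ray_plane_intersection(origin, direction, N, D):
    """Intersect the ray X = origin + s * direction with the plane N @ X + D = 0."""
    s = -(N @ origin + D) / (N @ direction)
    return origin + s * direction

# Left camera at the origin; ray along zeta_1 (first expression of Equation (4)).
N = np.array([0.0, 0.0, -1.0])   # assumed interface-normal orientation
D = 1.0                          # plane -z + 1 = 0, i.e., the interface z = 1
zeta1 = np.array([0.3, 0.1, 1.0])
zeta1 /= np.linalg.norm(zeta1)
A = ray_plane_intersection(np.zeros(3), zeta1, N, D)
```

For the right camera, the same helper applied with `origin = t` yields $A_R$.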
According to the vector inner-product formula, the angles $\alpha_1$ and $\beta_1$ are

$$\alpha_1 = \cos^{-1}\frac{N \cdot \zeta_1}{\lVert N \rVert\,\lVert \zeta_1 \rVert}, \qquad \beta_1 = \cos^{-1}\frac{N \cdot \xi_1}{\lVert N \rVert\,\lVert \xi_1 \rVert} \tag{5}$$
Based on Snell’s law, the angles $\alpha_2$ and $\beta_2$ are

$$\alpha_2 = \sin^{-1}\frac{n_1 \sin \alpha_1}{n_2}, \qquad \beta_2 = \sin^{-1}\frac{n_1 \sin \beta_1}{n_2} \tag{6}$$
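For instance, Equation (6) can be evaluated in a few lines of Python; the refractive indices below are illustrative values for air and water:

```python
import math

def snell_refracted_angle(n_in, n_out, angle_in):
    """Refraction angle from Snell's law: n_in * sin(angle_in) = n_out * sin(angle_out)."""
    return math.asin(n_in * math.sin(angle_in) / n_out)

# Air (n1 ~ 1.0) to water (n2 ~ 1.333): a 30-degree incident ray bends toward the normal
alpha2 = snell_refracted_angle(1.0, 1.333, math.radians(30.0))
print(round(math.degrees(alpha2), 2))  # → 22.03
```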
For the left camera, the lines OA, AB, and the camera optical axis are coplanar, so the normal vector of this plane is
$$n_l = (0,\ 0,\ 1)^{\mathrm{T}}, \qquad k = \zeta_1 \times n_l = (k_1,\ k_2,\ k_3)^{\mathrm{T}} \tag{7}$$
where $n_l$ is the unit direction vector of the left camera optical axis. Through Rodrigues’ formula, the rotation matrix $R_A$ and the unit direction vector $\zeta_2$ of the line $AB$ are determined as

$$R_A = I + [k]_\times \sin \alpha_2 + [k]_\times^2 (1 - \cos \alpha_2), \qquad [k]_\times = \begin{pmatrix} 0 & -k_3 & k_2 \\ k_3 & 0 & -k_1 \\ -k_2 & k_1 & 0 \end{pmatrix}, \qquad \zeta_2 = (\zeta_{x2},\ \zeta_{y2},\ \zeta_{z2}) = \frac{R_A n_l}{\lVert R_A n_l \rVert} \tag{8}$$

where $I$ is the identity matrix and $[k]_\times$ is the antisymmetric matrix generated by $k$.
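A minimal sketch of the Rodrigues rotation in Equation (8); the axis and angle below are arbitrary example values, not values from the paper:

```python
import numpy as np

def rodrigues_rotation(k, theta):
    """Rotation matrix for angle theta about the unit axis k (Equation (8))."""
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Rotating the optical-axis direction n_l about the incidence-plane normal k
# by the refraction angle yields the refracted ray direction.
n_l = np.array([0.0, 0.0, 1.0])
k = np.array([1.0, 0.0, 0.0])          # example axis
zeta2 = rodrigues_rotation(k, np.pi / 2) @ n_l
```

Rotating the $z$-axis by 90° about the $x$-axis gives $(0, -1, 0)$, a quick sanity check of the sign convention in $[k]_\times$.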
Considering the thickness $d$ of the glass layer, the coordinates of point $B$ are obtained as

$$B = (B_x,\ B_y,\ B_z) = A + \frac{d}{\cos \alpha_2}\,\zeta_2 \tag{9}$$
Because $\alpha_2 = \alpha_3$, and similarly to Equations (6) and (8), the angle $\alpha_4$ and the unit direction vector of the line $BP$ can be calculated as

$$\alpha_4 = \sin^{-1}\frac{n_2 \sin \alpha_2}{n_3}, \qquad R_B = I + [k]_\times \sin \alpha_4 + [k]_\times^2 (1 - \cos \alpha_4), \qquad \zeta_3 = (\zeta_{x3},\ \zeta_{y3},\ \zeta_{z3}) = \frac{R_B n_l}{\lVert R_B n_l \rVert} \tag{10}$$
For the line $BP$, the direction vector $\zeta_3$ and a point on the line, $B$, are known; therefore, its equation is

$$\frac{X - B_x}{\zeta_{x3}} = \frac{Y - B_y}{\zeta_{y3}} = \frac{Z - B_z}{\zeta_{z3}} \tag{11}$$
For the right camera, the equation of the line $B_R P$ can be obtained following a similar derivation:

$$\frac{X - B_{Rx}}{\xi_{x3}} = \frac{Y - B_{Ry}}{\xi_{y3}} = \frac{Z - B_{Rz}}{\xi_{z3}} \tag{12}$$
In theory, the intersection of the lines $BP$ and $B_R P$ is the point $P$ to be determined. In practice, however, the two lines may not intersect due to errors. Suppose there exists a point $H$ on the line $BP$ and a point $M$ on the line $B_R P$, and the segment from $H$ to $M$ forms the vector $Q$:

$$H = B + s_1 \zeta_3, \qquad M = B_R + s_2 \xi_3, \qquad Q = H - M \tag{13}$$

where $s_1$ and $s_2$ are scalar coefficients. The aim is to minimize the length of $Q$; the common perpendicular segment of two skew lines is their shortest connecting segment. If $Q$ is perpendicular to both lines $BP$ and $B_R P$, then

$$Q \cdot \zeta_3 = 0, \qquad Q \cdot \xi_3 = 0 \tag{14}$$

From Equation (14), $s_1$ and $s_2$ can be calculated, and then $H$ and $M$ obtained. The average of $H$ and $M$ is taken as the point $P$:

$$P = \frac{H + M}{2} \tag{15}$$
The model is derived for a three-medium scenario, but with minor adjustments, it is equally applicable to a two-medium scenario.
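The common-perpendicular construction of Equations (13)–(15) amounts to solving a 2×2 linear system for $s_1$ and $s_2$; a self-contained sketch (a hypothetical helper, not code from the paper):

```python
import numpy as np

def midpoint_of_skew_lines(B, zeta3, B_R, xi3):
    """Solve Q . zeta3 = 0 and Q . xi3 = 0 (Equation (14)) for s1 and s2,
    then return the midpoint of H and M (Equation (15))."""
    d = B_R - B
    A2 = np.array([[zeta3 @ zeta3, -(zeta3 @ xi3)],
                   [zeta3 @ xi3, -(xi3 @ xi3)]])
    s1, s2 = np.linalg.solve(A2, np.array([d @ zeta3, d @ xi3]))
    H = B + s1 * zeta3
    M = B_R + s2 * xi3
    return 0.5 * (H + M)

# Sanity check with two lines that intersect exactly at (1, 1, 0)
P = midpoint_of_skew_lines(np.zeros(3), np.array([1.0, 1.0, 0.0]) / np.sqrt(2),
                           np.array([2.0, 0.0, 0.0]), np.array([-1.0, 1.0, 0.0]) / np.sqrt(2))
```

For lines that truly intersect, the midpoint coincides with the intersection; for nearly parallel rays the system becomes ill-conditioned, which mirrors the poor triangulation geometry of such configurations.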

2.2. Dual-Medium Simulation Model

The refraction parameters involved in the above measurement model include the distance from the camera to the refraction interface, the normal direction of the refraction interface, and the refractive indices of the different media. Calibration errors in these parameters may affect the measurement results, so it is necessary to assess the impact of refraction-parameter deviations. However, conducting such evaluations through underwater experiments is relatively complex, whereas simulation experiments are more efficient. Consequently, this paper investigates a simulation model for underwater stereo-vision measurement. For simplicity, the camera is placed in the air and the target in the water, constituting an air–water scenario.
The left camera coordinate system is taken as the reference coordinate system. The coordinates of the target $P$ are $(x_1,\ y_1,\ z_1)$, the distance from the left camera to the refractive interface is $D$, and the normal vector of the refractive interface is $N$. The camera is in the air and the target is in the water, with refractive indices $n_1$ and $n_2$, respectively.
For the left camera, the refraction geometry is shown in Figure 2. O represents the camera’s optical center. P is the target. C is the refraction point. C′ is the intersection of the line OP with the refractive interface. The objective is to calculate the coordinate of point C.
The direction vector $\varphi$ of the line $OP$ is

$$\varphi = P - O \tag{16}$$
The entire refracted light path $P$–$C$–$O$ lies in a refraction plane. Since $PC$ and $CO$ are coplanar with the normal vector $N$ of the refraction interface, the unit normal vector of the refraction plane is

$$\chi = \frac{\varphi \times N}{\lVert \varphi \times N \rVert} = (\chi_1,\ \chi_2,\ \chi_3) \tag{17}$$
where $\times$ denotes the cross product. The unit direction vector of the line $CC'$, the intersection of the refraction plane with the refractive interface, is

$$\omega = \frac{\chi \times N}{\lVert \chi \times N \rVert} = (\omega_1,\ \omega_2,\ \omega_3) \tag{18}$$
Determining the coordinates of a point on the line $CC'$, denoted as $(x_0,\ y_0,\ z_0)$, the line $CC'$ can be expressed as

$$x = x_0 + \lambda \omega_1, \qquad y = y_0 + \lambda \omega_2, \qquad z = z_0 + \lambda \omega_3 \tag{19}$$
where λ is the coefficient.
According to the relationship between refractive index and the speed of light, if the speed of light in vacuum is c, the speeds of light in the air and water are
$$v_a = \frac{c}{n_1}, \qquad v_w = \frac{c}{n_2} \tag{20}$$
Let the coordinates of point $C$ be $(x_2,\ y_2,\ z_2)$. The lengths of the two optical paths $OC$ and $CP$ are

$$l_{OC} = \sqrt{x_2^2 + y_2^2 + z_2^2}, \qquad l_{CP} = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2} \tag{21}$$
According to the relationship between speed, time, and distance, the propagation time of the light between O and P is
$$t = \frac{l_{OC}}{v_a} + \frac{l_{CP}}{v_w} \tag{22}$$
The refraction point $C$ is a point satisfying Equation (19). When all refraction parameters are known, Equation (22) is a function of the coefficient $\lambda$. According to Fermat’s principle, the actual path taken by light between two fixed points is the one of least propagation time (a stationary optical path). The derivative of $t$ with respect to $\lambda$ is therefore set to zero:

$$f(\lambda) = \frac{\mathrm{d}t}{\mathrm{d}\lambda} = 0 \tag{23}$$
Solving Equation (23) yields the value of $\lambda$; combining it with Equation (19) gives the coordinates of point $C$, so the complete propagation path of the light is established. For the right camera, the simulation process is similar; the only difference is that $l_{OC}$ in Equation (21) takes the form

$$l_{OC} = \sqrt{(x_R - x_2)^2 + (y_R - y_2)^2 + (z_R - z_2)^2} \tag{24}$$

where $(x_R,\ y_R,\ z_R)$ are the coordinates of the origin of the right camera coordinate system in the reference coordinate system.
The target pixel coordinate can be easily calculated based on the coordinate of point C and the camera parameters using the perspective imaging model, thus realizing the simulation of the imaging process.
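Equation (23) can also be solved numerically. The sketch below is illustrative only: it restricts the geometry to the $x$–$z$ plane with the interface normal along $z$ and the interface at $z = D$, finds the refraction point by a simple ternary search on the travel time of Equation (22) (valid because the time is convex in the search variable), and then checks that Snell's law holds at $C$:

```python
import math

def refraction_point_x(P, D, n1, n2, iters=200):
    """Find the x-coordinate of the refraction point C = (x, D) on the interface z = D
    by minimizing the propagation time (Equation (22)) with a ternary search.
    Camera at the origin in air (n1); target P = (x1, z1) in water (n2)."""
    x1, z1 = P
    def t(x):  # travel time up to the constant factor 1/c
        return n1 * math.hypot(x, D) + n2 * math.hypot(x1 - x, z1 - D)
    lo, hi = min(0.0, x1), max(0.0, x1)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if t(m1) < t(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Air-water example: interface at z = 1 m, target at (1, 3) m in the x-z plane
x = refraction_point_x((1.0, 3.0), 1.0, 1.0, 1.333)
# Fermat's principle implies Snell's law at C:
sin_a1 = x / math.hypot(x, 1.0)                 # incidence side
sin_a2 = (1.0 - x) / math.hypot(1.0 - x, 2.0)   # refraction side
assert abs(1.0 * sin_a1 - 1.333 * sin_a2) < 1e-6
```

The full three-dimensional model solves the same stationarity condition along the line $CC'$ of Equation (19).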

2.3. Simulation Experimental Design

The purpose of the simulation experiments is to evaluate the impact of deviations of the refraction parameters in the measurement model on the results and to provide guidance for the calibration of the refraction parameters and the construction of the measurement system. The cameras are placed in the air and the target in the water, as shown in Figure 3. The intrinsic and external parameters of the left and right cameras are known. The two cameras share the same refraction interface. The refraction parameters that affect the measurement model are the distance $D$ between the left camera and the refraction interface, the medium refractive indices $n_1$ and $n_2$, and the refraction-interface normal $N$. Since the refraction angle of the light depends on the relative refractive index, the relative refractive index $n_r = n_1 / n_2$ is discussed in the simulation experiments.
Taking the left camera coordinate system as the reference coordinate system, we simulated a $2.8 \times 2.8\ \mathrm{m}^2$ square area; its distribution in the $XOY$ plane of the left camera coordinate system is shown in Figure 4. The $Z$ coordinate of the square area can take different values.
The intrinsic and external parameters of the left and right cameras and the target coordinates are known. The true values of the refraction parameters are set as shown in Table 1. The simulation process is shown in Figure 5. The experiments consist of four steps. (1) In step 1, based on the camera parameters, the refraction parameters without deviations, and the target coordinates, the simulation model proposed in Section 2.2 is used to simulate the coordinates of the refraction point. (2) In step 2, based on the camera parameters and the refraction-point coordinates, the corresponding pixel coordinates are obtained using the perspective imaging model. (3) In step 3, based on the camera parameters, the target pixel coordinates, and the refraction parameters with certain deviations, the refractive measurement model described in Section 2.1 is used to estimate the target coordinates. (4) In step 4, the deviation between the estimated and the true coordinates of the target is calculated. The true coordinates of the target are $(X,\ Y,\ Z)$, and the estimated coordinates obtained in step 3 are $(X_e,\ Y_e,\ Z_e)$. The coordinate deviations are denoted as $d_x$, $d_y$, and $d_z$, respectively:

$$d_x = X_e - X, \qquad d_y = Y_e - Y, \qquad d_z = Z_e - Z \tag{25}$$

The total coordinate deviation is given by $d_p = \sqrt{d_x^2 + d_y^2 + d_z^2}$. Since the pixel coordinates of the target are obtained through the imaging model in step 2, they do not contain any errors. The refraction parameters used in step 3 are perturbed by a certain deviation, with the requirement that the deviation is added to only one parameter at a time. Three experiments were conducted. Experiment 1 discusses the sensitivity of the measurement model to the refraction parameters. Experiment 2 analyzes the influence of a fixed refraction-parameter deviation on targets at different distances. Experiment 3 studies the change in the measurement results of fixed targets with the refraction-parameter deviation.
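Step 4 reduces to simple arithmetic; a hypothetical helper for the deviations of Equation (25) and the total deviation $d_p$:

```python
import math

def coordinate_deviation(true_xyz, est_xyz):
    """Step 4: component deviations (Equation (25)) and total deviation d_p."""
    dx, dy, dz = (e - t for e, t in zip(est_xyz, true_xyz))
    dp = math.sqrt(dx * dx + dy * dy + dz * dz)
    return dx, dy, dz, dp

# Example with millimetre-level deviations (illustrative values only)
dx, dy, dz, dp = coordinate_deviation((1.0, 2.0, 5.0), (1.003, 1.998, 5.009))
print(round(dp * 1000, 2))  # total deviation in millimetres → 9.7
```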

3. Results

3.1. Experiment 1

The aim of this experiment is to analyze the sensitivity of the dual-medium visual-measurement model to deviations in the refraction parameters. All targets lie on the same plane at a $Z$-distance of 5 m, as shown in Figure 4. After deviations are intentionally introduced into the refraction parameters, the measured coordinates of the targets differ from their true values, resulting in coordinate discrepancies. Deviations are introduced into the different refraction parameters sequentially until the maximum $d_p$ reaches 1 cm.
First, consider the impact of a deviation in the relative refractive index, whose true value is $n_r$. When the relative refractive index changes to $0.99843\,n_r$, the maximum $d_p$ is 10 mm, as shown in Figure 6. In this case, the discrepancies in the $X$ and $Y$ coordinate components are minimal (less than 0.5 mm), and $d_p$ depends primarily on the discrepancy in the $Z$ component: a relative-refractive-index deviation is more likely to cause a $Z$-coordinate deviation. The maximum coordinate discrepancy occurs at the four corners of the area in Figure 4.
Next, consider the impact of a deviation in the distance from the camera to the refractive interface, whose true value is denoted as $D$. When the distance changes to $0.805D$, the maximum $d_p$ is 10 mm, as shown in Figure 7. The deviation in $D$ has a minimal impact on the $X$ and $Y$ coordinate components of the target (less than 1 mm); it is more likely to cause discrepancies in the $Z$ component, on which $d_p$ primarily depends. The maximum coordinate discrepancy again occurs at the four corners of the area in Figure 4.
Finally, consider the impact of a deviation in the normal direction of the refractive interface. The theoretical normal direction is parallel to the optical axis of the left camera. The normal-direction deviation is represented by rotating its three attitude angles simultaneously by the same angle. When the rotation angle is 0.195°, the maximum $d_p$ is 10 mm, as shown in Figure 8. Deviations in the normal direction have a significant impact on all three coordinate components of the targets, especially the $Y$ and $Z$ coordinates. The extent of the impact varies across the target area: points closer to the center are less affected, while points closer to the four corners are more affected.
Combining Figure 6, Figure 7 and Figure 8, it is evident that the dual-medium stereo-vision measurement model is highly sensitive to deviations in the relative refractive index: even a deviation of 0.157% can produce a maximum total coordinate deviation of up to 10 mm in the target area. In contrast, the model is less sensitive to deviations in the camera-to-interface distance and in the normal direction of the refractive interface. This indicates that in underwater experiments or practical applications, special attention must be given to the calibration of the refractive indices of the different media.

3.2. Experiment 2

In Experiment 1, we fixed the target area at $Z = 5$ m. However, for the same refraction-parameter deviations, the coordinate deviations of the target may differ when the distance between the target and the camera varies. Four $Z$ values were therefore set at intervals of 2 m, ranging from 2 m to 8 m. Deviations were sequentially introduced into the different refraction parameters: the relative refractive index was changed to $0.99843\,n_r$, the camera-to-interface distance to $0.805D$, and the rotation angle of the interface normal to 0.195°.

3.2.1. Target Plane

When the relative refractive index is $0.99843\,n_r$, the coordinate discrepancies of the three components in the target area are shown in Figure 9, Figure 10 and Figure 11. For different $Z$ values, the discrepancies in the $X$ and $Y$ components remain relatively small, although the smaller the $Z$ value, the larger $d_x$ and $d_y$ become. At $Z = 2$ m, the maximum values of $d_x$ and $d_y$ are approximately ±2 mm and ±1 mm, respectively. The $Z$-component discrepancy $d_z$ exhibits a markedly different pattern: the larger the $Z$ value, the larger $d_z$. At $Z = 2$ m, $d_z$ is smallest, with a maximum of approximately 7.4 mm at the edge of the target area; at $Z = 8$ m, $d_z$ is largest, with both the maximum and minimum values in the target area being approximately 14 mm. Since $d_z$ is significantly larger than $d_x$ and $d_y$, for a constant relative-refractive-index deviation, the closer the target is to the camera, the smaller the resulting $d_p$; likewise, the closer the target is to the center of the area, the smaller its $d_p$.
When the distance from the camera to the refractive interface is $0.805D$, the coordinate discrepancies of the three components in the target area are shown in Figure 12, Figure 13 and Figure 14. When $Z$ is greater than 4 m, $d_x$, $d_y$, and $d_z$ gradually decrease as $Z$ increases. At $Z = 2$ m, the maximum values of $d_x$ and $d_y$ are approximately ±18 mm, and the maximum value of $d_z$ is approximately 54 mm, occurring at the four corners of the area. Overall, the farther the target plane is from the camera, the smaller $d_x$, $d_y$, and $d_z$, and hence the smaller $d_p$. Additionally, the closer the target is to the center of the area, the smaller the $d_p$.
When the three attitude angles of the refractive-interface normal all change by 0.195°, the coordinate discrepancies of the three components in the target area are shown in Figure 15, Figure 16 and Figure 17. As $Z$ increases from 2 m to 8 m, $d_x$ changes from positive to negative and its absolute value generally increases. At $Z = 2$ m, there is a noticeable variation in $d_x$ among points in the target area, with $d_x$ decreasing towards the center; as $Z$ increases, this variation gradually diminishes. At $Z = 8$ m, $d_x$ reaches −7 mm. The $d_y$ component exhibits a similar trend: the larger the $Z$ value, the greater the absolute value of $d_y$. At $Z = 2$ m, there is a noticeable variation in $d_y$ among points in the target area, with $d_y$ decreasing towards the center. The relationship between $d_z$ and $Z$ is less pronounced: as $Z$ increases from 2 m to 8 m, $d_z$ decreases slightly, although noticeable differences in $d_z$ remain among points in the target area.

3.2.2. Fixed Points

To further analyze the relationship between the coordinate deviations of the target and the $Z$ value, we selected the four corners of the target area shown in Figure 4 as feature points, with $Z$ values ranging from 0.5 m to 8 m. From the previous analysis, we know that for a fixed $Z$, these four points have the largest $d_p$ in the entire target area. We sequentially analyzed the different refraction parameters, keeping the same deviation magnitudes as before.
When the relative refractive index is $0.99843\,n_r$, the coordinate discrepancies of the four points are shown in Figure 18. As $Z$ increases, $d_x$ and $d_y$ exhibit similar trends: both initially increase rapidly, then decrease, with the rate of decrease fast at first and then slowing. The maximum values of $d_x$ and $d_y$ occur around $Z = 2.2$ m, at about ±1 mm. For $d_x$, the trends for $P_1$ and $P_3$, and for $P_2$ and $P_4$, are completely consistent; for $d_y$, the trends for $P_1$ and $P_2$, and for $P_3$ and $P_4$, are very similar. $d_z$ increases with increasing $Z$. Since $d_z$ is much larger than $d_x$ and $d_y$, $d_p$ depends mainly on the magnitude of $d_z$. The trends of $d_z$ and $d_p$ are consistent for all four points.
When the distance from the camera to the refractive interface is $0.805D$, the coordinate discrepancies of the four points are shown in Figure 19. Similar to Figure 18, as $Z$ increases, $d_x$ and $d_y$ initially increase rapidly and then decrease, with the rate of decrease fast at first and then slowing. However, the maximum values of $d_x$ and $d_y$ occur around $Z = 1.7$ m, reaching approximately ±30 mm. For $d_x$, the trends for $P_1$ and $P_3$, and for $P_2$ and $P_4$, are consistent; for $d_y$, the trends for $P_1$ and $P_2$, and for $P_3$ and $P_4$, are similar. $d_z$ decreases with increasing $Z$, rapidly at first and then more slowly, with an inflection point around $Z = 3$ m. Although $d_x$ and $d_y$ exceed ±20 mm around $Z = 1.7$ m, they are still an order of magnitude smaller than $d_z$; therefore, $d_p$ depends mainly on $d_z$. The trends of $d_z$ and $d_p$ are consistent for all four points: as $Z$ increases, $d_p$ decreases.
When the three attitude angles of the refraction-interface normal change by 0.195°, the coordinate-component deviations of the four points are shown in Figure 20. For $Z$ less than 3 m, the variation trends of the three coordinate-component deviations are quite complex, and the trends of $d_z$ differ significantly among the points. For points $P_2$ and $P_3$, $d_z$ is significantly larger than $d_x$ and $d_y$; in contrast, for points $P_1$ and $P_4$, the deviations in all three components are relatively small. When $Z$ is greater than 3 m, $d_x$ and $d_y$ generally increase with increasing $Z$, while $d_z$ shows no obvious variation; moreover, $d_x$ and $d_y$ are similar in magnitude and significantly larger than $d_z$. For points $P_2$ and $P_3$, when $Z$ is small, $d_p$ is determined mainly by $d_z$, whereas for large $Z$ it is influenced primarily by $d_x$ and $d_y$. For points $P_1$ and $P_4$, $d_p$ is affected by all three components, and as $Z$ increases, the influence of $d_x$ and $d_y$ becomes more significant.

3.3. Experiment 3

In the aforementioned experiments, we fixed the deviation of the refraction parameters. However, as the deviations of the refraction parameters vary, the target-coordinate deviations also change accordingly. Therefore, we conducted this experiment to investigate the relationship between target-coordinate deviations and refraction-parameter deviations. In this experiment, we continued to select the four feature points shown in Figure 4 as targets and set Z to 5 m. We then sequentially introduced deviation sequences into different refraction parameters.
The relationships between the target-coordinate deviations and the deviations of the three refraction parameters (the relative refractive index, the distance from the camera to the refraction interface, and the direction of the refraction-interface normal) are shown in Figure 21, Figure 22 and Figure 23. d x , d y , and d z all vary linearly with the refraction-parameter deviations. The d p plot is symmetric about the true value of each refraction parameter, indicating that positive and negative deviations of equal size have the same effect on d p . The smaller the refraction-parameter deviation, the closer d x , d y , d z , and d p are to zero.
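This near-linear behavior can be reproduced with a model far simpler than the full stereo simulator. The sketch below is a hypothetical single-ray construction of our own (not the authors' code): it traces one camera ray across a flat air–water interface and records how the apparent horizontal offset of an underwater point changes with a small deviation of the relative refractive index.

```python
import math

# Hypothetical single-ray model (not the paper's stereo simulator): a camera
# ray leaves at air-side angle theta_a, crosses a flat interface a distance D
# below the camera, then travels a further vertical depth h in water.
# Snell's law with relative index n_r = n_air / n_water gives the water-side
# angle, from which the total horizontal offset of the underwater point follows.
def horizontal_offset(theta_a, D, h, n_r):
    theta_w = math.asin(math.sin(theta_a) * n_r)   # sin(theta_w) = n_r * sin(theta_a)
    return D * math.tan(theta_a) + h * math.tan(theta_w)

n_true = 1.0 / 1.33                                # air-water relative index (cf. Table 1)
theta_a, D, h = math.radians(25.0), 0.1, 5.0       # arbitrary illustrative geometry
x_true = horizontal_offset(theta_a, D, h, n_true)

# Offset error over a symmetric sequence of index deviations: nearly linear,
# and symmetric in magnitude about the true parameter value.
devs = [k * 2.5e-4 for k in range(-4, 5)]
errs = [horizontal_offset(theta_a, D, h, n_true + dn) - x_true for dn in devs]
```

Plotting `errs` against `devs` gives an almost straight line through zero, mirroring the linear trends and the symmetric d p distribution reported in Figures 21, 22 and 23.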

4. Discussion

When the cameras are in the air and the target is in the water, the light path from the target to the camera undergoes refraction, so the light no longer travels in a straight line. A theoretically rigorous way to compensate for refraction in stereo-vision measurement is to use a refraction-measurement model. However, such a model involves refraction parameters (the refractive indices of the media, the distance from the camera to the refractive interface, and the normal of the refractive interface) that generally require calibration. Because of measurement errors, the calibrated refraction parameters inevitably differ from their true values. The experimental results in Section 3 demonstrate the impact of these deviations on stereo-vision measurement.
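The bending of the light path at a flat interface can be written compactly with the vector form of Snell's law. The following sketch is our own minimal illustration; the function name and sign conventions are assumptions, not the authors' implementation:

```python
import math

def refract(d, n, eta):
    """Refract a unit ray direction d at a flat interface with unit normal n
    pointing back toward the incident medium, where eta = n1/n2 is the
    relative refractive index. Returns the unit refracted direction, or
    None under total internal reflection (vector form of Snell's law)."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))   # cosine of incidence angle
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)      # sin^2 of refraction angle
    if sin2_t > 1.0:
        return None                                 # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    return tuple(eta * di + k * ni for di, ni in zip(d, n))

# Air-to-water ray at 45 deg incidence: the refracted ray bends toward the normal.
d = (math.sin(math.radians(45)), 0.0, math.cos(math.radians(45)))
t = refract(d, (0.0, 0.0, -1.0), 1.0 / 1.33)
```

A ray at normal incidence passes through unchanged, and for water-to-air propagation the function returns `None` beyond the critical angle, which is the expected physical behavior.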
The impact of different refraction-parameter deviations on stereo-vision measurement varies, and its magnitude is related to the distribution of the target within the field of view. Deviations in the relative refractive index and in the distance from the camera to the refractive interface mainly cause discrepancies in the Z coordinate, while their effect on the X and Y coordinates is relatively minor. The total coordinate deviation of the target is then primarily determined by the Z-coordinate deviation, and the closer the target is to the edge of the field of view, the greater its coordinate deviation. Deviations in the orientation of the refractive-interface normal significantly affect all three coordinate components. Regardless of the type of refraction-parameter deviation, the overall trend is the same: the closer the target is to the edge of the field of view, the greater the coordinate deviation. A deviation of 0.157% in the relative refractive index can produce a target-coordinate deviation of up to 1 cm, indicating that the stereo-vision measurement model is highly sensitive to deviations in the relative refractive index under the air–water condition. In underwater experiments or practical applications, special attention must therefore be paid to the calibration of the refractive indices of the media.
The impact of refraction-parameter deviations on stereo-vision measurement also depends on the distance between the target and the camera, and this dependence differs among the parameters. For a fixed deviation in the relative refractive index, the closer the target is to the camera, the smaller its total coordinate deviation. For a fixed deviation in the camera-to-interface distance, as the target-camera distance increases, the deviations in the X and Y coordinates first increase rapidly, then decrease rapidly, and finally stabilize, while the deviation in the Z coordinate decreases rapidly and gradually stabilizes. Since the Z-coordinate deviation is an order of magnitude larger than those in X and Y, the overall coordinate deviation is governed by the Z-coordinate deviation. For points at different positions within the field of view, the overall coordinate deviations caused by deviations in the relative refractive index and in the camera-to-interface distance vary with the Z coordinate in approximately the same way. The impact of deviations in the orientation of the refractive-interface normal varies more complexly with the Z coordinate and depends on the distribution of the targets. In underwater experiments or practical applications, to mitigate this impact, attention must be paid to the distribution of the targets within the field of view.
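The first claim above (for a fixed refractive-index deviation, closer targets suffer smaller deviations) can be illustrated with a hypothetical single-ray sketch of our own; the geometry values are arbitrary and the single ray is a stand-in for the full stereo model:

```python
import math

# Hypothetical single-ray stand-in for the stereo model: for a fixed deviation
# dn of the relative refractive index, the horizontal-offset error grows with
# the underwater path length h, i.e. with the target's distance from the camera.
def offset(theta_a, D, h, n_r):
    theta_w = math.asin(math.sin(theta_a) * n_r)   # Snell at the flat interface
    return D * math.tan(theta_a) + h * math.tan(theta_w)

n_true = 1.0 / 1.33
dn = 0.00157 * n_true                              # a 0.157% relative deviation
theta_a, D = math.radians(25.0), 0.1
depths = [1.0, 2.0, 3.0, 4.0, 5.0]
errors = [offset(theta_a, D, h, n_true + dn) - offset(theta_a, D, h, n_true)
          for h in depths]                         # error at increasing depth
```

In this simplified geometry the error grows monotonically with the underwater depth, consistent with the trend observed in the simulation experiments.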
For fixed targets, changes in the refraction-parameter deviations produce linear variations in the X-, Y-, Z-, and overall coordinate deviations: the larger the refraction-parameter deviation, the greater the coordinate deviations. Whether a refraction parameter is greater or less than its true value, the magnitude of its effect on the overall coordinate deviation is the same.
According to the experimental results, targets closer to the edge of the field of view are more susceptible to the influence of refraction-parameter deviations. This indicates that the current refraction-measurement model performs better for targets near the center of the field of view, while it may not fully eliminate refraction errors for targets near the edges.

5. Conclusions

This paper introduces a refraction-based visual-measurement model, establishes a stereo-vision simulation model, and conducts simulation experiments to study in detail the impact of deviations in the refraction parameters on stereo-vision measurement. The simulation determines the coordinates of the refraction points exactly from the positions of the cameras and the targets, thereby obtaining the pixel coordinates used to estimate the targets' positions. The experimental results demonstrate that the influence of different refraction-parameter deviations on stereo-vision measurement varies and is related to the distribution of the targets within the field of view and to the distance between the target and the camera. The overall coordinate deviation of a target is not necessarily minimized by simply increasing or decreasing its distance from the camera; the optimal distance depends on the magnitude of the deviation in each refraction parameter. The stereo-vision measurement model is particularly sensitive to deviations in the relative refractive index under the air–water condition. Therefore, in underwater experiments or practical applications, careful calibration of the refractive indices of the media and attention to the distribution of targets within the field of view are crucial. The work and conclusions presented in this paper provide guidance for the construction of underwater stereo-vision measurement systems, refraction-parameter calibration, underwater experiments, and practical applications. However, this study was conducted primarily through a simulation model and simulated experiments; future work should include underwater experiments to validate its conclusions.
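The refraction-point computation mentioned above can be sketched as follows. This is a minimal 2-D reconstruction under our own conventions (bisection on a monotone Snell residual), not the authors' code; the default indices match the air–water values in Table 1.

```python
import math

def refraction_point(D, h, r, n1=1.0, n2=1.33):
    """Horizontal position x of the refraction point on a flat interface, for
    a camera D above the interface and a target h below it at horizontal
    distance r (the problem is reduced to the 2-D plane containing both).
    Solves Snell's condition n1*sin(theta_air) = n2*sin(theta_water) by
    bisection; the residual is monotone in x, with a sign change on [0, r]."""
    def residual(x):
        sin_a = x / math.hypot(x, D)               # air-side sine at the camera
        sin_w = (r - x) / math.hypot(r - x, h)     # water-side sine at the target
        return n1 * sin_a - n2 * sin_w
    lo, hi = 0.0, r                                # residual(0) < 0 < residual(r)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Refraction point for an illustrative camera-target pair.
x = refraction_point(D=0.1, h=5.0, r=2.0)
```

Projecting the camera-to-refraction-point segment into the image then yields the pixel coordinates used in the simulated measurements; the refraction point always lies beyond the straight-line interface crossing, reflecting the bending of the ray at the interface.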
In addition, the current refraction-measurement model is more suitable for targets located near the center of the field of view, whereas its effectiveness for targets at the edges is less than ideal, indicating the need for further optimization of the refraction-measurement model in the future.

Author Contributions

Conceptualization, G.L. and S.H.; Formal analysis, G.L.; Funding acquisition, G.L.; Investigation, G.L.; Methodology, G.L.; Project administration, N.Z.; Resources, G.L., Z.Y., S.H., N.Z. and K.Z.; Software, G.L.; Supervision, K.Z.; Validation, Z.Y., N.Z. and K.Z.; Visualization, G.L.; Writing—original draft, G.L.; Writing—review and editing, S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities, grant number 2022QN1081.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Refractive geometry for air–glass–water medium with two flat interfaces.
Figure 2. Refracted light in the dual-medium scenario.
Figure 3. Air–water dual-medium visual-measurement scenario with one flat interface.
Figure 4. Target area and four feature points.
Figure 5. Simulation process.
Figure 6. Coordinate deviation of the target area at Z = 5 m when the relative refractive index is 0.99843 n r . max d p = 10 mm.
Figure 7. Coordinate deviation of the target area at Z = 5 m when the distance is 0.805 D . max d p = 10 mm.
Figure 8. Coordinate deviation of the target area at Z = 5 m when the rotation angle of the normal direction of the refractive interface is 0.195°. max d p = 10 mm.
Figure 9. X coordinate deviation of the target area at different distances from the camera when the relative refractive index is 0.99843 n r .
Figure 10. Y coordinate deviation of the target area at different distances from the camera when the relative refractive index is 0.99843 n r .
Figure 11. Z coordinate deviation of the target area at different distances from the camera when the relative refractive index is 0.99843 n r .
Figure 12. X coordinate deviation of the target area at different distances from the camera when the distance is 0.805 D .
Figure 13. Y coordinate deviation of the target area at different distances from the camera when the distance is 0.805 D .
Figure 14. Z coordinate deviation of the target area at different distances from the camera when the distance is 0.805 D .
Figure 15. X coordinate deviation of the target area at different distances from the camera when the rotation angle of the normal direction of the refractive interface is 0.195°.
Figure 16. Y coordinate deviation of the target area at different distances from the camera when the rotation angle of the normal direction of the refractive interface is 0.195°.
Figure 17. Z coordinate deviation of the target area at different distances from the camera when the rotation angle of the normal direction of the refractive interface is 0.195°.
Figure 18. Relationship between the coordinate components of the four target points and Z when the relative refractive index is 0.99843 n r .
Figure 19. Relationship between the coordinate components of the four target points and Z when the distance is 0.805 D .
Figure 20. Relationship between the coordinate components of the four target points and Z when the rotation angle of the normal direction of the refractive interface is 0.195°.
Figure 21. Relationship between target-coordinate deviation and relative-refractive-index deviation.
Figure 22. Relationship between target-coordinate deviation and camera-to-refractive-interface distance deviation.
Figure 23. Relationship between target-coordinate deviation and the normal direction of the refraction interface.
Table 1. True values of the refraction parameters.
D = 10 cm; n r = n 1 / n 2 = 1/1.33; N = (0, 0, 1).