Article

A Laser Vision System for Relative 3-D Posture Estimation of an Underwater Vehicle with Hemispherical Optics

by Christos C. Constantinou 1,2,*, George P. Georgiades 2 and Savvas G. Loizou 2
1 Marine Robotics Innovation Centre, Cyprus Marine and Maritime Institute, Vasileos Pavlou Square, Larnaka 6023, Cyprus
2 Department of Mechanical Engineering and Materials Science and Engineering, Cyprus University of Technology, Limassol 3041, Cyprus
* Author to whom correspondence should be addressed.
Robotics 2021, 10(4), 126; https://doi.org/10.3390/robotics10040126
Submission received: 30 September 2021 / Revised: 16 November 2021 / Accepted: 18 November 2021 / Published: 22 November 2021
(This article belongs to the Special Issue Autonomous Marine Vehicles)

Abstract:
This paper describes the development and experimental validation of algorithms for a novel laser vision system (LVS), suitable for measuring the relative posture from both solid and mesh-like targets in underwater environments. The system was developed in the framework of the AQUABOT project, a research project dedicated to the development of an underwater robotic system for the inspection of offshore aquaculture installations. In particular, an analytical model for three-medium refraction that takes into account the nonlinear hemispherical optics for image rectification has been developed. The analytical nature of the model allows the online estimation of the refractive index of the external medium. The proposed LVS consists of three line lasers within the field of view of the underwater robot camera. The algorithms developed in this work provide appropriately filtered point-cloud datasets from each laser, as well as high-level information such as the distance and relative orientation of the target with respect to the ROV. In addition, an automatic calibration procedure, along with the accompanying hardware, has been developed for the underwater laser vision system to reduce the calibration overhead incurred by the regular maintenance operations required for underwater robots operating in seawater. Furthermore, a spatial image filter was developed for discriminating between mesh and non-mesh-like targets in the LVS measurements. Finally, a set of experiments was carried out in a controlled laboratory environment, as well as in real conditions at offshore aquaculture installations, demonstrating the performance of the system.

1. Introduction

Underwater robotics has received increasing interest from research and industry in recent years. Underwater robotic systems are used in a wide range of applications, from exploration and mapping of underwater environments to monitoring and inspection of undersea infrastructure, such as pipes and ships [1,2,3]. Underwater operations typically raise more challenges than equivalent ground or aerial operations. Typical examples of such challenges relate to communication constraints, localization difficulties due to the unstructured environment, as well as temperature and pressure variations.
The motivation of this work was the development of one of the sensing modalities of a robotic visual inspection system suitable for underwater operations. The particular modality, which is based on visual information, enables the relative posture estimation of the underwater robot with respect to the aquaculture fishnet without having to introduce any modifications to the aquaculture’s structure.
The main contributions of this work (contributions 1 and 2 were originally proposed in [4,5] and are presented here in a refined, archival-quality form; the filter for mesh-like structures is introduced in the current paper) are:
  • An analytical model for three-medium refraction that takes into account the nonlinear hemispherical optics for image rectification and refractive index estimation of the external medium;
  • An automatically calibrated laser vision system (LVS) suitable for measuring the relative posture from both solid and mesh-like targets in underwater environments;
  • A spatial filter for discriminating LVS measurements from mesh-like structures and/or measurements from other artifacts in underwater environments.
In particular, contributions 1 and 2 constitute an extension and refinement of our previous work [4,5]: additional experiments in real conditions at sea are presented in the current work, and the complete framework for the proposed methodology is introduced for the first time. In addition, new experiments have been carried out that show the effect of using the dome model by comparing the results to those obtained without it.
The problem of camera calibration in air has been widely investigated in research works [6,7]. However, these techniques do not transfer directly to cameras housed behind another medium, due to the multi-medium refraction phenomenon. The refractive index determines how much light is bent, or refracted, when it travels from one medium to another. Most of the research performed so far concerns applications where the camera is housed behind a flat surface. In particular, [8] presents a physics-based model that is parameterized by the distance of the lens from the medium interface, in addition to the focal length. The importance of the proposed model is that the physical parameters are calibrated by a simple approach that can be based on a single frame, improving the calibration accuracy. Another approach for calibrating the parameters of an underwater camera housing is presented in [9], based on an analysis-by-synthesis scheme that allows calibrating the parameters of a light propagation model for the local water body. On the other hand, underwater robots are typically designed with hemispherical domes for mitigating hydrodynamic effects (reducing the drag coefficient) and increasing the mechanical strength against the high pressures encountered in deep-water operations. A camera calibration approach that takes into account the hemispherical refraction for underwater vision systems is presented in [10], where a comparative study of the errors induced by refraction is provided for cameras mounted behind hemispherical or planar air/water interfaces.
Structured light systems are widely used in vision-based systems for a wide range of applications, such as 3D reconstruction, scanning, and range measurements [11,12,13]. Underwater laser vision systems have been studied in several previous works, using dot or line lasers [14,15,16]. In [15], a methodology for defining the position vector of an ROV is proposed using the ROV camera signal and the information provided by two laser pointers. In [14], a methodology for orientation estimation is also introduced, projecting a laser stripe on the image plane. A low-cost underwater laser range-finder based upon a simple camera and parallel laser line setup is proposed in [17], where the distance calculation is based on the pinhole camera model. In [18], a solution that utilizes each laser independently is presented, thus providing a range estimate for each laser. A stereo structured light system is proposed in [19] for underwater inspection operations. More specifically, two methods for calibrating a stereo structured light system for perception in dry or underwater environments are presented. An underwater LVS using a single laser pointer with the camera housed behind a dome is proposed in [20]. A line laser scanner, suitable for air and underwater 3D reconstruction, is presented in [21], using a cross-line laser projector with a single fixed camera. In [22], an underwater laser triangulation sensor model with flat refractive interfaces is proposed. The method is based on the pinhole camera model and assumes the laser plane is perpendicular to the protective window; thus, neither the refraction of the laser nor the refraction at the glass surface is taken into account. Another approach using laser triangulation for underwater operations is presented in [23], where two triangulation methods, ray-based and elliptical cone, are compared. In [24], the calibration and operation of an underwater laser triangulation sensor are presented, leading to the design of a high-speed underwater scanning depth finder.
In this work, we introduce a methodology based on the development of an analytical model for hemispherical optics that describes the path of light rays refracted through three different media: air, acrylic (dome), and water. This is required for appropriately interpreting the laser reflection images from an array of line lasers and producing the relative posture of the underwater robot with respect to a mesh-like target, which can then be utilized for underwater localization, tracking, and navigation tasks. The developed methodology provides the capability of online estimation of the refractive index of the external medium and online adaptation of the model to the estimated refractive index [4]. This is particularly useful in operational scenarios where the underwater robot operates close to aquacultures (due to the presence of dissolved/liquid biomass in their proximity) or for the detection of leaks or pollutants that affect the refractive index of seawater. Utilizing the analytical model, another methodology has been developed to automate the LVS calibration process in air, a task that is typically required after every maintenance cycle. Determining the posture with respect to mesh-like structures is non-trivial in environments where additional artifacts cause laser reflections (e.g., fish, air bubbles, undissolved waste products, etc.). To achieve this task, a spatial filter was developed for discriminating reflections from mesh and non-mesh-like structures in order to correctly determine the relative posture of the ROV.
The developments presented in this work were carried out to partially fulfill the needs of the AQUABOT project [25]. A VideoRay Pro-4 ROV was used as the base platform for the experiments, and a custom laser system was designed, constructed, and integrated with the VideoRay Pro-4 platform (see Figure 1a,b).
The rest of the paper is organized as follows: Section 2 provides the analytical model for a light ray that is refracted through three different media before it reaches the camera sensor; Section 3 presents a novel LVS that considers the hemispherical optics, suitable for measuring the relative posture with respect to solid and mesh-like structures in underwater environments. In Section 4, a spatial filter for the identification of mesh-like targets is presented, while Section 5 demonstrates the proposed system in controlled laboratory experiments and real applications. Finally, Section 6 concludes with a summary discussion, conclusions, and future work.

2. Three-Medium Refractive Model, Calibration and Adaptation

2.1. Analytical Model

This section describes the methodology followed for the camera calibration and the analytical model that has been developed for the underwater camera-dome system. To obtain the camera's intrinsic and extrinsic parameters in air (single medium), a standard camera calibration procedure (without the hemispherical dome) was followed, as described in [7]. Since the camera is housed behind another medium (acrylic dome), the camera parameters change; thus, the pinhole model of the camera is no longer valid and a new calibration procedure is required. Therefore, the model described in this section has been developed for camera-dome calibration. As is well known from Snell's law, when a light ray changes medium it is refracted by a certain angle. Figure 2 shows the propagation of light from a light source $U$ located in the external medium (environment); the ray is refracted at two points, $p_{D_S}$ and $A$, before it reaches the CCD sensor. The point $P$ denotes the corresponding pixel that lights up on the image plane, expressed in the $\langle S \rangle$ coordinate system, while $L$ indicates the center of the camera lens with respect to the $\langle S \rangle$ coordinate system. Note that the $\langle S \rangle$ coordinate system has its origin at the center of the dome. The points $U$, $L$, and $P$ are expressed with respect to the $\langle S \rangle$ coordinate system as follows:
$$P = L + R_{(rot)} \cdot \begin{bmatrix} u \\ -f \\ v \end{bmatrix}$$
where $P$ denotes the location of the pixel on the image plane, $L = [X_l, Y_l, Z_l]^T$ is the lens position w.r.t. $\langle S \rangle$, $R_{(rot)}$ is the image-plane rotation matrix, $u$ and $v$ denote the pixel coordinates in the image plane, and $f$ is the focal length of the camera. Hereafter, references to the image plane will be understood as referring to the image plane corresponding to the pinhole model after the camera has been calibrated.
Let $\langle S' \rangle$ be the coordinate system resulting from rotating $\langle S \rangle$ according to $R_{(rot)}$. Thus, in the rotated system:
$$P' = L' + \begin{bmatrix} u \\ -f \\ v \end{bmatrix} = \begin{bmatrix} X_c \\ Y_l \\ Z_c \end{bmatrix} + \begin{bmatrix} u \\ -f \\ v \end{bmatrix}$$
where $P'$ and $L'$ are now expressed in the $\langle S' \rangle$ coordinate system, as indicated in Figure 2. Note that the analysis in the remainder of this subsection takes $\langle S' \rangle$ as the reference coordinate system, unless otherwise stated.
The position vector of a point on the internal dome of radius $r$ with spherical coordinates $(r, \theta, \phi)$ can be expressed in Cartesian coordinates as:
$$x = r\cos\theta\cos\phi, \qquad y = r\sin\theta\cos\phi, \qquad z = r\sin\phi$$
Therefore, the point $A$ that lies on the internal dome surface can be calculated from the position vector $d_{PA}$ as follows:
$$d_{PA}(\lambda_1) = P + \lambda_1 (L - P), \quad \lambda_1 \geq 0.$$
Note that for the appropriate value of the parameter $\lambda_1$, the above equation gives the position vector of the point $A$ on the internal dome surface. The $\lambda_1$ parameter can be determined using Equations (2)–(4) as follows:
$$X_c + u + \lambda_1 (X_l - X_c - u) = r\cos\theta\cos\phi$$
$$Y_l - f + \lambda_1 f = r\sin\theta\cos\phi$$
$$Z_c + v + \lambda_1 (Z_l - Z_c - v) = r\sin\phi$$
The solution of the system gives the expression for the unknown $\lambda_1$. The angles $\phi$ and $\theta$ can then be evaluated by substituting $\lambda_1$ into the following equations:
$$\phi = \sin^{-1}\!\left(\frac{Z_c + v + \lambda_1 (Z_l - Z_c - v)}{r}\right), \qquad \theta = \sin^{-1}\!\left(\frac{Y_l - f + \lambda_1 f}{r\cos\phi}\right)$$
To avoid singularities in the angle calculations, conversion between Euler angles and quaternions was used in the implementation of the method on the real system. The refraction is governed by Snell's law, which relates the paths of the incident and refracted light with respect to the surface normal of the refractive interface. Thus,
$$\frac{\sin\delta_a}{\sin\delta_{d_1}} = \frac{u_1}{u_2} = \frac{n_d}{n_a}.$$
Therefore, applying Snell's law, the refraction angle is determined as follows:
$$\delta_{d_1} = \sin^{-1}\!\left(\frac{n_a}{n_d}\sin\delta_a\right)$$
where
$$\delta_a = \cos^{-1}\!\left(\frac{d_{PA}\cdot r_{SA}}{|d_{PA}|\,|r_{SA}|}\right).$$
The vector $C_1$ perpendicular to the plane of incidence can be defined by the cross product of the vectors $r_{SA}$ and $d_{PA}$, i.e., $C_1 = r_{SA} \times d_{PA}$. The vector $C_1$ is also perpendicular to the unit vector $\hat{\rho}_2$ that lies along the refracted light ray. Therefore,
$$C_1 \cdot \hat{\rho}_2 = 0$$
$$r_{SA} \cdot \hat{\rho}_2 = \cos(\pi - \delta_{d_1}).$$
Using the unit-vector constraint $\|\hat{\rho}_2\| = 1$, the value of $\hat{\rho}_2$ can be determined by solving the resulting system. Thus, the only unknown needed to evaluate the second refraction point is the parameter $\lambda_2$, which satisfies the equation below:
$$d_{P p_{D_S}}(\lambda_2) = d_{PA} + \lambda_2 \hat{\rho}_2, \quad \lambda_2 \geq 0.$$
To determine the value of the $\lambda_2$ parameter, a similar approach to the one described for the first point is followed, but now on the exterior hemisphere, of radius $R$. Therefore, the $\lambda_2$ parameter can be determined by solving the following equation:
$$\lambda_2^2 + 2\lambda_2 (d_{PA} \cdot \hat{\rho}_2) + |d_{PA}|^2 - R^2 = 0.$$
Once the value of $\lambda_2$ has been calculated, the refraction angles $\delta_{d_2}$ and $\delta_w$ can be determined through Snell's law. Finally, the ray outside the dome, from point $p_{D_S}$ towards $U$, is described by the equation below:
$$d_{P_D U}(\lambda_3) = p_{D_S} + \lambda_3 \hat{\rho}_3, \quad \lambda_3 \geq 0.$$
Note that the point $p_{D_S}$ is calculated from Equation (11). Thus, a generic function for the hemispherical dome model can be provided in the following form:
$$D(u, v) \mapsto (\hat{n}_{D_S}, p_{D_S})$$
where $(u, v)$ are the pixel coordinates on the image plane and $\hat{n}_{D_S}$ is a unit vector directed towards the light source $U$, rooted at the point $p_{D_S}$ on the dome surface, all expressed in the $\langle S \rangle$ coordinate system.
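To make the model concrete, the dome function $D(u, v)$ can be sketched as a forward ray trace using the vector form of Snell's law. The following Python/NumPy sketch is illustrative only (the authors' implementation is in C/C++): the frame convention, the function names, and the assumption that the traced ray passes exactly through the lens center are ours, not the published code.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law: unit incident direction d, unit surface
    normal n oriented against d, indices n1 (incident) and n2 (transmitting)."""
    mu = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    return mu * d + (mu * cos_i - np.sqrt(1.0 - sin2_t)) * n

def sphere_hit(p, d, radius):
    """Point p + lam*d with |p + lam*d| = radius and lam > 0 (cf. Eq. (12))."""
    b = np.dot(p, d)
    lam = -b + np.sqrt(b**2 - (np.dot(p, p) - radius**2))
    return p + lam * d

def dome_model(u, v, L, f, r, R, n_a, n_d, n_w):
    """D(u, v) -> (n_DS, p_DS): trace pixel (u, v) out through the dome."""
    P = L + np.array([u, -f, v])             # pixel position on the image plane
    d1 = (L - P) / np.linalg.norm(L - P)     # ray from pixel through the lens
    A = sphere_hit(P, d1, r)                 # point A on the inner dome surface
    d2 = refract(d1, -A / np.linalg.norm(A), n_a, n_d)         # air -> acrylic
    p_DS = sphere_hit(A, d2, R)              # point p_DS on the outer surface
    d3 = refract(d2, -p_DS / np.linalg.norm(p_DS), n_d, n_w)   # acrylic -> water
    return d3, p_DS                          # (unit direction n_DS, point p_DS)
```

Evaluating such a function once per pixel yields the lookup table used for image rectification, while leaving the external refractive index (here `n_w`) as a free parameter supports the online estimation described in Section 2.2.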

2.2. Model Calibration and Adaptive Refractive Index

The accuracy of the model depends heavily on the accuracy of the various dome parameters used. For this reason, a procedure for the calibration of the analytical dome model was devised and is described below. The procedure consists of two steps. In the first step, a series of images of chessboards was taken in air from different, known locations with respect to $\langle S \rangle$, as shown in Figure 3 and Figure 4. The images were analyzed using ROS and OpenCV, and a file was produced containing a series of image-plane points matched with their corresponding $(x, y, z)$ coordinates in 3D space. This file was used for the second step of the dome calibration procedure. For the second step, a function projecting each image-plane point through the dome onto the corresponding plane of known distance from the dome was developed. The function contains the best-known values for all dome parameters, including the image-plane position and orientation, the lens position with respect to the image plane, the dome size, and the refractive index of each medium. More specifically, the parameter values that were used are indicated in Table 1, and the intrinsic camera matrix is shown below:
$$K = \begin{bmatrix} 345.2356 & 0 & 365.0749 \\ 0 & 378.1393 & 302.6237 \\ 0 & 0 & 1 \end{bmatrix}.$$
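The chessboard point extraction of the first calibration step can be sketched with OpenCV as follows. This is a minimal sketch: the pattern size, the square size, and the known board placement `T_board_to_S` (a 4×4 homogeneous transform) are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

PATTERN = (9, 6)          # inner chessboard corners (assumed)
SQUARE = 25.0             # square size in mm (assumed)

# Template of corner positions on the chessboard plane, in the board frame.
board = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
board[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def chessboard_points(image_bgr, T_board_to_S):
    """Return (pixel, 3-D point in <S>) pairs for one calibration image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return []
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    # Map the board-frame corners to <S> using the known board placement.
    pts_S = (T_board_to_S[:3, :3] @ board.T).T + T_board_to_S[:3, 3]
    return list(zip(corners.reshape(-1, 2), pts_S))
```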
To determine each point $U$ and perform the camera calibration, each pixel from the CCD sensor is traced towards a plane at a known position from the origin of the dome $\langle S \rangle$. The plane (chessboard placement) is placed perpendicular to the y-axis of the ROV, as shown in Figure 1a. The measured distance from the ROV to the chessboard is taken from the center of the dome, as can be seen in Figure 3.
The chessboard plane $P_{cb}$ in the $\langle S \rangle$ coordinate system is provided by the plane equation:
$$r_{b,i} \cdot \hat{n}_b^S = d_b^S$$
where $r_{b,i}$ is the position vector of a point lying on $P_{cb}$, and $d_b^S$, $\hat{n}_b^S$ can be determined since $P_{cb}$ is known. Combining Equations (13) and (15) eliminates the $\lambda_3$ parameter and gives the expression below:
$$p_{b,i}^S = p_{D,i}^S + \frac{d_b^S - p_{D,i}^S \cdot \hat{n}_b^S}{\hat{n}_{D,i}^S \cdot \hat{n}_b^S}\,\hat{n}_{D,i}^S$$
where $p_{b,i}^S$ is the intersection point between the line $d_{P_D U}$ and $P_{cb}$. As the chessboard and its squares are of known dimensions and placement, the actual point $p_{act}$ has known $(x, y, z)$ coordinates.
An error function was developed to represent the difference between the actual and the estimated 3D points ($p_{act} - p_{b,i}$). The Levenberg–Marquardt algorithm [26] was used to minimize the error over the function parameter values, thus fine-tuning the parameters and obtaining the minimum error based on the input points. In order to determine the refractive index of the medium during normal operation of the ROV, a target is attached at a known location in view of the image plane (attached on the ROV), as shown in Figure 5 (lower right corner). A similar algorithm to the one used for the dome calibration above was developed by the authors [4], with the only variable parameter being the refractive index of the medium (outside the dome). A background process runs the algorithm while the ROV is operated, indicating the estimated refractive index on-screen.
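A minimal sketch of the second step, assuming SciPy's Levenberg–Marquardt implementation in place of levmar [26] and reusing `dome_model` from the sketch in Section 2.1; the tuned parameter subset, the starting point `x0`, and the constants `R_IN`, `R_OUT`, `N_AIR`, `N_ACRYLIC` are placeholders:

```python
import numpy as np
from scipy.optimize import least_squares

def ray_plane(p0, d, n_hat, dist):
    """Intersect the ray p0 + lam*d with the plane r.n_hat = dist (Eq. (16))."""
    lam = (dist - p0 @ n_hat) / (d @ n_hat)
    return p0 + lam * d

def residuals(params, pixels, pts_actual, n_board, d_board):
    """Stacked differences p_act - p_b,i over all matched chessboard points."""
    L, f, n_ext = params[0:3], params[3], params[4]  # tuned subset (illustrative)
    errs = []
    for (u, v), p_act in zip(pixels, pts_actual):
        n_DS, p_DS = dome_model(u, v, L, f, R_IN, R_OUT,
                                N_AIR, N_ACRYLIC, n_ext)
        errs.append(p_act - ray_plane(p_DS, n_DS, n_board, d_board))
    return np.concatenate(errs)

# Levenberg-Marquardt fine-tuning of the dome parameters
# (pixels, pts_actual, x0, n_board, d_board prepared beforehand).
sol = least_squares(residuals, x0, method="lm",
                    args=(pixels, pts_actual, n_board, d_board))
```

The same routine with only `n_ext` left free corresponds to the online refractive-index estimation running in the background during operation.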

3. Laser Vision System (LVS)

3.1. Approach

This section describes the methodology for the development of the LVS (Figure 1), covering both the hardware and the accompanying algorithms. The aluminium cases and the accompanying electronics for the LVS are custom-made; they were designed and built by the authors for the needs of underwater operations. Since aquacultures are marine habitats, the laser specifications were chosen to be safe and to reduce strain on the fish, while having the appropriate characteristics for marine operations. Several studies on light absorption in seawater show that red light (high wavelength) is absorbed first and that absorption is minimal between the green and blue wavelengths [17,27]. The lasers used in the proposed system have the following specifications: wavelength 532 nm with output power <20 mW. The lasers are mounted in different directions to cover a wide range of applications, such as observing the seabed and the sea surface, obstacle avoidance, object geometry recovery, and range measurements of particular targets. More specifically, a threshold filter is applied on the rectified image, as described in Section 5.2, using the OpenCV library [28] to detect the laser light in the camera image. Since the proposed LVS consists of three lasers, the point cloud must be classified by laser, enabling the identification of the points belonging to each laser. Thus, the thresholded image is partitioned into three areas (one for each laser), while an image mask is applied to eliminate the intersecting points, separating the lasers. The procedure for recovering the distance measurements from the laser images is as follows:
Let $p_{P_i}$ be the $i$'th image-plane pixel corresponding to a laser reflection on the target. Then, from the dome model (Equation (14)), we have the following expression:
$$D(p_{P_i}) \mapsto (\hat{n}_{D_i}^S, p_{D_i}^S).$$
Hence, the line $l_i$ connecting the point $p_{D_i}^S$ with the reflection target $U$ is provided by:
$$l_i(\lambda) = p_{D_i}^S + \lambda \hat{n}_{D_i}^S, \quad \lambda \geq 0.$$
The plane of laser $\ell$ in the $\langle S \rangle$ coordinate system is provided by:
$$r \cdot \hat{n}_\ell^S = d_\ell^S$$
where $r$ is any point lying on the laser plane, and the parameters $\hat{n}_\ell^S$ and $d_\ell^S$ of each laser $\ell$ are determined via the calibration procedure. Then, the target position $U$ in $\langle S \rangle$, denoted by $p_{w,i}^S$, is determined by taking the intersection of the line $l_i$ with the laser plane, eliminating $\lambda$:
$$p_{w,i}^S = p_{D,i}^S + \frac{d_\ell^S - p_{D,i}^S \cdot \hat{n}_\ell^S}{\hat{n}_\ell^S \cdot \hat{n}_{D,i}^S}\,\hat{n}_{D,i}^S$$
where $d_\ell^S$, $\hat{n}_\ell^S$ are the calibration parameters of each laser $\ell$. Equation (14) combined with Equation (20) provides a target relative localization function of the form:
$$T(p_x, p_y) \mapsto p_w^S.$$
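The full pixel-to-point pipeline of Equations (17)–(21) reduces to one ray-plane intersection per thresholded pixel. A brief sketch, assuming the dome model is available as a callable (e.g., the `dome_model` sketch of Section 2.1 wrapped with fixed parameters):

```python
import numpy as np

def localize(pixels, n_l, d_l, dome):
    """T(px, py) -> p_w^S for each laser pixel (Eqs. (17)-(21)).

    pixels: iterable of (u, v); (n_l, d_l): calibrated plane of laser l;
    dome: callable (u, v) -> (n_D, p_D) implementing Eq. (17)."""
    pts = []
    for u, v in pixels:
        n_D, p_D = dome(u, v)
        lam = (d_l - p_D @ n_l) / (n_l @ n_D)   # eliminate lambda, Eq. (20)
        pts.append(p_D + lam * n_D)
    return np.asarray(pts)                      # point cloud p_w^S
```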

3.2. Relative 3-D Posture Estimation to Mesh-like Targets

To estimate the relative posture with respect to a mesh-like target, the adopted approach is to fit the LVS measurements to a 3-D surface and then deduce the relative posture to the surface, as shown in Figure 6. In the current work, we fit the LVS measurements from the target reflections of at least two line lasers to a 3-D plane.
Assume a line laser hits a target in front of the ROV camera sensor. The light reflection from the target produces a number of highlighted pixels $n_\ell$ on the image plane, giving a matrix $P_\ell$ of dimensions $n_\ell \times 2$. Each of the line lasers used in the LVS can potentially produce a number of points (when it hits a target) that lie on a known plane (according to the position of the lasers). Therefore, the $P_\ell$ matrix contains the pixel coordinates of the $n_\ell$ pixels. Then, from Equation (21), and with a slight abuse of notation, $T(P_\ell) = P_{w,\ell}^S$ are the 3-D positions of the target's laser reflections captured on the CCD sensor, expressed in $\langle S \rangle$. Note that the lasers are set up in such a way that the produced laser planes are non-coplanar, i.e., $\hat{n}_i^S \times \hat{n}_j^S \neq 0$, $i \neq j$, in order to avoid ill-conditioning issues. Given $k \geq 2$ non-coplanar line lasers $\ell \in \{1, \ldots, k\}$, we can produce the augmented matrix:
$$P_w^S = \begin{bmatrix} P_{w,1}^S \\ \vdots \\ P_{w,k}^S \end{bmatrix}.$$
Since the fitted target plane cannot physically pass through the origin, the plane equation in the $\langle S \rangle$ frame can be written as:
$$r^S \cdot \frac{\hat{n}_p^S}{d^S} = 1$$
where $r^S$ is any vector connecting $\langle S \rangle$ with a point on the target. Then,
$$\frac{\hat{n}_p^S}{d^S} = (P_w^S)^{\dagger}\,\mathbf{1}$$
where $(P_w^S)^{\dagger}$ is the left pseudo-inverse of $P_w^S$, which is guaranteed to be non-singular as long as the lasers are non-coplanar and have non-collinear target projections, and $\mathbf{1}$ is a vector of ones. From this, we can immediately extract the relative posture with respect to the mesh-like target, comprising the plane's normal $\hat{n}_p^S$ and the distance to the plane $d^S > 0$, by noting that $\|\hat{n}_p^S\| = 1$. Note that due to the planar symmetry assumption, only the relative pitch and relative yaw can be determined; i.e., rotation of the ROV along the roll direction cannot be recovered.
Now, using the unit direction vectors $\hat{x}^S$, $\hat{y}^S$, $\hat{z}^S$ of the coordinate system $\langle S \rangle$, the relative pitch can be extracted by projecting the plane's normal vector onto the $y$–$z$ plane, as:
$$\theta^S = \frac{\pi}{2} - \cos^{-1}\!\left(\frac{(\hat{n}_p^S)^T (I - \hat{x}^S (\hat{x}^S)^T)\,\hat{z}^S}{\|(\hat{n}_p^S)^T (I - \hat{x}^S (\hat{x}^S)^T)\|}\right)$$
and the relative yaw can be extracted by projecting the plane's normal vector onto the $y$–$x$ plane, as:
$$\psi^S = \frac{\pi}{2} - \cos^{-1}\!\left(\frac{(\hat{n}_p^S)^T (I - \hat{z}^S (\hat{z}^S)^T)\,\hat{x}^S}{\|(\hat{n}_p^S)^T (I - \hat{z}^S (\hat{z}^S)^T)\|}\right).$$
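The plane fit of Equation (24) and the angle extraction of Equations (25) and (26) amount to a few lines of linear algebra. A sketch, assuming $\hat{x}^S = [1, 0, 0]^T$ and $\hat{z}^S = [0, 0, 1]^T$:

```python
import numpy as np

def fit_plane(P_w):
    """Eq. (24): least-squares plane r.(n/d) = 1 through the stacked cloud."""
    v = np.linalg.pinv(P_w) @ np.ones(P_w.shape[0])   # = n_hat / d
    d = 1.0 / np.linalg.norm(v)                       # since |n_hat| = 1
    return v * d, d                                   # (n_hat, d), d > 0

def rel_pitch_yaw(n_hat):
    """Eqs. (25)-(26): relative pitch and yaw from the plane normal."""
    x_hat, z_hat = np.array([1.0, 0, 0]), np.array([0, 0, 1.0])
    def angle(ax_out, ax_ref):
        proj = n_hat @ (np.eye(3) - np.outer(ax_out, ax_out))
        return np.pi / 2 - np.arccos(proj @ ax_ref / np.linalg.norm(proj))
    return angle(x_hat, z_hat), angle(z_hat, x_hat)   # (theta_S, psi_S)

# Example usage on two stacked laser clouds:
# n_hat, d = fit_plane(np.vstack([P_w_1, P_w_2]))
# pitch, yaw = rel_pitch_yaw(n_hat)
```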

3.3. Automatic Calibration

Underwater vehicles are often exposed to harsh conditions, i.e., seawater (salinity), sun, and pressure cycling. For this reason, the components and the system require regular maintenance, such as cleaning and sealing tests after every operation. Maintenance procedures also require regular disassembly of the LVS, which, in turn, requires re-calibration to ensure accurate readings. In this subsection, an automatic calibration procedure for the LVS is provided. Figure 7 shows the calibration box developed for the LVS re-calibration procedure.
A coordinate system $\langle G \rangle$ is defined for the box. The four box planes ((L)eft, (R)ight, (F)orward, (D)own) are defined as:
$$r_b \cdot \hat{n}_b^G = d_b, \quad b \in \{L, R, F, D\}.$$
The ROV is located at an initially unknown position $S^G$ and with an initially unknown orientation $R_S^G$ in the calibration box. A set of $n_p$ patterns is applied at known positions $P_i^G$, $i \in \{1, \ldots, n_p\}$, in the box. Let $p_{P_i}$ be the pixel corresponding to pattern $i$. Then, from the dome model, we have:
$$D(p_{P_i}) \mapsto (\hat{n}_{D_i}^S, p_{D_i}^S), \quad i \in \{1, \ldots, n_p\}.$$
Hence, the line $l_i$ connecting the point $p_{D_i}^S$ with $P_i$ is provided by:
$$l_i(\lambda_i) = p_{D_i}^S + \lambda_i \hat{n}_{D_i}^S, \quad \lambda_i \geq 0.$$
The plane of laser $\ell$ in the $\langle S \rangle$ coordinate system is provided by:
$$r \cdot \hat{n}_\ell^S = d_\ell^S.$$
The automatic laser calibration then consists of the procedure for determining the parameters $\hat{n}_\ell^S$ and $d_\ell^S$ of each laser $\ell$. To achieve this, the $S^G$ and $R_S^G$ parameters have to be determined first. Thus, for each pattern, the following equation holds:
$$P_i^G = S^G + R_S^G (p_{D_i}^S + \lambda_i \hat{n}_{D_i}^S), \quad i \in \{1, \ldots, n_p\}.$$
Since the rotation matrix $R_S^G$ is a function of three Euler angles, the above equations form a system of $3n_p$ equations with $n_p + 6$ unknowns. This implies that at least 3 patterns are needed to solve the problem. However, additional patterns will increase the accuracy of the calibration, while also removing ambiguity issues. Hence, a cost function is formed:
$$V(S^G, R_S^G, \lambda_1, \ldots, \lambda_{n_p}) = \sum_{i=1}^{n_p} \left\| P_i^G - S^G - R_S^G (p_{D_i}^S + \lambda_i \hat{n}_{D_i}^S) \right\|^2$$
where $S^G$, $R_S^G$, $\lambda_1, \ldots, \lambda_{n_p}$ can be determined by the solution of the nonlinear minimization problem:
$$(S^G, R_S^G, \lambda_1, \ldots, \lambda_{n_p}) = \arg\min V(S^G, R_S^G, \lambda_1, \ldots, \lambda_{n_p}).$$
The next step requires the projection of each laser on at least two separate box planes (it is sufficient that the projected lines on the two planes are different). From the $S^G$, $R_S^G$ calculated in the previous step, and using the plane equations, for each reflected target point $j_b$ from plane $b$, we have:
$$\left( S^G + R_S^G (p_{D_{j_b}}^S + \lambda_{j_b} \hat{n}_{D_{j_b}}^S) \right) \cdot \hat{n}_b^G = d_b.$$
However, since by construction $d_\ell^S \neq 0$, $\lambda_{j_b}$ should also satisfy:
$$(p_{D_{j_b}}^S + \lambda_{j_b} \hat{n}_{D_{j_b}}^S) \cdot \frac{\hat{n}_\ell^S}{d_\ell^S} = 1.$$
Therefore, by solving Equation (34) for $\lambda_{j_b}$ and substituting all $\lambda_{j_b}$'s into Equation (35), we can write each result as a row of the matrix $Y$. Then:
$$\frac{\hat{n}_\ell^S}{d_\ell^S} = Y^{\dagger}\,\mathbf{1}$$
where $Y^{\dagger}$ is the left pseudo-inverse of $Y$; the laser calibration parameters ($\hat{n}_\ell^S$ and $d_\ell^S$) can then be extracted by noting that $\|\hat{n}_\ell^S\| = 1$.
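The two calibration stages, pose recovery via Equation (33) and the laser-plane solve of Equation (36), can be sketched as follows. The Euler-angle pose parameterization via SciPy and the pre-assembled matrix `Y` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def pose_cost(x, P_G, p_D, n_D):
    """Eq. (33): residuals P_i^G - (S^G + R_S^G (p_Di + lam_i n_Di))."""
    S, eul, lams = x[:3], x[3:6], x[6:]
    R = Rotation.from_euler("xyz", eul).as_matrix()
    return np.concatenate(
        [P_G[i] - (S + R @ (p_D[i] + lams[i] * n_D[i]))
         for i in range(len(lams))])

# Stage 1: solve for the ROV pose in the box (needs n_p >= 3 patterns;
# P_G, p_D, n_D prepared from the pattern detections beforehand).
n_p = len(P_G)
x0 = np.concatenate([np.zeros(6), np.ones(n_p)])
sol = least_squares(pose_cost, x0, args=(P_G, p_D, n_D))

# Stage 2: with S^G, R_S^G known, solve Eq. (34) for each lam_jb, stack the
# rows (p_Djb + lam_jb * n_Djb) into Y, then apply Eq. (36):
v = np.linalg.pinv(Y) @ np.ones(Y.shape[0])   # = n_l / d_l
d_l = 1.0 / np.linalg.norm(v)
n_l = v * d_l
```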

4. A Filter for Mesh-like Structures

4.1. Preliminaries

In this section, a spatial filter is developed to discriminate mesh-like structures from other artifacts in the LVS measurements. This is useful when the relative posture is sought with respect to mesh-like structures such as fishnets (Figure 6), where the LVS should be able to discern the difference between fishnet, fish, and air-bubble reflections. The aim of the proposed filter is to only allow reflections from the mesh-like spatial structure that is being observed as a target. Additionally, note the additional (faint) reflections from the water tank surface on the water tank wall. Spatial filtering using the Fourier transform has been reported for the identification of fabric structures in images in [29]. In our approach, we exploit the point cloud produced by the LVS in the $\langle S \rangle$ coordinate system. We assume that multi-path reflections (e.g., light beams that reflect on the water surface and then hit a target) either do not exist or are weaker than single-path reflections; this assumption was observed to hold during the laboratory and sea experiments. Now, observing that all LVS measurements (excluding multi-path reflections) are coplanar, i.e., they reside in the laser plane, a binary image containing the laser-plane reflections, coplanar with the laser plane, can be created. Thus, an appropriate filter can be developed to discern between reflections that belong to the mesh-like structure and ones that do not.

4.2. Approach

Mesh-like structures have the characteristic of being periodic in two dimensions. Intersecting such structures with a straight line yields intersection points that appear with predetermined regularity. Depending on the mesh geometry, a maximum and a minimum distance can be derived for the intersection points as we traverse the intersecting line. Thus, the distance between consecutive intersections will lie between a minimum and a maximum range. This observation is applicable for almost all positions of the intersecting line. Singular positions (i.e., in the sense that such an occurrence has zero probability when randomly placing the intersecting line) occur only when the intersecting line is collinear with edges on the target. Assuming a laser plane hitting such a target, we can define a minimum and a maximum distance between consecutive reflections from the target in 3D space, irrespective of the target's orientation. In the development of the filter, reflections from target features that are collinear with the laser line will be ignored, as such features are considered to be either singularities or non-mesh-like targets. For example, in Figure 8a, the maximum distance between consecutive reflections is the diameter of the hexagon and the minimum distance is the side of the hexagon. In Figure 8b, the maximum distance is the diagonal of the rectangle and the minimum distance is half the diagonal. Hence, every reflection from such targets will always fall between the minimum and the maximum period with one of its neighboring reflections (see the short worked example below).
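As a short worked example of these bounds (the dimensions are illustrative numbers, not the experimental mesh of Section 5):

```python
import math

# Hexagonal mesh (Figure 8a): r_min is the hexagon side, r_max its diameter.
side = 15.0                         # hexagon side in mm (assumed)
hex_r_min, hex_r_max = side, 2.0 * side          # 15 mm and 30 mm

# Rectangular mesh (Figure 8b): r_max is the diagonal, r_min half of it.
w, h = 20.0, 30.0                   # rectangle cell size in mm (assumed)
diag = math.hypot(w, h)
rect_r_min, rect_r_max = diag / 2.0, diag        # ~18 mm and ~36 mm
```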

4.2.1. Laser Plane Image

The first step is to create the binary image of the LVS reflections in the laser plane. To perform this, the point cloud $p_w^S$ that lies on the laser plane is rotated onto the $x$–$y$ plane. First, let us fix an ortho-canonical coordinate system $\langle \ell \rangle$ on the laser plane, with direction unit vectors expressed in the $\langle S \rangle$ frame as $e_{x,\ell}$, $e_{y,\ell}$, $e_{z,\ell}$:
Let
$$e_{z,\ell} = \hat{n}_\ell^S.$$
Then,
$$e_{y,\ell} = \frac{(I - e_{z,\ell} e_{z,\ell}^T)\,e_y^S}{\|(I - e_{z,\ell} e_{z,\ell}^T)\,e_y^S\|}$$
where $e_y^S$ is the $y$-direction in the $\langle S \rangle$ frame (the forward laser-looking direction), and
$$e_{x,\ell} = e_{y,\ell} \times e_{z,\ell}.$$
Now, the rotation matrix from frame $\langle \ell \rangle$ to frame $\langle S \rangle$ is created as:
$$R_\ell^S = \begin{bmatrix} e_{x,\ell} & e_{y,\ell} & e_{z,\ell} \end{bmatrix}^T.$$
Then, the 3-D positions of the laser reflections $P_w^S$ can be translated and rotated to fall on the $x$–$y$ plane of $\langle \ell \rangle$ as follows:
$$p_{w,i}^{S,\ell} = R_\ell^S \left( p_{w,i}^S - e_{z,\ell}\, d_\ell^S \right).$$
Note that the $z$-component of $p_{w,i}^{S,\ell}$ is zero, and $P_w^{S,\ell}$ is a 2D point cloud. Hence, $P_w^{S,\ell}$ can be represented with a binary image.
Define $q_i^x \triangleq p_{w,i}^{S,\ell} \cdot [1\;0\;0]^T$ and $q_i^y \triangleq p_{w,i}^{S,\ell} \cdot [0\;1\;0]^T$. Let $x_L = \min_i q_i^x$, $x_R = \max_i q_i^x$, $y_U = \max_i q_i^y$, $y_D = \min_i q_i^y$ represent the four boundaries of the laser plane. Let $d_x$, $d_y$ represent the spatial discretization of the binary image (this depends primarily on the characteristics of the target, i.e., the thread diameter, assuming the LVS has adequate resolution).
Assume a grid with discretization $d_x$, $d_y$ with $n_x = \lceil \frac{x_R - x_L}{d_x} \rceil$ columns and $n_y = \lceil \frac{y_U - y_D}{d_y} \rceil$ rows. Create an array of index sets $S_{idx}$ as follows:
$$S_{idx}(m, n) = \left\{ i \;\middle|\; m\,d_x \leq q_i^x - x_L < (m+1)\,d_x, \;\; n\,d_y \leq q_i^y - y_D < (n+1)\,d_y \right\}.$$
The binary image is then defined as follows:
$$Im(m, n) = \begin{cases} 1, & |S_{idx}(m, n)| > 0 \\ 0, & |S_{idx}(m, n)| = 0 \end{cases}$$
where | · | denotes the set cardinality.
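A sketch of the rasterization of Equations (27)–(32), returning the binary image together with the index sets $S_{idx}$; square cells $d_x = d_y = d_d$ are assumed:

```python
import numpy as np

def laser_plane_image(P_w, e_x, e_y, e_z, d_l, dd):
    """Rotate the coplanar cloud into the laser plane and rasterize it."""
    R = np.stack([e_x, e_y, e_z])     # rows: plane-frame axes in <S>, Eq. (31)
    q = (P_w - e_z * d_l) @ R.T       # translate then rotate; z-component ~ 0
    q = q[:, :2]
    lo = q.min(axis=0)                # the (x_L, y_D) boundaries
    idx = np.floor((q - lo) / dd).astype(int)
    img = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    sidx = {}                         # S_idx(m, n): point indices per cell
    for i, (m, n) in enumerate(idx):
        img[m, n] = 1
        sidx.setdefault((m, n), []).append(i)
    return img, sidx
```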

4.2.2. Binary Image Processing Filter

The aim of the proposed filter is to identify points that belong to the mesh-like structure. Let $d_\theta$ be the thread diameter of the mesh-like structure. Let $r_{min}$ be the minimum distance between consecutive threads (center-to-center) as these are intersected by the laser plane, and $r_{max}$ the corresponding maximum distance. Then, a reflection from a target thread will have a size of $r_{d_\theta} = \frac{d_\theta}{\cos\theta_{lt}}$, where $\theta_{lt}$ is the angle between the laser plane and the target thread. The choice of $\theta_{lt}$ should be such that $r_{d_\theta} < r_{max}$, otherwise the target periodicity assumption will not be valid. In practice, we want $r_{d_\theta} \ll r_{min}$, which is achievable for typical mesh-like targets assuming an appropriate configuration of the laser-plane roll angle. The starting elements of the filter (see Figure 9) can be constructed by defining three regions via three concentric circles: region A with diameter $r_{d_\theta}$, region B with radius $r_{min} - \frac{r_{d_\theta}}{2}$ that excludes region A, and region C with radius $r_{max} + \frac{r_{d_\theta}}{2}$ that excludes regions B and A.
Regions A, B, and C define three binary-valued structuring elements, $Str_A$, $Str_B$, and $Str_C$, with the same discretization as the binary image, that are 1 in their respective regions and 0 outside.
We can now define the filtering algorithm. The binary image $Im$ is padded with $\rho_{pad} = \lceil r_{max} + \frac{r_{d_\theta}}{2} \rceil$ cells to obtain the image $Im'$. To determine the morphological operations, the required limits have to be established.
Assume now that we have a laser reflection from the center of a thread of a mesh-like target, centered at the $(m, n)$ pixel of $Im'$, and that $Str_A$ is placed at that location, i.e., $Str_A(m, n)$. Considering the grid discretization, and assuming $d_x = d_y = d_d$, the thread reflection will encompass at most $d_{g_\theta} = \lceil \frac{r_{d_\theta}}{d_d} \rceil + 1$ grid cells, and this is the maximum number of cells expected to be active in $Im' \cap Str_A(m, n)$. Then, $Im' \cap Str_B(m, n)$ will be empty and $Im' \cap Str_C(m, n)$ will contain at most $2 d_{g_\theta}$ active cells.
Assume now that we have a laser reflection from the edge of a thread of a mesh-like target, centered at the $(m, n)$ pixel of $Im'$, and that $Str_A$ is placed at that location. Then, from $\lceil \frac{d_{g_\theta}}{2} \rceil$ up to $d_{g_\theta}$ cells will be active in $Im' \cap Str_A(m, n)$. In $Im' \cap Str_B(m, n)$, at most $\lceil \frac{d_{g_\theta}}{2} \rceil$ cells can be active, and $Im' \cap Str_C(m, n)$ will contain at least $\lceil \frac{3 d_{g_\theta}}{2} \rceil + 1$ active cells.
Almost all reflections (excluding singularities) from the mesh-like target will fall within the ranges provided above. A new binary image $Im''$ can now be created that only contains pixels that belong to the mesh-like structure. The set of actual points that correspond to the mesh-like structure can be recovered by feeding the active pixels of $Im''$ into $S_{idx}(m, n)$. The filtering algorithm, provided as Algorithm 1, returns the 3-D positions of the laser reflections originating only from the mesh-like structure, $P_w^{S*}$.
Algorithm 1 Mesh filter algorithm
Require: $Im'$, $Str_A$, $Str_B$, $Str_C$
Ensure: Reflections from the mesh-like structure
1: loop $(m, n) \in \{1, \ldots, n_x\} \times \{1, \ldots, n_y\}$
2:   $(m', n') \leftarrow (m + \rho_{pad}, n + \rho_{pad})$
3:   if $\lceil d_{g_\theta}/2 \rceil \leq |Im' \cap Str_A(m', n')| \leq d_{g_\theta}$ then
4:     if $0 \leq |Im' \cap Str_B(m', n')| \leq d_{g_\theta} - |Im' \cap Str_A(m', n')|$ then
5:       if $\lceil 3 d_{g_\theta}/2 \rceil \leq |Im' \cap Str_C(m', n')| \leq 2 d_{g_\theta}$ then
6:         $Im''(m, n) \leftarrow 1$
7:       end if
8:     end if
9:   end if
10: end loop
11: $k \leftarrow 1$
12: loop $(m, n) \in \{1, \ldots, n_x\} \times \{1, \ldots, n_y\}$
13:   if $Im''(m, n) = 1$ then
14:     $Idx \leftarrow S_{idx}(m, n)$
15:     $L \leftarrow |Idx|$
16:     loop $j \in \{1, \ldots, L\}$
17:       $P_{w,k}^{S*} \leftarrow p_{w, Idx(j)}^S$
18:       $k \leftarrow k + 1$
19:     end loop
20:   end if
21: end loop
22: return $P_w^{S*}$
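A compact sketch of Algorithm 1, using disk structuring elements and window counts in place of explicit morphological operations; the parameterization mirrors Section 4.2.2 and the `img`/`sidx` pair of the rasterization sketch above, while the function names are ours:

```python
import numpy as np

def disk(radius, dd):
    """Binary disk structuring element on the grid with cell size dd."""
    r = int(np.ceil(radius / dd))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    return ((xx**2 + yy**2) * dd**2 <= radius**2).astype(np.uint8)

def mesh_filter(img, sidx, P_w, r_dtheta, r_min, r_max, dd):
    """Algorithm 1 sketch: keep points whose neighborhood counts match a mesh."""
    dg = int(np.ceil(r_dtheta / dd)) + 1          # cells per thread reflection
    strA = disk(r_dtheta / 2.0, dd)               # region A (diameter r_dtheta)
    strB = disk(r_min - r_dtheta / 2.0, dd)       # disk covering regions A + B
    strC = disk(r_max + r_dtheta / 2.0, dd)       # disk covering regions A + B + C
    pad = strC.shape[0] // 2                      # rho_pad in cells
    im_p = np.pad(img, pad)
    keep = []
    for (m, n), ids in sidx.items():              # only active pixels matter
        mm, nn = m + pad, n + pad
        def count(se):
            h = se.shape[0] // 2
            win = im_p[mm - h:mm + h + 1, nn - h:nn + h + 1]
            return int((win * se).sum())
        cA = count(strA)
        cB = count(strB) - cA                     # region B excludes A
        cC = count(strC) - cB - cA                # region C excludes A and B
        if ((dg + 1) // 2 <= cA <= dg and cB <= dg - cA
                and (3 * dg + 1) // 2 <= cC <= 2 * dg):
            keep.extend(ids)
    return P_w[keep]                              # P_w^{S*}
```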

5. Experiments

5.1. Experimental Setup

In this section, the experimental setup for the system and the experimental results obtained during the performance evaluation of the system are presented. The validity of the proposed system is verified both in a laboratory-controlled environment (Figure 10), as well as in real conditions at an offshore aquaculture installation (Figure 11).
A VideoRay Pro 4 (ROV) was used for the experiments. The robot is equipped with a CCD camera, a tilt-compensated compass (TCC) sensor capable of providing roll, pitch, and yaw measurements, the LVS analyzed in this work, and a custom-developed control station. Furthermore, the robot was retrofitted with an inertial measurement unit (IMU) providing its acceleration and angular rates. The default ROV camera resolution is 720 × 576 pixels with a frame rate of 25 fps. The system software was developed in C/C++, using the Robot Operating System (ROS) [30] and OpenCV [28] on Ubuntu Linux 12.04. The PC used for the experiments was a dual-core Intel i7 laptop with 8 GB of RAM. The ROV is powered by a boat-mounted generator through a tether, whereas the laser system is powered by a dedicated battery.

5.2. Experimental Evaluation of the Mesh Filter Algorithm

In this section, the experimental validation of the methodology proposed in Section 4 is presented. Figure 12a depicts a mesh-like target as seen from the ROV camera, while Figure 12b depicts the corresponding thresholded image. Using the hemispherical dome model developed in Section 2, the laser point clouds are extracted. The point cloud corresponding to the horizontal laser is depicted in Figure 13.
As can be seen from Figure 13, the laser point cloud contains reflections from the mesh-like structure, the water tank floor and walls, as well as from the solid (non-mesh) object, and the different reflections can be distinguished within it. Considering only the horizontal laser line for this experiment, the reflections from the water tank floor can be easily recognized and eliminated because they are beyond the limits of the operational range zone. Note that the operational range zone is between 300 mm and 1000 mm, since beyond this range the accuracy provided by the camera (pixels/cm² on the target) and, more importantly, the scattering of the laser beam cause the algorithm to fail, as the (thin) laser-plane assumption is no longer valid. The operational range can be improved using better laser optics for the same medium and camera resolution. Hence, reflections that appear beyond these limits are not taken into account in the mesh filter algorithm. Within the operational range, the algorithm should be able to detect the mesh-like structure and eliminate other artifacts.
In order to identify which of the laser points belong to the mesh-like target, the mesh filter algorithm (Algorithm 1) was used. The mesh-like structure (mesh target) used in the experiment has the following parameters: $d_\theta = 2$ mm, $r_{d_\theta} = 10$ mm, $d_d = 1$ mm, $r_{min} = 15$ mm, $r_{max} = 35$ mm.
Figure 14 depicts the result after applying the mesh filter algorithm. As can be seen, the algorithm has rejected all reflections that did not belong to the mesh-like structure, enabling the identification of the target.

5.3. Experimental Evaluation of the LVS in the Laboratory

Two series of tests were performed in a controlled environment in order to evaluate the performance of the LVS in the laboratory. The tests validated the performance of the LVS for linear distance and angular measurements from a static target in front of the ROV. The first series of experiments involved positioning and aligning the ROV at specific positions in the water tank and comparing the LVS measurements against the known posture of the ROV. The ROV was aligned perpendicular to the water tank wall and was mounted underwater at 374 mm, 438 mm, 525 mm, and 659 mm from the wall. At each position, an array of measurements was taken to determine the distance $d^S$ reported by the LVS. Figure 15 depicts the percentage error in the distance reported by the LVS. As can be seen, the LVS accuracy is within 2.3% of the measured distance.
The second series of experiments involved positioning the ROV at a constant distance from the water tank wall and changing first the pitch and then the yaw angle of the ROV. The obtained measurements were compared against the relative orientation obtained through the TCC sensor. The results are depicted in Figure 16 for the relative pitch validation experiment and in Figure 17 for the relative yaw validation experiment.
As can be seen, in both experiments the orientation error is within 14% of the orientation reported by the TCC sensor. The main source of error in the experiments was the scattering of the line laser beam in the direction transverse to the laser plane. This can be rectified by appropriately tuning the focus of the line laser beams or by using better laser-line optics.

5.4. LVS-Effect of Dome Model

In order to demonstrate the effect of using the dome model developed in this work (Section 2), in contrast to not using it for laser measurements, an experiment was devised and performed in the controlled laboratory environment. In particular, the ROV was mounted in the water tank and a moving target was placed at various predefined range positions from 300 mm to 1250 mm. The measurements were taken at increments of 100 mm from 300 mm to 1200 mm, and then at decrements of 100 mm from 1250 mm to 350 mm, thus providing measurement points every 50 mm along the range. Figure 18 shows the setup and an image of the target as seen from the ROV dome camera.
The experiment was performed using the central horizontal laser. For each image, three points on the laser line image were considered: the leftmost, the central, and the rightmost. Each of the three points was used to calculate the distance (a) using the dome model and (b) without using the dome model. Figure 19 shows a comparison of the measurements with (Dome) and without (No Dome) the dome model for the leftmost point (L) and the three-point average (Av).
More specifically, Figure 19 shows the measurements for the leftmost point and the three-point average, comparing in each plot the measurements taken with and without the use of the dome model. The actual range distance is also plotted for comparison. The best line fit (linear regression) is plotted through the points of each measurement. The dome model measurements are shown to be an almost perfect match to the plotted actual range distances. The measurements taken without the dome model exhibit an increasing error as the range distance increases, with a deviation of over 300 mm at the 1250 mm range.
Figure 20 shows the percentage errors corresponding to the measurements in Figure 19. For the errors, the line fit was used to eliminate random errors of target placement during the experiment and to demonstrate the improvement due to the use of the dome model. Both plots show that the use of the dome model increases the accuracy by a factor of approximately 5. The plots show higher accuracy at the left side of the laser, possibly suggesting a minor misalignment of the laser mounting during the experiment, since the error along the laser line for the whole range of 300 mm to 1250 mm is below 5.3% (maximum at the 300 mm range).

5.5. Testing the LVS System at an Offshore Aquaculture Installation

The LVS was tested at an offshore aquaculture installation (https://youtu.be/BfyOdJnzkkA, accessed on 15 November 2021). The preliminary test described below aimed at identifying the various operational features and parameters required for using the LVS to inspect aquaculture fishnets in actual field scenarios. Figure 21 shows images of aquaculture fishnets in offshore conditions, as captured from the ROV dome camera while using the LVS.
In order to inspect the fishnet, a downward (depth-increasing) motion was performed, while at the same time keeping an approximately constant distance to the fishnet as well as a constant heading (https://youtu.be/HN669_R-Gmk, accessed on 21 November 2021). The motion and measurements of the LVS-equipped ROV are shown in Figure 22 below. The first subplot shows the depth of the ROV as given by the pressure sensor. The second subplot shows the range of the target as given by the LVS; this is the shortest distance from the center of the dome to the target fitted plane. The third subplot shows the readings from the tilt-compensated compass (heading) and the target yaw angle as calculated by the LVS; this is the yaw angle of the target fitted plane with respect to the ROV body.
In Figure 22, there is an evident correlation between the heading and the target yaw angle, as expected. When the ROV yaws rightward, its heading increases. At the same time, an ideal stationary flat-plane target would be perceived by the LVS as rotating leftward, thus increasing its yaw angle. However, in offshore aquaculture applications, the target is far from ideal, with a number of factors that affect the LVS output. Those factors include:
  • Shape of fishnet wall is not a flat surface;
  • Shape of fishnet wall is not the same around the fish cage. Mooring and fishnet stitching and support alter the shape, which would ideally be circular;
  • Fishnet shape dynamically changes with sea currents. Folds and wavy surface features could develop on the fishnet surface during operation as can be seen in Figure 21a;
  • Fishnet shape is altered by marine growth: surface deposits change the shape, and the weight of marine growth on the net and mooring lines pulls unevenly under both static and dynamic current-direction conditions;
  • ROV position being altered by sea currents, forcing control manoeuvres to keep the required position.
Taking the above factors into account, Figure 22 shows that the LVS gives a clear indication of the range to the target surface. In combination with the target yaw angle, also provided by the LVS, these measurements can be used for controlled motion when inspecting the fishnet surface.
Figure 23 provides additional experimental results from a different portion of the fishnet, obtained during the field experiments shown in Figure 11 and Figure 24a. In this experiment, we chose to evaluate the performance of the LVS by implementing a PID controller on the ROV to stabilize its distance and yaw angle with respect to the fishnet, using the LVS as feedback. Note that the ROV pitch (and roll) angles are passively stable and non-actuated. Some correlations between the results are expected, as discussed and analyzed for the previous results.
In addition, operating close to a non-flat target created coupling effects; e.g., as the yaw increased, the laser reflected off a further, curved portion of the net, resulting in a reduced range (minimum distance from the target fitted plane) measurement, while also having parasitic effects on the pitch. This effect can be reduced by aiming the lasers closer to the center of the camera's field of view; however, this would negatively affect the ROV operator's field of view.
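For reference, the feedback loop described above can be sketched as two independent PID loops on the LVS range $d^S$ and relative yaw $\psi^S$ outputs. The gains and setpoints below are placeholders, not the values used in the field trials:

```python
class PID:
    """Minimal PID controller (gains are illustrative placeholders)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev is None else (err - self.prev) / dt
        self.prev = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

range_pid, yaw_pid = PID(1.2, 0.05, 0.3), PID(0.8, 0.02, 0.2)

# Per control cycle: hold a reference stand-off d_ref and zero relative yaw,
# using the LVS outputs d_S and psi_S as feedback (all placeholders).
surge_cmd = range_pid.step(d_ref - d_S, dt)   # forward-thrust command
yaw_cmd = yaw_pid.step(-psi_S, dt)            # yaw-rate command
```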

6. Conclusions

The work presented details the development and experimental validation of algorithms for a novel laser vision system (LVS) for measuring an ROV's posture with respect to both solid and mesh-like targets in underwater environments. An analytical model for three-medium refraction (air, acrylic, water) that takes into account the nonlinear hemispherical optics was developed. The system development was motivated by the need for underwater ROV localization in close proximity to aquaculture fishnets; due to the analytic nature of the solution, it is applicable to operation in media with varying refractive indices, extending its applicable range. The LVS is capable of generating information as point-cloud sets from each laser and, by utilizing the proposed algorithms, high-level information such as the distance and relative orientation of the target with respect to the ROV can be recovered. Due to the regular maintenance required by the system (typical for underwater vehicles and their components), an automatic calibration technique was developed. Furthermore, a spatial filter was developed and demonstrated in order to discriminate mesh-like targets from other artifacts in the LVS measurements. The algorithms developed in this work are appropriate for online operation.
Experimental results from both laboratory and field experiments at an offshore aquaculture installation demonstrate the performance of the system. As evidenced by the experimental results in Section 5.4, the analytical dome model significantly increases the calibration accuracy of the underwater camera-dome model. During the field experiments, the proposed system demonstrated the performance recorded during laboratory testing, while limitations of the system's performance were recorded when operating near the water surface during intense sunlight (Figure 24b) and when operating in an environment with increased biomass. More specifically, due to algae growth and increased scattering caused by higher biomass concentration in the proximity of the fishnet, as can be seen in Figure 21 and Figure 24a, the mesh-like structure assumption was not valid on those portions of the fishnet; hence, the mesh filter algorithm was not applicable there, nor in cases where strong currents caused folding of the fishnet. Those scenarios will be considered in future work.
Further development of this work is directed towards an adaptive version of the proposed mesh filter that will be able to provide estimates of variable biomass growth levels, appropriately adapting the thread diameter $d_\theta$ and other filter parameters. In addition, research towards utilizing the developed localization system to close the feedback loop in motion task planning strategies is being considered, for autonomous underwater structure inspection and maintenance tasks, for operations that include but are not limited to aquaculture fishnet fault detection and repair.

Author Contributions

Conceptualization, C.C.C. and S.G.L.; methodology, C.C.C., G.P.G., and S.G.L.; software, C.C.C. and G.P.G.; validation, C.C.C., G.P.G.; formal analysis, C.C.C.; investigation, C.C.C.; resources, S.G.L.; data curation, C.C.C. and G.P.G.; writing—original draft preparation, C.C.C.; writing—review and editing, S.G.L., G.P.G.; supervision, S.G.L.; project administration, S.G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research Promotion Foundation under research grant ΑΕΙΦΟΡΙΑ/ΓΕΩΡΓΟ/0311(ΒΙΕ)/08. S.G.L. and G.P.G. would also like to acknowledge the contribution of the European Union's Horizon 2020 research and innovation programme under grant agreement 824990 (RIMA). C.C.C. would also like to acknowledge the contribution of the EU H2020 Research and Innovation Programme under GA No. 857586 (CMMI-MaRITeC-X).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Yuh, J. Design and control of autonomous underwater robots: A survey. Auton. Robot. 2000, 8, 7–24.
  2. Whitcomb, L.L. Underwater robotics: Out of the research laboratory and into the field. In Proceedings of the 2000 ICRA, Millennium Conference, IEEE International Conference on Robotics and Automation, Symposia Proceedings, San Francisco, CA, USA, 24–28 April 2000; Volume 1, pp. 709–716.
  3. Bogue, R. Underwater robots: A review of technologies and applications. Ind. Robot. Int. J. 2015, 42, 186–191.
  4. Constantinou, C.C.; Loizou, S.G.; Georgiades, G.P.; Potyagaylo, S.; Skarlatos, D. Adaptive calibration of an underwater robot vision system based on hemispherical optics. In Proceedings of the Autonomous Underwater Vehicles (AUV), 2014 IEEE/OES, Oxford, MS, USA, 6–9 October 2014; pp. 1–5.
  5. Constantinou, C.C.; Loizou, S.G.; Georgiades, G.P. An underwater laser vision system for relative 3-D posture estimation to mesh-like targets. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2036–2041.
  6. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344.
  7. Heikkila, J.; Silvén, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112.
  8. Treibitz, T.; Schechner, Y.Y.; Kunz, C.; Singh, H. Flat refractive geometry. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 51–65.
  9. Jordt-Sedlazeck, A.; Koch, R. Refractive calibration of underwater cameras. Comput. Vis. ECCV 2012, 7576, 846–859.
  10. Kunz, C.; Singh, H. Hemispherical refraction and camera calibration in underwater vision. In Proceedings of the OCEANS 2008, Quebec City, QC, Canada, 15–18 September 2008; pp. 1–7.
  11. Castillón, M.; Palomer, A.; Forest, J.; Ridao, P. State of the Art of Underwater Active Optical 3D Scanners. Sensors 2019, 19, 5161.
  12. Ribo, M.; Brandner, M. State of the art on vision-based structured light systems for 3D measurements. In Proceedings of the International Workshop on Robotic Sensors: Robotic and Sensor Environments, Ottawa, ON, Canada, 30 September–1 October 2005; pp. 2–6.
  13. Roman, C.; Inglis, G.; Rutter, J. Application of structured light imaging for high resolution mapping of underwater archaeological sites. In Proceedings of the OCEANS 2010, Sydney, NSW, Australia, 24–27 May 2010; pp. 1–9.
  14. Czajewski, W.; Sluzek, A. Development of a laser-based vision system for an underwater vehicle. In Proceedings of the International Symposium on Industrial Electronics, ISIE'99, Bled, Slovenia, 12–16 July 1999; Volume 1, pp. 173–177.
  15. Karras, G.C.; Panagou, D.J.; Kyriakopoulos, K.J. Target-referenced localization of an underwater vehicle using a laser-based vision system. In Proceedings of the OCEANS 2006, Boston, MA, USA, 18–21 September 2006; pp. 1–6.
  16. Wang, C.; Shyue, S.; Hsu, H.; Sue, J.; Huang, T. CCD camera calibration for underwater laser scanning system. In Proceedings of the OCEANS 2001, MTS/IEEE Conference and Exhibition, Honolulu, HI, USA, 5–8 November 2001; Volume 4, pp. 2511–2517.
  17. Cain, C.; Leonessa, A. Laser based rangefinder for underwater applications. In Proceedings of the American Control Conference (ACC), Montreal, QC, Canada, 27–29 June 2012; pp. 6190–6195.
  18. Hansen, N.; Nielsen, M.C.; Christensen, D.J.; Blanke, M. Short-Range Sensor for Underwater Robot Navigation using Line-lasers and Vision. IFAC-PapersOnLine 2015, 48, 113–120.
  19. Lopes, F.; Silva, H.; Almeida, J.M.; Martins, A.; Silva, E. Structured light system for underwater inspection operations. In Proceedings of the OCEANS 2015, Genova, Italy, 18–21 May 2015; pp. 1–6.
  20. Muljowidodo, K.; Rasyid, M.A.; SaptoAdi, N.; Budiyono, A. Vision based distance measurement system using single laser pointer design for underwater vehicle. Indian J. Mar. Sci. 2009, 38, 324–331.
  21. Bleier, M.; Nüchter, A. Low-Cost 3D Laser Scanning in Air or Water Using Self-Calibrating Structured Light. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 105.
  22. Matos, G.; Buschinelli, P.; Pinto, T. Underwater Laser Triangulation Sensor Model with Flat Refractive Interfaces. IEEE J. Ocean. Eng. 2020, 45, 937–945.
  23. Palomer, A.; Ridao, P.; Forest, J.; Ribas, D. Underwater laser scanner: Ray-based model and calibration. IEEE/ASME Trans. Mechatronics 2019, 24, 1986–1997.
  24. Chantler, M.J.; Clark, J.; Umasuthan, M. Calibration and operation of an underwater laser triangulation sensor: The varying baseline problem. Opt. Eng. 1997, 36, 2604–2611.
  25. AQUABOT. 2018. Available online: www.aquaculturebot.com (accessed on 10 May 2018).
  26. Lourakis, M.I. A brief description of the Levenberg-Marquardt algorithm implemented by levmar. Found. Res. Technol. 2005, 4, 1–6.
  27. Wozniak, B.; Dera, J. Light Absorption in Sea Water; Springer: New York, NY, USA, 2007; Volume 33.
  28. Bradski, G.; Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library; O'Reilly Media, Inc.: Sebastopol, CA, USA, 2008; ISBN 9780596516130.
  29. Xu, B. Identifying fabric structures with fast Fourier transform techniques. Text. Res. J. 1996, 66, 496–506.
  30. Quigley, M.; Faust, J.; Foote, T.; Leibs, J. ROS: An open-source Robot Operating System. ICRA Workshop Open Source Softw. 2009, 3, 5. Available online: www.ros.org (accessed on 10 September 2021).
Figure 1. Design and integration of the LVS on the VideoRay Pro-4 platform. (a) CAD Model of ROV; (b) Integrated System with Laser Vision System.
Figure 2. Geometric setup of the hemispherical refraction problem. The angles are exaggerated for demonstration purposes.
Figure 3. The measured distance from the ROV to the chessboard is taken from the center of the dome ⟨S⟩.
Figure 4. Chessboard patterns at known locations for the calibration of the dome model.
Figure 5. The refractive index is calculated using the attached target in the field of view of the ROV’s camera.
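As a rough illustration of the principle behind the computation in Figure 5 (and not the system’s actual estimator, which is based on the analytical dome model), Snell’s law at the outer dome surface can be inverted for the external medium’s refractive index once the ray angles on either side of the interface are known. The angles below are hypothetical values chosen for demonstration:

```python
import math

N_ACRYLIC = 1.4900  # dome material index (see Table 1)

def external_index(theta_acrylic, theta_ext):
    """Invert Snell's law, n_acrylic*sin(theta_acrylic) = n_ext*sin(theta_ext),
    for the external medium's refractive index (angles in radians)."""
    return N_ACRYLIC * math.sin(theta_acrylic) / math.sin(theta_ext)

# Hypothetical ray angles recovered from the attached target's image:
print(external_index(math.radians(20.0), math.radians(22.6)))  # ~1.33 (seawater)
```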
Figure 6. Using a line-laser-based LVS to determine the relative posture to a mesh-like target. (a) Mesh-like target; (b) Fishnet structure; (c) Aquaculture fishnet.
Figure 7. Calibration box for the LVS. (a) Laser Calibration Box as seen by the ROV camera; (b) ROV with Laser Vision System.
Figure 8. Mesh-like geometries: (a) hexagonal mesh and (b) diamond mesh.
Figure 9. Structuring element regions.
Figure 10. Laboratory mock-up experiment.
Figure 11. Offshore aquaculture installation experiment. (a) The ROV performing an inspection task at an aquaculture installation; (b) Aquaculture inspection operation by the proposed system.
Figure 12. Mesh-like target. (a) Mesh-like target as seen by the ROV camera; (b) Thresholded Image.
Figure 13. Laser point-cloud data (only for the horizontal laser).
Figure 14. Mesh filter algorithm results. (a) The pink points denote all the reflections, while the blue points indicate the mesh-like structure points after the algorithm has been applied; (b) Mesh-like structure points. Note the ±2.5 mm separation of the same point-target reflection, caused by scattering of the laser beam, e.g., at (50 mm, 643 mm).
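The ±2.5 mm duplicate returns noted in the caption of Figure 14 can, in principle, be collapsed by clustering nearby point-cloud returns. The following is a minimal sketch of such a greedy merge under an assumed tolerance; it is illustrative only and is not the paper’s mesh filter algorithm:

```python
import numpy as np

def merge_scatter_duplicates(points, tol=2.5):
    """Greedily merge point-cloud returns (Nx2 or Nx3 array, in mm) that lie
    within `tol` of each other, replacing each cluster by its mean point."""
    merged = []
    used = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        if used[i]:
            continue
        close = np.linalg.norm(points - points[i], axis=1) <= tol
        used |= close
        merged.append(points[close].mean(axis=0))
    return np.array(merged)

# Example: two returns of the same target separated by laser-beam scattering
pts = np.array([[50.0, 643.0], [50.0, 645.5], [120.0, 640.0]])
print(merge_scatter_duplicates(pts))  # the first two collapse into one point
```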
Figure 15. Distance error percentage versus the actual distance to the target.
Figure 16. Relative pitch validation experiment in the laboratory water tank. The green line denotes the measurements of the LVS, while the orange line denotes the measurement from the TCC.
Figure 17. Relative yaw validation experiment in the laboratory water tank. The green line denotes the measurements of the LVS, while the orange line denotes the measurement from the TCC.
Figure 18. Experimental setup. (a) ROV mounted on the water tank wall; (b) Moving target as seen from the ROV camera.
Figure 19. Measurements with and without the dome model. (a) Left laser points; (b) Average laser points.
Figure 20. Measurement percentage error.
Figure 21. Fishnet images from the ROV camera using the LVS.
Figure 22. Relative yaw from the ROV to the target (fishnet) at sea.
Figure 23. Experimental results of the LVS at an offshore aquaculture installation. (a) Minimum distance to fitted plane; (b) Relative yaw and pitch angles to fitted plane.
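For context, quantities like those plotted in Figure 23 can be obtained by fitting a plane to the LVS point cloud. The sketch below uses a standard SVD least-squares fit; the sensor-frame convention (x forward, y left, z up) and the function names are assumptions for illustration, not the paper’s implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 point cloud via SVD.
    Returns the centroid and the unit plane normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # normal = direction of smallest variance

def relative_pose(points):
    """Minimum distance and relative yaw/pitch of the fitted plane with
    respect to the sensor frame (assumed: x forward, y left, z up)."""
    c, n = fit_plane(points)
    if n[0] < 0:
        n = -n                       # orient the normal along the sensor's +x
    distance = abs(np.dot(c, n))     # point-to-plane distance from the origin
    yaw = np.arctan2(n[1], n[0])
    pitch = np.arctan2(n[2], np.hypot(n[0], n[1]))
    return distance, yaw, pitch
```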
Figure 24. Experimental evaluation at an offshore aquaculture installation. (a) The mesh filter algorithm was not applicable to some portions of the fishnet; (b) The LVS was not usable under intense sunlight.
Table 1. Dome calibration parameters.

Model Parameters              Values
Focal length                  5.485 mm
Dome internal radius          44.25 mm
Dome thickness                5.75 mm
Air refractive index          1.0003
Acrylic refractive index      1.4900
Water refractive index        1.3333
Camera’s rotation             (0.043845, 0.022941, 0.198184) rad
Lens’ position in the dome    (3.0, 41.531687, 3.0) mm
Image plane rotation          (−0.0012367, −0.001061, −0.011228) rad
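The parameters of Table 1 suffice to trace a camera ray through the three refractive media. The following is a minimal sketch of such a three-medium trace using the vector form of Snell’s law, assuming an idealized spherical shell centered at the origin; it illustrates the refraction geometry of Figure 2 and is not the paper’s full analytical dome model (which also involves the camera and image plane rotations listed in the table):

```python
import numpy as np

# Geometry and indices from Table 1; the dome is assumed (for this sketch)
# to be a spherical shell centered at the origin.
R_IN, R_OUT = 44.25, 44.25 + 5.75              # inner/outer radii [mm]
N_AIR, N_ACRYLIC, N_WATER = 1.0003, 1.4900, 1.3333

def refract(d, n, n1, n2):
    """Vector form of Snell's law. d: unit ray direction; n: unit surface
    normal opposing d. Returns the refracted direction, or None on total
    internal reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def hit_sphere(o, d, r):
    """First intersection of an outward-going ray (origin o inside the
    sphere, unit direction d) with a sphere of radius r at the origin."""
    b = np.dot(o, d)
    s = -b + np.sqrt(b**2 - (np.dot(o, o) - r**2))
    return o + s * d

def trace_through_dome(o, d):
    """Trace a camera ray through air -> acrylic -> water; returns the exit
    point on the outer dome surface and the refracted direction in water."""
    p1 = hit_sphere(o, d, R_IN)
    d1 = refract(d, -p1 / np.linalg.norm(p1), N_AIR, N_ACRYLIC)
    p2 = hit_sphere(p1, d1, R_OUT)
    d2 = refract(d1, -p2 / np.linalg.norm(p2), N_ACRYLIC, N_WATER)
    return p2, d2

# Example: lens position from Table 1 with a hypothetical viewing direction.
o = np.array([3.0, 41.531687, 3.0])    # lens position in the dome [mm]
d = np.array([0.0, 1.0, 0.0])          # assumed optical-axis direction
print(trace_through_dome(o, d))
```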
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.