Article

Underwater High-Precision 3D Reconstruction System Based on Rotating Scanning

College of Information Science and Engineering, Ocean University of China, Qingdao 266100, Shandong, China
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(4), 1402; https://doi.org/10.3390/s21041402
Submission received: 17 January 2021 / Revised: 11 February 2021 / Accepted: 12 February 2021 / Published: 17 February 2021
(This article belongs to the Section Intelligent Sensors)

Abstract

This paper presents an underwater high-precision line laser three-dimensional (3D) scanning (LLS) system with a rotary scanning mode, which is composed of a low-illumination underwater camera and a green line laser projector. Underwater 3D data acquisition can be realized over a field of view of 50° (vertical) × 360° (horizontal). We compensate for refraction in the 3D reconstruction system to reduce the angle error caused by the refraction of light at the interfaces between different media and to reduce the impact of refraction on image quality. In order to verify the reconstruction performance of the 3D reconstruction system and the effectiveness of the refraction compensation algorithm, we conducted error experiments on a standard sphere. The results show that the system's underwater reconstruction error is less than 0.6 mm within the working distance of 140 mm~2500 mm, which meets the design requirements. The system can provide a reference for the development of low-cost underwater 3D laser scanning systems.

1. Introduction

In recent years, three-dimensional (3D) terrain data and scene reconstruction technology have gradually been applied to underwater imaging. Because traditional acoustic detection is easily affected by underwater noise, produces non-intuitive images, and has a poor visualization effect, optical 3D reconstruction technology has gradually demonstrated its advantages in underwater operations. For example, in underwater engineering construction [1], laser 3D reconstruction technology can provide more accurate and better-visualized 3D site surveys for underwater construction projects and can be used to check the structure of subsea instruments and the wear status of pipelines. In military applications [2], laser 3D reconstruction technology can be used to detect and monitor underwater targets, providing real-time data support for underwater military rescue. In marine scientific research, laser 3D reconstruction technology can be used to explore seabed resources and map seabed topography [3,4]. In addition, it can also be applied to biological surveys [5,6], archaeology [7,8,9,10], sea bottom topography description, etc. [11,12].
Line laser 3D reconstruction is an active optical measurement technology that belongs to structured light 3D reconstruction. The basic principle is that a structured light projector projects a controllable light stripe onto the surface of the object to be measured, the image is acquired by an image sensor (such as a camera), and the 3D coordinates of the object are calculated by triangulation from the geometric relationship of the system [13]. Besides the line type, there are also light-spot and light-plane laser scanning technologies [14]. The light-spot type scans the object point by point; its disadvantage is that image acquisition and post-processing are time-consuming, so real-time measurement is difficult. The light-plane type projects a two-dimensional pattern onto the object and offers the fastest measurement speed; it usually projects grating stripes and is typically used in air. The line laser scanning sensor system is composed of a low-light underwater camera, a green line laser projector, and a scanning turntable. The calibration of the system includes the calibration of the camera parameters and the calibration of the system structure parameters. Through calibration, we can obtain the conversion relationship between the two-dimensional pixel coordinate system and the three-dimensional camera coordinate system, and then obtain the equation of the laser stripe on the planar target in three-dimensional space. A series of solving and fitting algorithms finally yields the equation of the light plane in the camera coordinate system. In the early stage of line laser sensor calibration, the mechanical method was proposed first and widely used. Later, methods based on cross-ratio invariance were developed, which in principle removed the dependence on a precision platform; a sufficient number of high-precision calibration points can be obtained with a specially designed stereo target to complete the calibration [15,16,17,18]. Other methods, such as the hidden point, three-point perspective model, active vision, and binocular stereo vision methods, have been applied on specific occasions, greatly enriching the calibration methods for line structured light sensors [19,20,21,22,23,24].
Michael Bleier et al. developed a laser scanning system with a wavelength of 525 nm to scan objects in a water tank both statically and dynamically; the obtained point cloud data show that the reconstruction errors in both modes are less than 5 cm [25]. Jules S. Jaffe [26] invented a line laser scanning imaging system for underwater robots, which effectively reduces backscatter and volume scattering by using large camera–source separation, scanning, or pulsed systems. Yang Yu et al. [27] studied a multi-channel RGB laser scanning system and proposed a high-resolution underwater true-color three-dimensional reconstruction method with three primary-color lasers as the active light source, which can scan and reconstruct targets, including human faces, at the millimeter level while restoring the true-color texture information of the target. Xu Wangbo et al. [28] designed and implemented an underwater object measurement system based on multi-line structured light; at a measurement distance of 2.45 m, the average error is 1.6443 mm, which represents high measurement accuracy. At present, many research institutions have done a great deal of work on image and point cloud processing and on water scattering correction, and good experimental results have been obtained. However, in underwater experiments, the influence of the refraction caused by different media surfaces on the viewing angle error and imaging quality still needs more attention [29].
In order to realize high-precision 3D reconstruction of underwater targets and better exploit the advantages of optical 3D reconstruction systems in underwater operations, this paper designs and develops an underwater high-precision 3D reconstruction system based on rotary scanning and proposes a refraction compensation method for the system. Sections 2 and 3 describe in detail the calibration principles and system structure involved in the reconstruction process. The specific experimental process is introduced in Section 4. We then perform an error analysis on a standard sphere of known radius, compare the reconstruction errors of the system in air and in water, and calculate the reconstruction accuracy of underwater objects before and after applying the algorithm. The specific work is described in the following sections.

2. Determination of System Parameters and Establishment of Light Plane

Reconstruction by laser three-dimensional scanning involves coordinate system conversion and a series of calibration algorithms, including the calibration of the internal and external parameters of the camera and the determination of the laser plane equation of the system. Through calibration, the two-dimensional pixel coordinates obtained by the camera can be converted into three-dimensional point cloud coordinates. In the process of obtaining the point cloud, we add a refraction correction algorithm for the waterproof housing to improve the reconstruction accuracy of the system.

2.1. Coordinate System Conversion

The calibration process of the system involves the conversion between the image coordinate system (including the pixel coordinate system and the physical coordinate system), the camera coordinate system, and the world coordinate system. As shown in Figure 1, plane ABC is the laser plane, and f is the camera focal length. We set $O_W X_W Y_W Z_W$ as the world coordinate system, $O_C X_C Y_C Z_C$ as the camera coordinate system, $O_u uv$ as the pixel coordinate system, and $Oxy$ as the physical coordinate system.
The relationship between a point $P_1$ in the world coordinate system and a point $P_2$ in the camera coordinate system is as follows:
\[ \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T, \]
where R and T are the rotation matrix and translation matrix of the camera coordinate system relative to the world coordinate system.
According to the triangle similarity principle, the relationship between the point $P_2$ in the camera coordinate system and $P_3$ in the image coordinate system can be obtained:
\[ Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \]
The conversion relationship between a point in the physical image coordinate system and a point in the pixel coordinate system is shown in the following formula:
\[ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & 0 & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]
where $d_x$ and $d_y$ represent the physical size of a unit pixel along the x-axis and y-axis, respectively. To sum up, the conversion relationship of points from the pixel coordinate system to the world coordinate system can be obtained:
\[ Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & 0 & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \]
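As a concrete illustration of this conversion (a minimal sketch, not the authors' code), the following Python snippet back-projects a laser-stripe pixel through an assumed intrinsic matrix and intersects the resulting camera ray with the calibrated light plane; the numerical values of K and the plane coefficients are hypothetical placeholders.

```python
import numpy as np

def pixel_to_point(u, v, K, plane):
    """Back-project pixel (u, v) onto the calibrated laser plane.

    K     -- 3x3 camera intrinsic matrix
    plane -- (A, B, C, D) of the light plane A*x + B*y + C*z + D = 0
             expressed in the camera coordinate system
    Returns the 3D point in camera coordinates.
    """
    # Ray direction through the pixel (camera centre is the origin)
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    A, B, C, D = plane
    # Scale t so that the point t*ray satisfies the plane equation
    t = -D / (A * ray[0] + B * ray[1] + C * ray[2])
    return t * ray

# Hypothetical example values, for illustration only
K = np.array([[1.2e4, 0.0, 1313.5],
              [0.0, 1.2e4, 748.5],
              [0.0, 0.0, 1.0]])
plane = (0.019, 0.128, -1.0, 1498.9)   # assumed coefficients
print(pixel_to_point(1300.0, 900.0, K, plane))
```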

2.2. Camera Calibration

Camera calibration methods can be divided into traditional calibration methods and self-calibration methods [30,31]. Commonly used methods include the traditional Tsai calibration method and the Zhang Zhengyou calibration method, which lies between the traditional and self-calibration approaches. In this paper, we use the Zhang Zhengyou calibration method. By taking photos of the calibration plate in different orientations, two groups of camera parameters are obtained directly: the external parameters of the spatial transformation and the internal parameters of the camera itself. Using the external and internal parameters, the correspondence between the pixel coordinates of the image and the three-dimensional coordinates in space can be established; that is, three-dimensional spatial information can be obtained from the two-dimensional image [32,33]. The Zhang Zhengyou calibration method does not need to know the motion of the calibration plate, which avoids demanding equipment and complicated operation, and it has higher accuracy and better robustness than self-calibration methods [34,35].
In the process of camera calibration, it is necessary to extract the corner coordinates of the checkerboard planar target. The feature detection algorithms most commonly used in computer vision are Harris, SIFT, SURF, FAST, BRIEF, ORB, etc. The SIFT (Scale-Invariant Feature Transform) algorithm proposed by David G. Lowe extracts feature points in the DoG (Difference of Gaussians) pyramid scale space [36]. Its advantages are stable features and rotation invariance; its disadvantage is a weak ability to extract feature points on targets with smooth edges. The SURF (Speeded-Up Robust Features) algorithm [37] is an improvement of the SIFT algorithm that Lowe proposed in 1999; it improves the execution efficiency of the algorithm and makes its application in real-time computer vision systems possible. ORB (Oriented FAST and Rotated BRIEF) is a fast algorithm for feature point extraction and description [38,39]. It was proposed by Ethan Rublee, Vincent Rabaud, et al. in 2011 in the paper "ORB: An Efficient Alternative to SIFT or SURF"; they use the FAST algorithm for feature point extraction and the BRIEF algorithm to describe the feature points. By combining and improving on the advantages of the FAST [40,41] and BRIEF [42,43] algorithms, namely fast feature point detection and simple description, ORB overcomes the computational complexity of SIFT and shortcomings such as the lack of rotation invariance, the lack of scale invariance, and sensitivity to noise.
This paper uses the classic Harris corner detection algorithm, which has the advantage of simple computation and can easily capture the gray-level changes and translational and rotational changes of planar images. Experimental verification shows that the Harris algorithm is suitable for the underwater rotating scanning system proposed in this paper and can extract a limited number of feature points on a planar target. In addition, corner points can be accurately detected under noise interference, so the algorithm has high stability and robustness. The basic idea is that the human eye usually recognizes a corner within a small local area or window [44]. If moving such a window in any direction causes a large change in the gray level of the area inside the window, a corner is considered to lie within the window. The general steps are as follows:
Firstly, calculate the image matrix $M$: each pixel of the image is filtered with the horizontal and vertical difference operators to obtain $I_x$ and $I_y$, and the four elements of $M$ are then obtained:
\[ M = \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \]
Then, the four elements of $M$ are filtered by Gaussian smoothing to obtain a new $M$. The discrete two-dimensional zero-mean Gaussian function is:
\[ Gauss = \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right) \]
Use $M$ to calculate the corner response $Cim$ for each pixel:
\[ Cim = \frac{I_x^2 I_y^2 - (I_x I_y)^2}{I_x^2 + I_y^2} \]
When a point satisfies that Cim is greater than the threshold and is a local maximum in a neighborhood, it can be considered as a corner.
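The following Python sketch illustrates these steps with SciPy filters; the smoothing parameter and response threshold are placeholder values rather than those used in the paper, and an off-the-shelf routine such as OpenCV's cv2.cornerHarris could be substituted.

```python
import numpy as np
from scipy import ndimage

def harris_corners(gray, sigma=1.0, threshold=1e6):
    """Corner detection following the steps above (illustrative sketch).

    gray -- 2D float image array
    Returns an array of (row, col) corner coordinates.
    """
    # Horizontal and vertical difference filtering
    Ix = ndimage.sobel(gray, axis=1)
    Iy = ndimage.sobel(gray, axis=0)

    # Elements of M, smoothed with a Gaussian window
    Ixx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Iyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Ixy = ndimage.gaussian_filter(Ix * Iy, sigma)

    # Corner response Cim = det(M) / trace(M)
    cim = (Ixx * Iyy - Ixy ** 2) / (Ixx + Iyy + 1e-12)

    # Keep points above the threshold that are local maxima in a 3x3 neighbourhood
    local_max = ndimage.maximum_filter(cim, size=3)
    mask = (cim == local_max) & (cim > threshold)
    return np.argwhere(mask)
```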
Some of the extraction results are shown in Figure 2, and the pixel coordinates of some corner points of the calibration board are shown in Table 1.

2.3. Calibration of System Structure Parameters

The calibration of the sensor structure parameters determines the equation of the laser plane relative to the camera. The main calibration methods include the mechanical adjustment method [45], the filament scattering method [46,47], and the cross-ratio invariance method [48]. In terms of the calibration target, these methods can be divided into three-dimensional target, planar target, one-dimensional target, and target-free methods [49,50,51,52]. In this paper, a new line laser calibration method is used. By combining the equation of the line formed by the optical center and a point on the light stripe with the plane equation of the target in the camera coordinate system, the equation of the stripe line in the camera coordinate system can be obtained. Finally, the equation of the light plane in the camera coordinate system is obtained by the least squares method; let the plane equation be $Ax + By + Cz + D = 0$. The specific algorithm flow is shown in Figure 3.
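As a minimal sketch of the final fitting step (assuming the 3D stripe points in the camera coordinate system have already been obtained by the procedure above), the light plane can be fitted by least squares via an SVD; this is illustrative and not the authors' implementation.

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares fit of A*x + B*y + C*z + D = 0 to laser-stripe points.

    points -- (N, 3) array of 3D points on the light plane in camera coordinates
    Returns (A, B, C, D) with the normal normalised to unit length.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The singular vector with the smallest singular value of the centred
    # points is the plane normal
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return (*normal, d)

# Hypothetical stripe points, for illustration only
pts = np.array([[0.0, 0.0, 1500.0],
                [10.0, 0.0, 1500.2],
                [0.0, 10.0, 1501.3],
                [10.0, 10.0, 1501.5]])
print(fit_light_plane(pts))
```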

2.4. Refraction Compensation of Underwater 3D Reconstruction System

When working underwater, the instruments are sealed in a waterproof housing, so the underwater target and the laser scanning system are located in media with different refractive indices. When the camera photographs an underwater object, the light passes through the water–glass interface and then the glass–air interface of the flat glass waterproof cover; after being refracted twice, the object is finally imaged on the image plane of the camera. Starting from the points on the image plane of the camera, we trace the refracted light path backwards to find the intersection point; this intersection point is the actual image point of the measured object with the effect of refraction removed [53,54,55,56]. Since the light is refracted twice at the waterproof cover and the cover is thin (its thickness is negligible), the imaging process in water is simplified as shown in Figure 4, where the plane of the waterproof cover is parallel to the imaging plane of the camera.
In view of the above situation, this section compensates the system in two respects: the offset compensation of the imaging point on the CCD image plane and the compensation of the laser plane at the sealing glass.

2.4.1. Refraction Compensation of Light Plane

As shown in Figure 4, the light plane A in the air is refracted by the glass surface to obtain the underwater light plane B. $\theta_1$ is the laser projection angle, $\theta_2$ is the angle between light plane A and the normal, $\theta_3$ is the angle between the normal to A and the normal to C, $\theta_4$ is the angle between light plane B and the normal to C, and $\theta_5$ is the angle between the normal to B and the normal to C.
Suppose the normal vector of the refractive surface is $(0, 0, 1)$, the normal vector of light plane A is $(A, B, C)$, the normal vector of light plane B is $(A', B', C')$, and the relative refractive index of water with respect to air is $n$. From the law of refraction, we can get that:
\[ n = \frac{n_1}{n_3} = \frac{\sin\theta_1}{\sin\theta_3} = \frac{\cos\theta_3}{\cos\theta_5} \]
where $n_1$ is the refractive index of air and $n_3$ is the refractive index of water.
\[ \frac{C}{\sqrt{A^2 + B^2 + C^2}} = n \frac{C'}{\sqrt{A'^2 + B'^2 + C'^2}} \]
Since the normal vector of the light plane B is a unit vector, then:
\[ A'^2 + B'^2 + C'^2 = 1 \]
According to (6), it can be concluded that:
\[ C' = \frac{C}{n\sqrt{A^2 + B^2 + C^2}} \]
\[ \begin{bmatrix} A' & B' & C' \end{bmatrix}^T = x \begin{bmatrix} A & B & C \end{bmatrix}^T + y \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T \]
According to the simultaneous Formulas (11) and (12), it can be concluded that:
\[ \begin{cases} A' = \sqrt{\dfrac{A^2 n^2 (A^2 + B^2 + C^2) - A^2 C^2}{n^2 (A^2 + B^2 + C^2)(A^2 + B^2)}} \\[2ex] B' = \dfrac{A' B}{A} \end{cases} \]
The coordinates of the intersection point D of light plane A, light plane B, and the glass surface are $(0, H\tan\theta_1, H)$, where $H$ is the distance from the camera optical center to the glass surface, and the equation of light plane B is as follows:
\[ A'x + B'y + C'z + D' = 0 \]
Substituting the coordinates of point D into the two plane equations, we can get the following results:
\[ \begin{cases} B H \tan\theta_1 + C H + D = 0 \\ B' H \tan\theta_1 + C' H + D' = 0 \end{cases} \]
After simplification, we can get the following result:
\[ D' = -\left( B' \tan\theta_1 + C' \right) H = \frac{B'(CH + D)}{B} - C'H \]
In conclusion, the laser plane equation after refraction can be calculated:
\[ A'x + B'y + C'z + D' = 0 \]
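A small Python sketch of this plane compensation, following the relations reconstructed above, is given below; the refractive index, glass distance H, projection angle, and plane coefficients in the example are assumed values, not the system's calibrated parameters.

```python
import numpy as np

def refract_light_plane(plane, n=1.333, H=100.0, theta1=np.radians(30.0)):
    """Sketch of the light-plane refraction compensation derived above.

    plane  -- (A, B, C, D) of the in-air plane A*x + B*y + C*z + D = 0
    n      -- relative refractive index of water w.r.t. air (assumed value)
    H      -- optical-centre-to-glass distance in mm (assumed value)
    theta1 -- laser projection angle in radians (assumed value)
    Returns (A', B', C', D') of the underwater light plane.
    """
    A, B, C, D = plane
    S = A * A + B * B + C * C
    C_p = C / (n * np.sqrt(S))
    # A' from the closed-form expression; its sign is matched to A here
    # (the square root leaves the sign ambiguous, so this is an assumption)
    A_p = np.copysign(
        np.sqrt((A * A * n * n * S - A * A * C * C) /
                (n * n * S * (A * A + B * B))), A)
    B_p = A_p * B / A                      # B' = A'B / A (requires A != 0)
    # D' from the common point (0, H*tan(theta1), H) on both planes
    D_p = -(B_p * H * np.tan(theta1) + C_p * H)
    return A_p, B_p, C_p, D_p

# Hypothetical in-air plane coefficients, for illustration only
print(refract_light_plane((0.019, 0.128, -1.0, 1498.9)))
```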

2.4.2. CCD Image Coordinate Refraction Compensation

It can be seen from Figure 4 that the imaging point of the underwater target $P_w$ on the image plane is $P(u, v)$. If the target were in air, the direct imaging point would be $P'(u', v')$. There is a relationship between $P$ and $P'$:
\[ (u', v') = \eta (u, v) \]
For each measured point, refraction correction can be realized as long as $\eta$ is obtained:
\[ \eta = \frac{\tan\theta_7}{\tan\theta_6} = \frac{(EF\tan\theta_8 + GF)/O_C F}{IJ/O_C J} \]
and because:
\[ IJ = \sqrt{u^2 + v^2} \]
\[ O_C J = f \]
where $f$ is the focal length of the camera and the point $P(x_c, y_c, z_c)$ is known; when the condition $Z_W \gg H$ is satisfied, the result is as follows:
\[ \eta = \frac{\left(1 - \frac{H}{Z_C}\right)\tan\theta_8 + \frac{H}{Z_C}\tan\theta_6}{\sqrt{u^2 + v^2}/f} \approx \frac{\tan\theta_8}{\sqrt{u^2 + v^2}/f} \]
Substituting the results of refraction compensation into the camera calibration program and coordinate conversion formula, the compensated three-dimensional coordinates can be obtained.
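The sketch below shows one way the scale factor $\eta$ could be applied to an observed pixel, based on our reading of the simplified formula above; measuring the radial offset from the principal point is our assumption, and all numeric inputs are placeholders.

```python
import numpy as np

def correct_pixel(u, v, u0, v0, f, H, Zc, theta6, theta8):
    """Sketch of the image-point refraction correction using the factor eta.

    (u0, v0)       -- principal point in pixels (assumed known from calibration)
    f              -- focal length in pixels
    H              -- optical-centre-to-glass distance; Zc -- target depth
    theta6, theta8 -- ray angles defined in Figure 4, in radians
    Returns the corrected pixel coordinates.
    """
    # Work in coordinates relative to the principal point (our assumption)
    du, dv = u - u0, v - v0
    r = np.hypot(du, dv)
    # Scale factor eta from the simplified expression above
    eta = ((1.0 - H / Zc) * np.tan(theta8) + (H / Zc) * np.tan(theta6)) / (r / f)
    # Move the observed point to its refraction-free position
    return u0 + eta * du, v0 + eta * dv
```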

3. Rotary Scanning System

The line laser 3D rotary scanning system mainly consists of an optical system and a mechanical structure. The optical system is composed of a laser, a camera, and an LED, as shown in Figure 5.
The distance between the camera and the laser is 250 mm; they are installed in two independent watertight chambers and fixed on both sides of the support frame to maintain their relative position. The turntable is controlled by the host computer, and the system can scan at a fixed position within the range of 50° × 360°. The specific model of the system is shown in Figure 5. Parameters of the camera, the laser, and the overall system performance are listed in Table 2.

4. Performance Tests

4.1. Three-Dimensional Reconstruction in Air

We choose a checkerboard with a square side length of 25 mm as the calibration target. By changing the rotation and tilt angles of the target, 12 pictures of the checkerboard in different poses in the camera coordinate system are obtained. Using Zhang Zhengyou's calibration method, the camera's internal parameters, external parameters, and distortion coefficients are obtained. The whole calibration process includes original image acquisition, chessboard corner extraction, reprojection error evaluation, and other steps. The result is shown in Figure 6.
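For reference, a typical implementation of this calibration step with OpenCV (which implements Zhang Zhengyou's method) is sketched below; the checkerboard pattern size and image path are placeholders rather than the exact values used in the experiments.

```python
import glob
import cv2
import numpy as np

pattern = (8, 6)          # assumed number of inner corners per row/column
square = 25.0             # checkerboard square side length in mm

# 3D object points of the checkerboard corners in the target plane
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):        # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", rms)
print("intrinsic matrix:\n", K)
```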
We placed the fixture in the air for a scanning experiment. Through the coordinate transformation and the solution of the light plane equation, we can finally obtain the three-dimensional point cloud coordinates of the fixture. After noise reduction, the final three-dimensional point cloud of the fixture is shown in Figure 7b.

4.2. 3D Reconstruction of Underwater Target

The above experimental results show that the system can extract the 3D point cloud coordinates of the target and reconstruct the features of the object, which verifies the feasibility of the system in air. The next step is to put the scanning system in water and calibrate it again. It is known that light refracts when passing through different media, so underwater calibration can better reduce the influence of laser refraction on the reconstruction accuracy and allows the conch in the water to be scanned to verify the reconstruction performance of the underwater system. First, we calibrate the chessboard target underwater, as shown in Figure 8.
After calibration, the internal parameter matrix of the camera is as follows:
\[ \mathrm{Intrinsic\;Matrix} = \begin{bmatrix} 12329 & 0 & 0 \\ 0 & 12290 & 0 \\ 1314 & 748 & 1 \end{bmatrix} \]
Other related parameters, including the camera focal length, principal point, image size, radial distortion, and tangential distortion, are shown in Table 3 below:
After getting the calibration results of the system structure parameters, we fit the laser plane projected by the laser. Finally, the laser plane equation is obtained as follows:
\[ Z = 1498.888393 + 0.019092x + 0.1281023y \]
The parameters corresponding to the light plane equation $Ax + By + Cz + D = 0$ in Section 2.3 are: A = 0.019092, B = 0.128103, C = 1, and D = 1498.888393.
We selected a conch as the scanning object, as shown in Figure 9a. The 3D point cloud information of a single frame image can be obtained by converting the two-dimensional image data into three-dimensional point cloud coordinates through the calibrated system parameters. According to the motion of the turntable, the three-dimensional point cloud data of the individual frames can be spliced together, and the laser scanning result is finally obtained. The resulting three-dimensional point cloud of the conch is shown in Figure 9b.
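A simplified sketch of this splicing step is given below: each single-frame point cloud is rotated about the turntable axis by the recorded angle and merged. The rotation axis direction and the assumption that it passes through the camera origin are illustrative only; in practice the axis must be calibrated.

```python
import numpy as np

def splice_frames(frames, angles_deg, axis=np.array([0.0, 1.0, 0.0])):
    """Merge per-frame 3D profiles into one cloud using the turntable angle.

    frames     -- list of (N_i, 3) arrays of points from single stripe frames
    angles_deg -- turntable angle (degrees) at which each frame was captured
    axis       -- assumed rotation axis of the turntable in the camera frame
    Returns an (M, 3) array containing the spliced point cloud.
    """
    axis = axis / np.linalg.norm(axis)
    cloud = []
    for pts, angle in zip(frames, angles_deg):
        a = np.radians(angle)
        # Rodrigues rotation matrix about the turntable axis
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        R = np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)
        cloud.append(pts @ R.T)
    return np.vstack(cloud)
```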

4.3. Error Analysis

In order to measure the accuracy of the system, a standard ball with a radius of 20.01 mm is scanned and reconstructed in the range of 500 mm to 1200 mm. The two-dimensional coordinates of the scanned object surface are obtained by using the laser stripe center extraction algorithm, and the three-dimensional point cloud coordinates of the standard sphere are then obtained by using the coordinate transformation formula. The standard sphere radius can then be estimated from the reconstructed point cloud, and the stripe extraction result is shown in Figure 10.
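One common way to estimate the sphere radius from the reconstructed points is a linear least-squares sphere fit, sketched below; the paper does not specify its exact fitting procedure, so this is only an assumed illustration.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit used to recover the ball radius.

    points -- (N, 3) array of reconstructed surface points
    Returns (centre, radius).
    """
    pts = np.asarray(points, dtype=float)
    # Sphere |p - c|^2 = r^2 rewritten as |p|^2 = 2*c.p + (r^2 - |c|^2),
    # which is linear in the unknowns (c, k) with k = r^2 - |c|^2
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, k = sol[:3], sol[3]
    radius = np.sqrt(k + centre @ centre)
    return centre, radius
```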
The reconstruction results in air and underwater are shown in Figure 11. The average measurement error of the standard sphere in air is less than 0.16 mm, and the average error after refraction correction in water is less than 0.6 mm. In addition, we can also observe that the error gradually increases as the measurement distance increases.
In order to verify the effectiveness of the underwater refraction correction algorithm, we carried out underwater measurement experiments with and without it. A standard ball with a radius of 20.01 mm was selected as the measured object and measured at 12 different positions in the field of view, and the measured radius of the standard ball was calculated without and with the refraction correction algorithm, respectively. $R$ is the underwater measurement radius without the refraction correction algorithm, and $R_c$ is the measurement radius with the refraction correction. The detailed measurement results are shown in Table 4.
It can be seen from Table 4 that, without refraction correction, the measurement error of the standard ball is within 7.1 mm, with a maximum of 7.099 mm and an average measured radius of 25.780 mm. After adding refraction correction, the maximum error of the reconstructed standard sphere radius is 0.596 mm, and the error stays within 0.6 mm. The reconstruction accuracy of the system is thus significantly improved by the refraction correction algorithm, which shows that the algorithm proposed in this paper is accurate and effective.

5. Conclusions

In this paper, an underwater laser 3D scanning system based on rotary scanning is proposed. The system is composed of a line laser, an underwater camera, and a turntable, and realizes rotary scanning in the range of 50° (vertical) × 360° (horizontal). In order to achieve high-precision and high-resolution reconstruction results, this paper proposes a refraction correction algorithm for the watertight housing, carries out three-dimensional reconstruction experiments on underwater targets, and obtains their three-dimensional point clouds. Finally, in order to verify the feasibility of the system and the effectiveness of the algorithm, we used a standard ball with a radius of 20.01 mm for error analysis; the results show that the underwater 3D reconstruction error of the system is less than 0.6 mm. This further proves that a low-cost line laser scanning system with a simple structure can replace expensive professional optical depth sensors, and it provides a new reference for underwater 3D laser reconstruction technology.

Author Contributions

Conceptualization, Q.X. and Q.S.; methodology, Q.X. and Q.S.; software, H.B.; validation, Q.S.; writing—original draft preparation, Q.S.; writing—review and editing, F.W., B.Y. and Q.L.; project administration, Q.X.; funding acquisition, Q.X. All authors modified and approved the final submitted materials. All authors have read and agreed to the published version of the manuscript.

Funding

National Defense Scientific Research Joint Cultivation Project (202065004); Jilin Scientific and Technological Development Program (20190302083GX); Key deployment project of the Marine Science Research Center of the Chinese Academy of Sciences (COMS2019J04); National Key Research and Development Program of China (2019YFC1408300, 2019YFC1408301); National Natural Science Foundation of China-Shandong Joint Fund (U2006209); Natural Science Foundation of China (52001295).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to thank the technical editor and anonymous reviewers for their constructive comments and suggestions on this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Coiras, E.; Petillot, Y.; Lane, D.M. Multiresolution 3-D reconstruction from side-scan sonar images. IEEE Trans. Image Process. 2007, 16, 382–390. [Google Scholar] [CrossRef] [Green Version]
  2. He, X.; Tong, N.; Hu, X. High-resolution imaging and 3-D reconstruction of precession targets by exploiting sparse apertures. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1212–1220. [Google Scholar] [CrossRef]
  3. Lirman, D.; Gracias, N.; Gintert, B.; Gleason, A.C.; Deangelo, G.; Dick, M.; Martinez, E.; Reid, R.P. Damage and recovery assessment of vessel grounding injuries on coral reef habitats by use of georeferenced landscape video mosaics. Limnol. Oceanogr. Methods 2010, 8, 88–97. [Google Scholar] [CrossRef]
  4. Schjølberg, I.; Gjersvik, T.B.; Transeth, A.A.; Utne, I.B. Next generation subsea inspection, maintenance and repair operations. IFAC-PapersOnLine 2016, 49, 434–439. [Google Scholar]
  5. Gomes, L.; Bellon, O.R.P.; Silva, L. 3D reconstruction methods for digital preservation of cultural heritage: A survey. Pattern Recognit. Lett. 2014, 50, 3–14. [Google Scholar] [CrossRef]
  6. Tetlow, S.; Allwood, R.L. The use of a laser stripe illuminator for enhanced underwater viewing. Proc. SPIE 1994, 2258, 547–555. [Google Scholar]
  7. Meline, A.; Triboulet, J.; Jouvencel, B. Comparative study of two 3D reconstruction methods for underwater archaeology. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; pp. 740–745. [Google Scholar]
  8. Eric, M.; Kovacic, R.; Berginc, G.; Pugelj, M.; Stopinsek, Z.; Solina, F. The impact of the latest 3D technologies on the documentation of underwater heritage sites. In Proceedings of the IEEE Digital Heritage International Congress, Marseille, France, 28 October–1 November 2013; pp. 281–288. [Google Scholar]
  9. Bruno, F.; Gallo, A.; Muzzupappa, M.; Daviddde Petriaggi, B.; Caputo, P. 3D documentation and monitoring of the experimental cleaning operations in the underwater archaeological site of Baia (Italy). In Proceedings of the IEEE Digital Heritage International Congress, Marseille, France, 28 October–1 November 2013; pp. 105–112. [Google Scholar]
  10. Skarlatos, D. Project iMARECULTURE: advanced VR, iMmersive serious games and augmented REality as tools to raise awareness and access to European underwater CULTURal heritagE. In Proceedings of the Euro-Mediterranean Conference, Nicosia, Cyprus, 31 October–5 November 2016. [Google Scholar]
  11. Johnson-Roberson, M.; Bryson, M.; Friedman, A.; Pizarro, O.; Troni, G.; Ozog, P.; Henderson, J.C. High-resolution underwater robotic vision-based mapping and three-dimensional reconstruction for archaeology. J. Field Robot. 2017, 34, 625–643. [Google Scholar] [CrossRef] [Green Version]
  12. Ulrik, J.; Skjetne, R. Real-time 3D Reconstruction of Underwater Sea-ice Topography by Observations from a Mobile Robot in the Arctic. Ifac Proc. Vol. 2013, 46, 310–315. [Google Scholar]
  13. Norström, C. Underwater 3-D imaging with laser triangulation. Available online: https://www.diva-portal.org/smash/get/diva2:21659/FULLTEXT01.pdf (accessed on 12 February 2021).
  14. Usamentiaga, R.; Molleda, J.; Garcia, D.F. Fast and robust laser stripe extraction for 3D reconstruction in industrial environments. Mach. Vis. Appl. 2012, 23, 179–196. [Google Scholar] [CrossRef]
  15. Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A. Experimentation of structured light and stereo vision for underwater 3D reconstruction. ISPRS J. Photogramm Remote Sens. 2011, 66, 508–518. [Google Scholar]
  16. Bianco, G.; Gallo, A.; Bruno, F.; Muzzupappa, M. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects. Sensors 2013, 13, 11007–11031. [Google Scholar] [CrossRef] [PubMed]
  17. Salvi, J.; Fernandez, S.; Pribanic, T.; Llado, X. A state of the art in structured light patterns for surface profilometry. Pattern Recognit. 2010, 43, 2666–2680. [Google Scholar] [CrossRef]
  18. Bradski, G.; Kaehler, A. Camera Model and Calibration. In Learning OpenCV, 1st ed.; Kaiqi, W., Ed.; Tsinghua University Press: Beijing, China, 2009; pp. 427–432. [Google Scholar]
  19. Miguel, C.; Palomer, A.; Forest, J.; Ridao, P. State of the art of underwater active optical 3D Scanners. Sensors 2019, 19, 51–61. [Google Scholar]
  20. Zhang, G.J.; Wei, Z.Z. A novel calibration approach to structured light 3D vision inspection. Opt. Laser Technol. 2002, 34, 373–380. [Google Scholar] [CrossRef]
  21. Ben-Hamadou, A. Flexible calibration of structured-light systems projecting point patterns. Comput. Vis. Image Underst. 2013, 117, 1468–1481. [Google Scholar] [CrossRef]
  22. Yang, R.Q.; Sheng, C.; Chen, Y.Z. Flexible and accurate implementation of a binocular structured light system. Opt. Lasers Eng. 2008, 46, 373–379. [Google Scholar] [CrossRef]
  23. Ya, Q.L.I. Underwater dense stereo matching based on depth constraint. Acta Photonica Sinica 2017, 46, 715001. [Google Scholar]
  24. Zhuang, S.F. A standard expression of underwater binocular vision for stereo matching. Meas. Sci. Technol. 2020, 31, 115012. [Google Scholar] [CrossRef]
  25. Bleier, M.; Lucht, J.V.D.; Nüchter, A. Scout3D-an underwater laser scanning system for mobile mapping. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2019, 42, 13–18. [Google Scholar] [CrossRef] [Green Version]
  26. Jaffe, J. Development of a Laser Line Scan Lidar Imaging System for AUV Use. Available online: https://agris.fao.org/agris-search/search.do?recordID=AV2012084496 (accessed on 12 February 2021).
  27. Yang, Y. Research on Underwater Multi-Channel True Color 3D Reconstruction and Color Reduction Method; Ocean University of China: Qingdao, China, 2014. [Google Scholar]
  28. Xu, W.B. Underwater Geomorphology Detection Based on Structured Light; North China University of Water Resources and Hydropower: Zhengzhou, China, 2018. [Google Scholar]
  29. Shortis, M.R. Calibration techniques for accurate measurements by underwater camera systems. Sensors 2015, 15, 30810–30827. [Google Scholar] [CrossRef] [Green Version]
  30. Usamentiaga, R.; Garcia, D.F. Multi-camera calibration for accurate geometric measurements in industrial environments. Measurement 2019, 134, 45–358. [Google Scholar] [CrossRef]
  31. Agrafiotis, P.; Drakonakis, G.I.; Skarlatos, D. Underwater Image Enhancement before Three-Dimensional (3D) Reconstruction and Orthoimage Production Steps: Is It Worth? In Latest Developments in Reality-Based 3D Surveying and Modelling; 2018. [Google Scholar]
  32. Zhang, Z.Y. Camera Calibration with One-Dimensional Objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 7, 892–899. [Google Scholar] [CrossRef]
  33. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 11, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  34. Penna, M. Camera calibration: a quick and easy way to detection the scale factor. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 1240–1245. [Google Scholar] [CrossRef]
  35. Rufli, M.; Scaramuzza, D.; Siegwart, R. Automatic detection of checkerboards on blurred and distorted images in Intelligent Robotsand Systems. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3121–3126. [Google Scholar]
  36. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157. [Google Scholar]
  37. Bay, H. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  38. Rublee, E. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International conference on computer vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  39. Makantasis, K. In the wild image retrieval and clustering for 3D cultural heritage landmarks reconstruction. Multimed. Tools Appl. 2016, 75, 3593–3629. [Google Scholar] [CrossRef]
  40. Tuzel, O.; Porikli, F.; Meer, P. Region covariance: A fast descriptor for detection and classification. In Proceedings of the European Conference on Computer Vision, Berlin/Heidelberg, Germany, 7–13 May 2006; pp. 589–600. [Google Scholar]
  41. Tola, E.; Lepetit, V.; Fua, P. A fast local descriptor for dense matching. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 22–24 October 2008; pp. 1–8. [Google Scholar]
  42. Calonder, M. BRIEF: Computing a local binary descriptor very fast. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1281–1298. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Calonder, M. Brief: Binary robust independent elementary features. In Proceedings of the European conference on computer vision, Berlin/Heidelberg, Germany, 5–11 November 2010; pp. 778–792. [Google Scholar]
  44. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey vision conference, Manchester, UK, 15–17 September 1988. [Google Scholar]
  45. Zhang, M.; Li, D.H. An on-site calibration technique for line structured light 3D scanner. In Proceedings of the 2009 Asia-Pacific Conference on Computational Intelligence and Industrial Applications (PACIIA), Wuhan, China, 28–29 November 2009; pp. 30–33. [Google Scholar]
  46. Dewar, R. Self-generated targets for spatial calibration of structured light optical sectioning sensors with respect to an external coordinate system. In Proceedings of the Robots and Vision’88 Conf. Proceedings, Detroit, MI, USA, 5–9 June 1988; pp. 5–13. [Google Scholar]
  47. James, K.W. Noncontact machine vision metrology within a CAD coordinate system. In Proceedings of the Autofact’88Conf.Proceedings, Chicago, IL, USA, 10–16 December 1988; pp. 9–17. [Google Scholar]
  48. Zhang, Y.B.; Lu, R.S. Calibration method of linear structured light vision measurement system. Meas. Syst. World 2003, 8, 10–13. [Google Scholar]
  49. Ying, J.G.; Li, Y.J.; Ye, S.H. Fast calibration method of line structured light sensor based on coplanar calibration reference. China Mech. Eng. 2006, 17, 183–186. [Google Scholar]
  50. Liu, Z.; Zhang, G.J.; Wei, Z.Z.; Jiang, J. A field calibration method for high precision line structured light vision sensor. Acta Optica Sinica 2009, 29, 3124–3128. [Google Scholar]
  51. Zhou, F.Q.; Cai, F.H. Calibration of structured light vision sensor based on one dimensional target. Chin. J. Mech. Eng. 2010, 46, 8–12. [Google Scholar] [CrossRef]
  52. Chen, T.F.; Ma, Z.; Wu, X. Light plane in structured light sensor based on active vision calibration line. Opt. Precis. Eng. 2012, 20, 257–263. [Google Scholar]
  53. Yau, T.; Gong, M.L.; Yang, Y.H. Underwater camera calibration using wavelength triangulation. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2499–2506. [Google Scholar]
  54. Agrawal, A.; Ramalingam, S.; Taguchi, Y. A theory of multi-layer flat refractive geometry. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, New York, NY, USA, 16–21 June 2012; pp. 3346–3353. [Google Scholar]
  55. Treibitz, T.; Schechner, Y.; Kunz, C.; Singh, H. Flat refractive geometry. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 51–65. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Xie, Z.X.; Yu, J.S.; Chi, S.K.; Li, J.P.; Li, M.H. Underwater calibration and measurement of nonparallel binocular vision system. Acta Optica Sinica 2019, 39, 195–204. [Google Scholar]
Figure 1. Geometric relations in coordinate system transformation.
Figure 2. Some corners extracted by Harris algorithm.
Figure 3. Flow chart of laser plane equation calibration.
Figure 4. Refraction caused by watertight devices.
Figure 5. Mechanical model of the rotating scanning system. (a) Optical system; (b) System mechanical structure with variable included angle; (c) Rotating stage; (d) The overall model of the system.
Figure 6. Camera calibration process in the air and some corner points extraction results. (a) Twelve Calibration boards with different attitude; (b) Corner extraction results of image.
Figure 7. Three-dimensional (3D) reconstruction point cloud in air. (a) Measured object in the air; (b) Point cloud of object.
Figure 8. Underwater chessboard calibration. (a) Checkerboard calibration; (b) Corner extraction of underwater chessboard.
Figure 9. 3D reconstruction results of underwater conch. (a) Conch placed in the water; (b) 3D point cloud of conch.
Figure 10. Measurement value and error of standard ball at different distances. (a) Reconstruction of standard sphere in air; (b) Extraction of laser stripe center in air; (c) Reconstruction of standard sphere in water; (d) Stripe center extraction of underwater standard sphere.
Figure 11. Relationship between the measured value of the standard sphere radius and the measurement distance. (a) Reconstruction error results in water and air; (b) Reconstruction of standard ball diameter at different distances.
Table 1. Pixel coordinates of some corners of calibration board.

Index   u (pix.)         v (pix.)
1       1148.465576      1012.713867
2       800.38644746     1893.737305
3       497.802887       1882.058228
4       193.6052246      1868.337646
5       497.802887       1882.058228
6       264.045227       1363.3004456
Table 2. Mechanical model of rotary scanning system.

Performance
  Measuring Method          Triangulation
  Scan Range                Minimum: 0.14 m | Maximum: 2.5 m
  Field of View             360° Pan, 90° Tilt
  Operating Temperature     −10 °C to 40 °C
  Pan and Roll Accuracy     0.1°
  Pitch and Roll Accuracy   ±0.25°

Laser
  Wavelength                520 nm
  Electric Current          ≤300 mA
  Power Supply              50 mW

Camera
  Model                     WP-UT530M
  Pixel Size                4.8 × 4.8 μm
  Resolution                2592 × 2048
  Frame Rate                60 FPS
  Exposure Time             16 μs–1 s
  Spectral Range            380–650 nm
Table 3. Some parameters of the system.

Parameter               Result (Pix.)
Focal Length            [1.2329 × 10^4, 1.229 × 10^4]
Principal Point         [1.3135 × 10^3, 7.4849 × 10^2]
Image Size              [2048, 2592]
Radial Distortion       [−0.1380, 15.7017]
Tangential Distortion   [0, 0]
Table 4. Measurement result and error of standard ball.

Distance (mm)   R (mm)    Error (mm)   R_c (mm)   Error_c (mm)
500             24.541    4.531        20.117     0.107
550             24.995    4.985        20.091     0.081
600             25.336    5.326        20.195     0.185
650             25.365    5.355        20.233     0.223
700             24.379    4.369        20.295     0.285
750             25.52     5.51         19.742     0.268
800             25.993    5.983        20.347     0.337
850             26.238    6.228        19.646     0.364
900             26.862    6.852        20.468     0.458
950             25.999    5.989        20.503     0.493
1000            27.109    7.099        20.583     0.573
1200            27.022    7.012        19.414     0.596
Average         25.780    5.770        20.136     0.331
Max             27.109    7.099        20.583     0.596
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
