Article

Optical Sensors and Methods for Underwater 3D Reconstruction

by
Miquel Massot-Campos
* and
Gabriel Oliver-Codina
Department of Mathematics and Computer Science, University of the Balearic Islands, Cra de Valldemossa km 7.5, Palma de Mallorca 07122, Spain
*
Author to whom correspondence should be addressed.
Sensors 2015, 15(12), 31525-31557; https://doi.org/10.3390/s151229864
Submission received: 7 September 2015 / Revised: 2 December 2015 / Accepted: 4 December 2015 / Published: 15 December 2015

Abstract:
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data are listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification is proposed for the existing solutions. New developments, commercial solutions and previous reviews on this topic have also been gathered and considered.


1. Introduction

The exploration of the ocean is far from complete, and detailed maps of most undersea regions are not yet available, despite being necessary. These maps are built by collecting data from different sensors carried by one or more vehicles. The gathered three-dimensional data enable further research and applications in many areas of scientific, cultural or industrial interest, such as marine biology, geology, archeology or the off-shore industry, to name but a few.
In recent years, 3D imaging sensors have increased in popularity in fields such as human-machine interaction, mapping and movies. These sensors provide raw 3D data that have to be post-processed to obtain metric 3D information. This workflow is known as 3D reconstruction, and nowadays, it is seen as a tool that can be used for a variety of applications, ranging from medical diagnosis to photogrammetry, heritage reports or machinery design and production [1,2]. Thanks to recent advances in science and technology, large marine areas, including deep sea regions, are becoming accessible to manned and unmanned vehicles; thus, new data are available for underwater 3D reconstruction.
Thanks to readily-available off-the-shelf underwater camera systems, as well as custom-made systems used in deep-sea robotics, an increasing amount of imagery and video is captured underwater. Using the recordings of an underwater excavation site, scientists are now able to obtain accurate 2D or 3D representations and interact with them using standard software. This software allows the scientist to add measurements, annotations or drawings to the model, creating graphic documents. These graphic documents help to understand the site by providing a comprehensive and thematic overview and an interface with data entered by experts (pilots, biologists, archaeologists, etc.), allowing reasonable access to a set of heterogeneous data [3].
Most 3D sensors are designed to operate in air, but the focus of this paper is on the 3D reconstruction of underwater scenes and objects for archeology, seafloor mapping and structural inspection. This data gathering can be performed from a deployed sensor (e.g., on an underwater tripod or a fixed asset), operated by a diver or carried by a towed body, a remotely-operated vehicle (ROV) or an autonomous underwater vehicle (AUV).
Other authors have already reviewed some of the topics mentioned above. For example, Jaffe et al. [4] surveyed in 2001 the different prospects in underwater imaging, foreseeing the introduction of blue-green lasers and multidimensional photomultiplier tube (PMT) arrays. An application of these prospects is shown by Foley and Mindell [5], who covered in 2002 the technologies for precise archaeological surveys in deep water, such as image mosaicking and acoustic three-dimensional bathymetry.
In [6], Kocak et al. outlined the advances in the field of underwater imaging from 2005 to 2008, building on a previous survey [7]. Caimi et al. [8] also surveyed underwater imaging in 2008, summarizing extended-range imaging techniques as well as spatial coherency and multi-dimensional image acquisition. Years later, in 2011, Bonin et al. [9] surveyed different techniques and methods to build underwater imaging and lighting systems. Finally, in 2013, Bianco et al. [10] compared structured light and passive stereo, focusing on close-range 3D reconstruction of objects for the documentation of submerged heritage sites.
Structure from motion and stereoscopy are also studied by Jordt [11], who reviewed in her PhD thesis (2014) different surveys on 3D reconstruction, image correction, calibration and mosaicking.
In this survey, we present a review of optical sensors and associated methods in underwater 3D reconstruction. LiDAR, stereo vision (SV), structure from motion (SfM), structured light (SL), laser stripe (LS) and laser line scanning (LLS) are described in detail, and features, such as range, resolution, accuracy and ease of assembly, are given for all of them, when available. Although sonar sensors are acoustic, a concise summary of them is also given due to their widespread use underwater, and representative figures are presented for comparison with optical systems.
This article is structured as follows: Section 2 presents the underwater environment and its related issues. Section 3 reviews the measuring methods to gather 3D data. Section 4 evaluates the literature and the different types of sensors and technologies. Section 5 shows some commercial solutions, and finally, in Section 6, conclusions are drawn.

2. The Underwater Environment

Underwater imaging [12] has particular characteristics that distinguish it from conventional systems, which can be summarized as follows:
(1) Limited on-site accessibility, which makes the deployment and operation of the system difficult [13].
(2) Poor data acquisition control, frequently implemented by divers or vehicle operators untrained for this specific task [14].
(3) Insufficient illumination and wavelength-dependent light absorption, producing dim and monotone images [15]. Light absorption also causes darkening at image borders, an effect somewhat similar to vignetting.
(4) Water-glass-air interfaces between the sensor and the scene, which modify the intrinsic parameters of the camera and limit the performance of image processing algorithms [16,17,18], unless a specific calibration is carried out [19,20].
(5) Significant scattering and light diffusion, which limit the operational distance of the systems.
These distinguishing traits will affect the performance of underwater imaging systems. Particular attention is paid to the typical range, resolution and/or accuracy parameters for the systems discussed in the next sections.
Additionally, images taken in shallow waters (<10 m) can be seriously affected by flickering, i.e., strong light fluctuations due to sunlight refracting at the waving air-water interface. Flickering generates quick changes in the appearance of the scene, hampering basic image processing functions, like feature extraction and matching, which are frequently used by mapping software [21]. Although some solutions to this problem can be found in the literature [22], flickering is still a crucial issue in many submarine scenarios.

2.1. Underwater Camera Calibration

Camera calibration was first studied in photogrammetry [23], but it has also been widely studied in computer vision [24,25,26,27]. The use of a calibration pattern or set of markers is one of the most reliable ways to estimate a camera’s intrinsic parameters [28]. In photogrammetry, it is common to set up a camera in a large field looking at distant calibration patterns or targets whose exact location, size and shape are known.
Camera calibration is a major problem connected with underwater imaging. As mentioned earlier, refraction caused by the air-glass-water interface results in highly distorted images, and it must be taken into account during the camera calibration process [29]. Refraction occurs due to the difference in density between two media. As seen in Figure 1, the incident light beam passes through two medium changes, which modify the light path.
Figure 1. Refraction caused by the air-glass (acrylic)-water interface. The extension of the refracted rays (dashed lines) into air leads to several intersection points, depending on their incidence angles and representing multiple apparent viewpoints. Because of refraction, there is no collinearity between the object point in water, the center of projection of the camera and the image point [20].
According to Figure 1, the incident and emergent angles satisfy Snell's law:

$$r_{AG} = \frac{\sin \theta_3}{\sin \theta_2} = \frac{n_G}{n_A} = 1.49, \qquad \theta_3 > \theta_2 \quad (1)$$

$$r_{GW} = \frac{\sin \theta_2}{\sin \theta_1} = \frac{n_W}{n_G} = 0.89, \qquad \theta_2 < \theta_1 \quad (2)$$

where $r_{AG}$ is the relative refractive index of the air-glass interface and $r_{GW}$ is that of the glass-water interface (for water, $n_W = 1.33$ at 20 °C [30]; for acrylic glass, $n_G = 1.49$ [31]).

Substituting Equation (2) into Equation (1):

$$\frac{\sin \theta_3}{\sin \theta_1} = \frac{n_W}{n_A} = 1.33, \qquad \theta_3 > \theta_1 \quad (3)$$
Therefore, the emergent angle $\theta_3$ is larger than the incident angle $\theta_1$, making the imaged scene look wider than it actually is [14]. For planar interfaces, the deformation increases with the distance from the center pixel of the camera, an effect known as pin-cushion distortion.
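This chain of refractions can be checked numerically. The following minimal sketch applies Snell's law across the water-glass-air interfaces; the 20° incident ray is an arbitrary example, and the indices are those quoted above:

```python
import math

N_WATER, N_GLASS, N_AIR = 1.33, 1.49, 1.00  # refractive indices quoted in the text

def refract(theta_in, n_in, n_out):
    """Apply Snell's law n_in*sin(theta_in) = n_out*sin(theta_out); angles in radians."""
    return math.asin(n_in * math.sin(theta_in) / n_out)

# A ray leaving the scene at 20 degrees in water...
theta1 = math.radians(20.0)
theta2 = refract(theta1, N_WATER, N_GLASS)  # water -> glass
theta3 = refract(theta2, N_GLASS, N_AIR)    # glass -> air

# The emergent angle exceeds the incident one, so the scene appears wider.
widening = math.degrees(theta3) - math.degrees(theta1)
```

The intermediate glass angle cancels out, so the emergent angle depends only on the water and air indices, as in Equation (3).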
Changes in pressure, temperature and salinity alter the refractive index of water, and even camera handling can modify the calibration parameters [32]. As a result, there is a mismatch between object-plane and image-plane coordinates. This problem has been addressed in two different ways: (1) developing new calibration algorithms with refraction correction capability [29]; and (2) modifying existing algorithms to reduce the error due to refraction [33]. Other approaches, such as the one reported by Kang et al. [34], solve the structure and motion problem taking refraction into consideration.
According to Kwon [29], the refraction error caused by two different media can be reduced by considering radial distortion. Consequently, standard photogrammetric calibration software can be used to calibrate digital cameras together with their housings.

3. Measuring Methods

Sensors for three-dimensional measurement can be classified into three major classes depending on the measuring method: triangulation, time of flight and modulation. A sensor can belong to more than one class, which means that it uses different methods or a combination of them to obtain three-dimensional data, as depicted in Figure 2.
Figure 2. 3D reconstruction sensor classification.
There is also another traditional classification method for sensing devices, active or passive, depending on how they interact with the medium. All of the methods in Figure 2 are active, except for passive imaging.
Active sensors are those that illuminate, project or otherwise cast a signal into the environment in order to enable, enhance or perform the measurement. An example of an active system is structured light, where a pattern is projected onto the object to be reconstructed.
However, according to Bianco et al. [10], systems whose artificial light sources are used only to illuminate the scene, and not for the triangulation of 3D points, are considered passive.
Passive methods sense the environment with no alteration or change of the scene. An example of that is structure from motion, where image features are matched between different camera shots for a post-processed 3D triangulation. Camera-based sensors are the only ones that can be passive for 3D reconstruction, as the others are based on sound or on light projection.

3.1. Time of Flight

Time discrimination methods are based on measuring the travel time of the signal. Knowing the speed of the signal in the medium where it travels, the distance can be derived. These methods reach long operating distances, especially sonar, but in that case, extra care should be taken to prevent the measurements from being affected by alterations in the sound speed caused by water temperature, salinity and pressure changes.
At short distances, a small inaccuracy in the time measure can cause a great relative error in the result. Furthermore, some sensors require a minimum distance at which they can measure depending on their geometry.
Sonar, LiDAR and pulse gated laser line scanning (PG-LLS) are some examples of sensors using this principle to acquire 3D data.
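As an illustration of the time-of-flight principle, the sketch below converts a round-trip travel time into range for both sound and light; the propagation speeds are assumed nominal values, not calibrated ones:

```python
# Nominal propagation speeds (assumed values): sound in seawater and light in water.
SOUND_SPEED_WATER = 1500.0        # m/s, varies with temperature, salinity and pressure
LIGHT_SPEED_WATER = 3.0e8 / 1.33  # m/s, c divided by the refractive index of water

def tof_range(round_trip_s, speed):
    """Distance from a round-trip time-of-flight measurement: d = v * t / 2."""
    return speed * round_trip_s / 2.0

# A 40 ms sonar echo corresponds to a 30 m range...
sonar_range = tof_range(40e-3, SOUND_SPEED_WATER)

# ...while light covers the same 30 m round trip in about 266 ns, which
# illustrates the timing accuracy an optical ToF sensor needs at short range.
light_round_trip = 2.0 * 30.0 / LIGHT_SPEED_WATER
```

The five-orders-of-magnitude gap between the two travel times is why small timing errors translate into large relative range errors for optical sensors at short distances.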

3.2. Triangulation

Triangulation methods are based on measuring the distance from two or more devices (either signal sources or receivers) to a common feature or target with some known parameters.
For example, two cameras (e.g., a stereo rig) can obtain depth by searching the image gathered by one camera for features found in the other. Once these features have been matched and filtered, the remaining ones can be projected into the world as light rays coming from the two cameras. The triangle formed between a feature in space and the two cameras is the basis of triangulation.
The limitation of triangulation sensors is the need for an overlap between the field of view of the emitter and that of the receiver (or of the two cameras in the stereo rig case) [17]. Moreover, nearby features have a larger parallax, i.e., image disparity, than more distant ones; as a consequence, triangulation-based devices have a better z resolution at close range than at far range. Likewise, the larger the separation between the cameras (the baseline), the better the z resolution.
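The depth and resolution behavior described above can be sketched for an idealized rectified stereo rig, where depth follows $Z = fB/d$ from the disparity $d$; the focal length and baseline below are arbitrary example values:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from image disparity for an idealized rectified stereo rig: Z = f*B/d."""
    return focal_px * baseline_m / disparity_px

def depth_step(z, focal_px, baseline_m):
    """Depth change caused by a one-pixel disparity error at depth z,
    i.e. the z resolution of the rig at that distance."""
    d = focal_px * baseline_m / z
    return stereo_depth(d - 1.0, focal_px, baseline_m) - z

f, b = 800.0, 0.12  # assumed focal length (pixels) and baseline (m)

# z resolution worsens with distance and improves with a wider baseline:
near = depth_step(1.0, f, b)
far = depth_step(3.0, f, b)
wide = depth_step(3.0, f, 2 * b)
```

With these numbers, a one-pixel error costs about 1 cm at 1 m but almost 10 cm at 3 m, and doubling the baseline roughly halves the error at 3 m.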
Different techniques exist that compute 3D information by triangulation: structured light, laser stripe and photometric stereo (PhS) from active imaging, structure from motion and stereo vision from passive imaging and continuous wave laser line scanning (CW-LLS) from laser line scanning.

3.3. Modulation

While the time domain approach uses amplitude and time to discriminate multiply scattered, diffused photons, the frequency domain approach uses the differences in amplitude and phase of a modulated signal to perform this task. The diffused photons that undergo many scattering events produce temporal spreading of the transmitted pulse: only low frequency components are efficiently transmitted, whilst high frequency components are lost. This method has been reported in the literature both from airborne platforms and from underwater vehicles. These systems usually modulate the amplitude at frequencies on the order of GHz, thus requiring very sensitive sensors and accurate time scales. The receivers are usually photomultiplier tubes (PMT) or, more recently, photon counters made of avalanche photodiodes (APD). These sensors are generally triggered during a time window, and the incoming light is integrated. After the demodulation step, 3D information can be obtained from the phase difference.
It is known that coherent modulation/demodulation techniques at optical frequencies break down in underwater environments due to the high dispersion of the seawater path [6] and to the absorption and scattering coefficients, which depend on the optical wavelength. Because these coefficients have a minimum in the blue-green region of the spectrum, amplitude modulation of a laser carrier at these wavelengths is the most used modulation technique in underwater reconstruction.
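A minimal sketch of amplitude-modulation ranging is given below, recovering distance from the phase shift of the returned envelope; the 100 MHz modulation frequency and the nominal speed of light in water are assumptions for illustration:

```python
import math

LIGHT_SPEED_WATER = 3.0e8 / 1.33  # m/s (assumed: c over the refractive index of water)

def range_from_phase(phase_rad, mod_freq_hz, speed=LIGHT_SPEED_WATER):
    """Range from the phase shift of an amplitude-modulated carrier:
    d = speed * phase / (4*pi*f). The factor 4*pi (not 2*pi) accounts
    for the round trip of the light."""
    return speed * phase_rad / (4.0 * math.pi * mod_freq_hz)

f_mod = 100e6  # 100 MHz modulation, an example in the sub-GHz regime

# A pi/2 phase shift at 100 MHz corresponds to roughly 28 cm of range:
d = range_from_phase(math.pi / 2.0, f_mod)

# Ranges repeat beyond half a modulation wavelength (the ambiguity interval):
ambiguity = LIGHT_SPEED_WATER / (2.0 * f_mod)
```

The ambiguity interval shows the usual trade-off: higher modulation frequencies give finer phase resolution per meter but a shorter unambiguous range.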

4. Sensors and Technologies

This section presents all of the sensors studied in this paper. At the end of each subsection, a table is presented indicating the accuracy and resolution values of the references listed, when available. Furthermore, if a value has been obtained from graphic plots, an approximate (≈) symbol has been used.

4.1. Sonar

The term sonar is an acronym for sound, navigation and ranging. There are two major kinds of sonars, active and passive.
Passive sonar systems usually have large sonic signature databases. A computer system uses these databases to identify classes of ships, actions (i.e., the speed of a ship or the type of weapon released) and even particular ships [35,36,37]. These sensors are evidently not used for 3D reconstructions; thus, they are discarded in this study.
Active sonars create a pulse of sound, often called a ping, and then listen for reflections of the pulse. The pulse may be at a constant frequency or a chirp of changing frequency. In the case of a chirp, the receiver correlates the received reflections with the known transmitted signal. In general, long-distance active sonars use lower frequencies (hundreds of kHz), whilst short-distance high-resolution sonars use high frequencies (a few MHz).
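Chirp correlation can be illustrated with a short simulation: a known chirp is correlated against a noisy, delayed echo, and the correlation peak recovers the delay. All signal parameters below are arbitrary example values:

```python
import numpy as np

fs = 100_000.0                  # sample rate (Hz), assumed
t = np.arange(0, 0.01, 1 / fs)  # 10 ms pulse

# Linear chirp sweeping 5 kHz -> 15 kHz (assumed band); the instantaneous
# frequency of sin(2*pi*(f0*t + 0.5*k*t^2)) is f0 + k*t.
pulse = np.sin(2 * np.pi * (5_000 * t + 0.5 * 1e6 * t**2))

# Simulated echo: the pulse attenuated and delayed by 200 samples inside
# a longer, noisy trace (seeded RNG for reproducibility).
delay = 200
rx = np.zeros(4096)
rx[delay:delay + pulse.size] += 0.3 * pulse
rx += 0.05 * np.random.default_rng(0).standard_normal(rx.size)

# Matched filter: correlate the received trace against the known chirp;
# the correlation peak recovers the echo delay despite the noise.
corr = np.correlate(rx, pulse, mode="valid")
estimated_delay = int(np.argmax(corr))
```

The sharp autocorrelation of a chirp is what lets active sonars obtain fine range resolution from a long, low-peak-power pulse.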
In the active sonar category, we can find three major types of sonars: multibeam sonar (MBS), single beam sonar (SBS) and side scan sonar (SSS). If the across-track angle is wide, they are usually called imaging sonars (IS); otherwise, they are commonly named profiling sonars, as they are mainly used to gather bathymetric data. Moreover, these sonars can be mechanically actuated to perform a scan, towed, or mounted on a vessel or underwater vehicle.
Sound propagates faster in water than in air, although its speed also depends on water temperature and salinity. One of the main advantages of sonar soundings is their long range, which makes them a feasible sensor to gather bathymetry data from a surface vessel, even at thousands of meters of depth. At such distances, a resolution of tens of meters per sounding is a good result, whilst if an AUV is sent to dive at an altitude of 40 m to perform a survey, a resolution of a couple of meters or less can be achieved.
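The dependence of sound speed on temperature, salinity and depth can be approximated with a common simplified empirical formula; the sketch below uses Medwin's equation, an approximation valid only over a limited range of conditions:

```python
def sound_speed_medwin(temp_c, salinity_ppt, depth_m):
    """Approximate sound speed in seawater (m/s) using Medwin's empirical
    formula; roughly valid for 0-35 degC, 0-45 ppt and depths up to ~1000 m."""
    t, s, z = temp_c, salinity_ppt, depth_m
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (s - 35.0) + 0.016 * z)

# Warm surface water carries sound faster than cold water at the same depth,
# which is why uncorrected sonar ranges drift with the water conditions:
c_warm = sound_speed_medwin(20.0, 35.0, 0.0)
c_cold = sound_speed_medwin(5.0, 35.0, 0.0)
```

A 15 °C temperature difference alone shifts the sound speed by several tens of m/s, i.e., a few percent of range error if left uncorrected.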
One of the clearest examples of bathymetric data gathering is performed using MBS, as in [38]. This sensor can also be correlated to a color camera to obtain not only 3D, but also color information, as in [39], where its authors scan a pool using this method. However, in this case, its range is lowered to the visual available range.
MBS can also be mounted on pan and tilt systems to perform a complete 3D scan. They are usually deployed using a tripod or mounted on top of an ROV, requiring the ROV to remain static while the scan is done, like in [40].
A scanning SBS can carry out a 3D swath by rotating its head [41], as if it were a one-dimensional range sensor mounted on a pan and tilt head. The data retrieval is not as fast as with an MBS.
Profiling can also be done with SSS, which is normally towed or mounted on an AUV to perform a gridded survey. The SSS is mainly used on-board a constant-speed vehicle describing straight transects. Even though SSS can be considered a 2D imaging sonar, 3D information can be inferred from it, as shown by Coiras et al. in [42].
Imaging sonars (IS) differ from MBS or SBS by a broadened beam angle (e.g., they capture a sonic image of the sea bottom instead of a thin profile). For instance, in [43], Brahim et al. use an imaging sonar with a field of view of 29° (azimuth) × 10.8° (elevation) to produce either 48 × 512 or 96 × 512 azimuth-by-range images, where each pixel contains the backscattered energy of all the points in the scene located at the same distance and azimuth from the camera.
Other exotic systems have been researched, combining IS with conventional cameras to enhance the 3D output and to better correlate the sonar correspondences. In [44], Negahdaripour uses a stereo system formed by a camera and an imaging sonar. Correspondences between the two images are described in terms of conic sections. In [45], a forward looking sonar and a camera are used, and feature correspondences between the IS and the camera image are provided manually to perform reconstructions. Furthermore, in [46], an SfM approach from a set of images taken from an imaging sonar is used to recover 3D data.
The object shadows in a sonic image can also be used to recover 3D data, as in [47], where Aykin et al. are able to reconstruct simple geometric forms on simple backgrounds. The main requirement is that the shadow be distinguishable and that it lie on a known flat surface.
Beamforming (BF) is a technique aimed at estimating signals coming from a fixed steering direction, while attenuating those coming from other directions. When a scene is insonified by a coherent pulse, the signals representing the echoes backscattered from possible objects contain attenuated and degraded replicas of the transmitted pulse. BF is a spatial filter that linearly combines temporal signals spatially sampled by a discrete antenna. This technique is used to build a range image from the backscattered echoes, associated point by point with another type of information representing the reliability (or confidence) of such an image. Modeling acoustic imaging systems with BF has also been reported by Murino in [48,49], where an IS of 128 × 128 pixels achieves a range resolution of ±3.5 cm. One pulse of this sonar system covers a footprint of 3.2 × 3.2 m².
In [50], Castellani et al. register multiple MBS range measurements using global registration (ICP) with an average error of 15 cm.
Kunz et al. [51] fuse acoustic and visual information from a single camera, so that the imagery can be texture-mapped onto the MBS bathymetry (binned at 5 cm from 3 m), obtaining three-dimensional and color information.
Table 1 shows a comparison of the 3D reconstruction techniques using sonar.
Table 1. Summary of sonar 3D reconstruction solutions.
| References | Sonar Type | Scope | Accuracy | Resolution |
|---|---|---|---|---|
| Pathak [38] | MBS | Rough map for path planning | ≈1 m | 2.5 cm |
| Rosenblum [52] | MBS | Small object reconstruction | - | ≈8 cm |
| Hurtos [39] | MBS + Camera | Projects images on 3D surfaces | 2.34 cm | - |
| Guo [41] | SBS | Small target 3D reconstruction | 2.62 cm | - |
| Coiras [42] | SSS | Seabed elevation with UW pipe | 19 cm | 5.8 cm |
| Brahim [43] | IS | Sparse scene geometry | 0.5 m | - |
| Aykin [47] | IS | Smooth surfaces 3D reconstruction | ≈15 cm | 1 cm |
| Negahdaripour [44,45,46] | IS + Camera | Alternative to stereo systems | ≈5 cm | - |

4.2. Light Detection and Ranging

Airborne scanning light detection and ranging (LiDAR) is widely used as a mapping tool for coastal and near-shore ocean surveys. Similar to LLS, but surveyed from an aircraft, a laser line is scanned across the landscape and the ocean. Depending on the laser wavelength, LiDAR is capable of recovering both the ocean surface and the sea bottom. In this particular case, a green 532-nm laser that penetrates ocean water to depths over 30 m [53] is used in combination with a red or infrared laser. Both lasers return an echo from the sea surface, but only one reaches the underwater domain.
LiDAR has been used for underwater target detection (UWTD), usually of mines, as well as for coastal bathymetry [54,55]. It is normally flown at heights of hundreds of meters (the survey in [53] was mostly carried out at 300 m), with a swath of 100 to 250 m and a typical resolution in the order of decimeters; in [53], a resolution of 0.7 m is achieved. Moreover, the LiDAR signal can be modulated, enhancing its range capabilities and rejecting underwater backscatter [56,57].
Although this paper focuses on underwater sensors, LiDAR has been briefly mentioned, as it is capable of reconstructing certain coastal regions from the air. In Table 2, two 3D reconstruction references using LiDAR are compared.
Table 2. Summary of LiDAR 3D reconstruction solutions.
| References | Class | Wavelength | LiDAR Model | Combination | Accuracy | Resolution |
|---|---|---|---|---|---|---|
| Reineman [53] | ToF | 905 nm | Riegl LMS-Q240i | Camera, GPS | 0.42 m | 0.5 m |
| Cadalli [54] | ToF | 532 nm | U.S. Navy prototype | PMT + 64 × 64 CCD | - | ≈10 m |
| Pellen [55] | UWTD ¹ | 532 nm | Nd:YAG laser | PMT | - | - |
| Mullen [56,57] | UWTD ¹ | 532 nm | Nd:YAG laser | PMT + Microwave | - | - |
1 Underwater target detection. No 3D reconstruction.

4.3. Laser Line Scanning

To increase the resolution of the systems described above, lasers combined with imaging devices can be used. Green lasers working at 532 nm are a common choice of light source because of their good trade-off between price, availability and low absorption and scattering coefficients in seawater. On the reception side, photomultiplier tubes (PMT) or photon counters can be used, although many approaches also use photodiodes or cameras.
To achieve a larger operational range while mitigating the effects of light scattering in the water, some LLS systems send out narrow laser pulses that are gathered by range-gated receivers.
There are three main categories of LLS: continuous wave LLS (CW-LLS), pulse gated LLS (PG-LLS) and modulated LLS. In Table 3, the reader can find a summary of the different LLS 3D reconstruction solutions. In addition to reconstruction, LLS are also used for long-range imaging (from 7 m). Some additional references are listed in Table 3, as well.
Table 3. Summary of laser line scanning 3D reconstruction solutions.
| References | Aim | Type | Wavelength | Receiver | Accuracy | Resolution |
|---|---|---|---|---|---|---|
| Moore [58] | 3D | CW-LLS | 532 nm | Linescan CCD | - | 1 mm |
| Moore [59] | 3D | CW-LLS | 532 nm | Linescan CCD | - | 3 mm |
| McLeod [60] | 3D | PG-LLS | - | - | 7 mm | 1 mm |
| Cochenour [61] | 3D | Mod-LLS | 532 nm | PMT | - | - |
| Rumbaugh [62] | 3D | Mod-LLS | 532 nm | APD | 4.5 cm | 1 cm |
| Dominicis [63] | 3D | Mod-LLS | 405 nm | PMT | 5 mm | 1 mm |
| Dalgleish [64] | Img. ¹ | CW-LLS | 532 nm | PMT | - | - |
| Dalgleish [64] | Img. ¹ | PG-LLS | 532 nm | PMT | - | - |
| Gordon [65] | Img. ¹ | PG-LLS | 488-514.5 nm | PMT | - | - |
| Mullen [66] | Img. ¹ | Mod-LLS | 532 nm | PMT | - | - |
1 The technique is aimed at extended range imaging.

4.3.1. Continuous Wave LLS

This subcategory uses a triangulation method to recover the depth. A camera-based triangulation device using a laser scan concept can be built using a moving laser pointer made of a mirror galvanometer and a line-scan camera, as shown in [58,59].
The geometric relationship between the camera, the laser scanner and the illuminated target spot is shown in Figure 3. The depth D of a target can be calculated from Equation (4).
Figure 3. Triangulation geometry principle for a laser scanning system.
$$D = L_1 \cos(\omega) \quad (4)$$

as:

$$L_1 = \frac{S \cos(\theta)}{\sin(\theta + \omega)} \quad (5)$$

since:

$$\sin(\theta + \omega) = \frac{O}{L_1}, \quad \text{and} \quad O = S \cos(\theta) \quad (6)$$

therefore:

$$D = \frac{S}{\tan(\theta) + \tan(\omega)} \quad (7)$$
where $S$ is the separation (i.e., the baseline) between the center of the scanning mirror and the center of the primary receiving lens of the camera (i.e., the center of perspective). Here, $\theta$ and $\omega$ are the scanning and camera pixel viewing angles, respectively.
The angles ω 0 and θ 0 are the offset mounting angles of the scanner and camera, and θ s and ω c are the laser beam angle known from a galvanometer or an encoder and the pixel viewing angle (with respect to the camera housing). Thus,
$$\theta = \theta_0 + \theta_s \quad (8)$$

$$\omega = \omega_0 + \omega_c \quad (9)$$

and:

$$D = \frac{S}{\tan(\theta_0 + \theta_s) + \tan(\omega_0 + \omega_c)} \quad (10)$$
Both θ 0 and ω 0 have to be computed by calibration, so that afterwards, the distance to the target can be computed.
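Once the offsets are calibrated, the triangulation relation above reduces to a one-line function; the baseline and angles below are arbitrary example values:

```python
import math

def lls_depth(baseline_m, theta0, theta_s, omega0, omega_c):
    """Depth of the illuminated spot for the laser-scanner/camera triangulation
    geometry of Figure 3: D = S / (tan(theta0 + theta_s) + tan(omega0 + omega_c)),
    with angles in radians measured from the perpendicular to the baseline."""
    return baseline_m / (math.tan(theta0 + theta_s) + math.tan(omega0 + omega_c))

# Assumed example rig: 0.5 m baseline, zero mounting offsets,
# a 30 degree laser angle and a 10 degree camera viewing angle.
S = 0.5
D = lls_depth(S, 0.0, math.radians(30.0), 0.0, math.radians(10.0))
```

Sweeping the galvanometer angle while reading the corresponding pixel angle yields one depth sample per scan position, which is how the scan builds a profile.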

4.3.2. Pulse Gated LLS

This ToF sensor has a simple principle: it illuminates a narrow area with a laser light pulse while keeping the receiver's shutter closed. Using an estimate of the target's distance, it then opens the shutter just as the light returns from the object, so that only light reflected by the target is captured. For instance, in Figure 4, the shutter would be opened from 80 to 120 ns to get rid of the unwanted backscatter.
Figure 4. Representative normalized returning signal from an LLS. At higher turbidity (dashed gray line), the backscatter peak is stronger and the target return is weaker. The common volume backscatter is light that has been deflected once, whilst the multiple backscatter has been deflected twice or more times.
This setup has been widely used in extended-range imagery. In the early 1990s, the LLS system in [65] was used on the USS Dolphin research submarine, and as a towed body, to perform high-resolution imagery at extended range. This prototype used an argon-ion gas laser, with a power budget too high for most unmanned vehicles (ROVs or AUVs).
Dalgleish et al. [64] compared PG-LLS with CW-LLS as imaging systems. Their experimental results demonstrate that the PG imager improves contrast and SNR (signal-to-noise ratio). The PG sensor becomes limited by forward backscatter at seven attenuation lengths, whilst the CW one at six.
Regarding true ToF 3D reconstruction, McLeod et al. [60] reported a commercial sensor [67] mounted on the Marlin AUV. Their setup achieves an accuracy of 7 mm when measuring a point at 30 m in a good visibility scenario.
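The gating logic can be sketched as a conversion from a desired range window to shutter open/close times. With the nominal speed of light in water assumed below, a window of roughly 9 to 13.5 m reproduces the 80-120 ns interval mentioned for Figure 4:

```python
LIGHT_SPEED_WATER = 3.0e8 / 1.33  # m/s, assumed nominal value

def gate_window_ns(range_min_m, range_max_m, speed=LIGHT_SPEED_WATER):
    """Shutter open/close times (ns after the laser pulse) that pass only
    light returning from targets between range_min and range_max; the
    factor 2 accounts for the round trip."""
    t_open = 2.0 * range_min_m / speed * 1e9
    t_close = 2.0 * range_max_m / speed * 1e9
    return t_open, t_close

t_open, t_close = gate_window_ns(9.0, 13.5)
```

Light backscattered closer than the near limit arrives while the shutter is still closed, which is how the gate suppresses the backscatter peak.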

4.3.3. Modulated LLS

A modulated LLS uses the frequency domain, instead of the spatial or time domain, to discern changes in the transmitted signal. As in sonar chirps (and radar), the modulation and posterior demodulation of the signal give insight into the distance from the sensor to the target.
As stated before, amplitude modulation is the only practicable modulation in underwater scenarios. The returned signal is subtracted from the original, and the distance is obtained by demodulating the remainder.
The same approach can also be used for extended-range imaging, as seen in [66], where Mullen et al. developed a modulated LLS that applies frequency modulation to the laser source in order to identify the distance at which the target was illuminated. The optical modulation is used to discriminate scattered light. Different frequencies were compared experimentally, finding that a high modulation frequency (90 MHz) reaches further than lower ones (50 MHz or 10 MHz). The setup used by the authors can be seen in Figure 5.
Figure 5. Laser line scanning setup including a modulated optical transmitter, an optical receiver and signal analyzer and a water tank facility. The interaction length is the distance over which the transmitted beam and the receiver field of view overlap. Reproduced from [66].
In [61], different modulation techniques based on ST-MP (single-tone modulated pulse) and PN-MP (pseudorandom coded modulated pulse) are compared for one-dimensional ranging. The results show that in clear water, the PN-MP stands as an improvement over the ST-MP due to the excellent correlation properties of pseudorandom codes.
In [62], a one-axis ranging solution is proposed. Although the authors characterize the solution as LiDAR, their setup is more similar to LLS, and the measurements are not taken from a plane. In the paper, a resolution of 1 cm from a distance of 60 cm is reported. This system could then be swept for a 3D reconstruction and work as a true LLS.
In [63], a simpler approach was taken, using an amplitude-modulated blue laser (405 nm) at 80 MHz, called the MODEM-based 3D laser scanning system, which can reconstruct objects 8.5 m away with less than 5% error. The system is similar to those described before, but this study focuses on the 3D reconstruction of the object, showing the potential of this technique for long-range underwater reconstruction.

4.4. Structured Light

These systems consist of a camera and a color (or white light) projector. The triangulation principle is applied between these two elements and the illuminated object.
The projector casts a known pattern on the scene, normally a set of light planes, as shown in Figure 6, where both the planes and the camera rays are known. Their intersection, which yields the 3D points, can be calculated as follows.
Figure 6. Triangulation geometry principle for a structured light system.
Mathematically, a line can be represented in parametric form as:
$$ r(t) = \begin{cases} x = \dfrac{u - c_x}{f_x}\, t \\ y = \dfrac{v - c_y}{f_y}\, t \\ z = t \end{cases} \quad (11) $$
where $(f_x, f_y)$ is the camera focal length along the x and y axes, $(c_x, c_y)$ is the principal point in the image and $(u, v)$ is one of the detected pixels in the image. Assuming a calibrated camera and the origin in the camera frame, the light plane can be represented as in Equation (12).
$$ \pi_n : A x + B y + C z + D = 0 \quad (12) $$
To find the intersection point, Equation (11) is substituted into Equation (12), giving Equation (13).
$$ t = \frac{-D}{A \dfrac{u - c_x}{f_x} + B \dfrac{v - c_y}{f_y} + C} \quad (13) $$
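Equations (11)-(13) translate directly into code. The following is a minimal sketch of the ray-plane triangulation (the function name is illustrative; the plane coefficients are assumed to come from calibration):

```python
def triangulate_sl(u, v, fx, fy, cx, cy, plane):
    """Intersect the camera ray through pixel (u, v) with the light plane
    A*x + B*y + C*z + D = 0, both expressed in the camera frame."""
    A, B, C, D = plane
    rx = (u - cx) / fx          # ray direction x component (for z = 1)
    ry = (v - cy) / fy          # ray direction y component (for z = 1)
    denom = A * rx + B * ry + C
    if abs(denom) < 1e-12:
        return None             # ray is parallel to the light plane
    t = -D / denom              # Equation (13)
    return (rx * t, ry * t, t)  # 3D point per Equation (11)
```

Each detected laser pixel, paired with the plane it belongs to, yields one 3D point this way.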
Different patterns have been used in the literature [68], although binary patterns are the most common, because they are easy to generate with a projector and simple to process. Binary patterns use only two states of light stripes in the scene, usually white light. At the beginning, there is only one division (black-to-white) in the pattern. In the following projections, a subdivision of the previous pattern is projected until the software cannot segment two consecutive stripes. The correspondence of consecutive light planes is solved using time multiplexing. The number of light planes achievable with this method is fixed, normally limited by the resolution of the projector.
Time multiplexing methods are based on the codeword created by the successive projection of patterns onto the object surface (see Figure 7). Therefore, the codeword associated with a position in the image is not completely formed until all patterns have been projected. Usually, the first projected pattern corresponds to the most significant bit, following a coarse-to-fine paradigm. Accuracy directly depends on the number of projections, as every pattern introduces finer resolution into the image. In addition, the codeword basis tends to be small, providing resistance against noise [68].
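The coarse-to-fine codeword construction can be sketched as follows. This is an illustrative plain binary code (real systems often prefer Gray codes for robustness), and the function names are assumptions:

```python
def binary_patterns(n_bits, width):
    """Generate n_bits stripe patterns: pattern k splits the projector
    width into 2^(k+1) alternating bands (coarse to fine)."""
    return [[(x * (1 << (k + 1)) // width) % 2 for x in range(width)]
            for k in range(n_bits)]

def decode_codeword(bits):
    """Combine the per-pattern binary observations at one pixel
    (MSB first, as projected coarse-to-fine) into a stripe index."""
    index = 0
    for b in bits:
        index = (index << 1) | b
    return index
```

After projecting all patterns, the decoded index at a pixel identifies which light plane illuminated it, which is exactly what the triangulation step needs.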
Figure 7. Binary structured light patterns. The codeword of a point p is created by the successive projection of patterns.
On the other hand, phase shifting patterns use sinusoidal projections that operate in the same way but cover a wider range of gray-scale values. By unwrapping the phase value, several light planes can be obtained for just one state of the equivalent binary pattern. Phase shifting patterns are also time multiplexing patterns. Frequency multiplexing methods provide dense reconstruction for moving scenarios, but present high sensitivity to the non-linearities of the camera, reducing the accuracy and the sensitivity to details on the surface of the target.
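For instance, with four sinusoidal patterns shifted by π/2 each, the wrapped phase at a pixel has a closed-form solution. This is a sketch under the standard four-step model I_k = A + B·cos(φ + kπ/2); the function name is illustrative:

```python
import math

def wrapped_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from four intensity samples
    I_k = A + B*cos(phi + k*pi/2):
      i3 - i1 = 2B*sin(phi),  i0 - i2 = 2B*cos(phi).
    Phase unwrapping (e.g., with a coarse binary code) is still
    needed to index the light plane unambiguously."""
    return math.atan2(i3 - i1, i0 - i2)
```

Note that the ambient term A and the modulation amplitude B cancel out, which makes the method robust to uneven illumination.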
The methods above use more than one projected pattern to obtain range information. De Bruijn sequences, in contrast, can achieve one-shot reconstructions by using pseudo-random sequences formed from alphabets of symbols in a circular string. If this theory is brought to matrices instead of vectors (i.e., strings), the resulting patterns are called M-arrays, which can be constructed by following a pseudo-random sequence [69]. Usually, these patterns use color to better distinguish the symbols of the alphabet. However, not all kinds of surface finishes and colors reflect the incoming color spectrum correctly back to the camera [70,71]. One-shot coded patterns have been used in air; however, to the best of the authors' knowledge, there are no reports of these codification strategies in underwater scenarios.
In the literature, Zhang et al. project a gray scale four-step sinusoidal fringe [72]; the pattern is therefore a time multiplexing method using four different patterns. In their article, SL is compared to SV, with SL behaving better on textureless objects. Similar results were obtained by Törnblom, projecting 20 different Gray-coded patterns in a pool [73]. An accuracy of 2% in the z direction was achieved with this system.
Bruno et al. [70] also project Gray-coded patterns with a final code shift of four-pixel-wide bands. With these last shifts, better accuracy can be obtained than by narrowing the pattern down to one-pixel-wide stripes, where finding all of the thin black and white lines is more difficult. In this setup, a total of 48 patterns were used. However, this particular setup calculates the 3D points using the positions of two cameras determined during the calibration phase. The projector is used to illuminate the scene, whilst depth is obtained from the stereo rig. Thus, no lens calibration is needed for the projector, and any commercially-available projector can be used without compromising the accuracy of the measurements. This system is therefore a hybrid between SL and SV.
Another way to triangulate information using structured light is to sweep a light plane. This light plane can be swept either using the available pixels in the projector or by moving the projector. Narasimhan and Nayar [74] sweep a light plane into a tank with diluted milk and recover 3D information even in high turbidity scenarios where it is impossible to see anything but backscattering when using conventional floodlights. By narrowing the illuminated area to a light plane, the shapes of the objects in the distance can be picked out and therefore triangulated.
The use of infrared projectors, such as the Kinect, has also been tested underwater [75]. The attempt confirmed that the absorption of the infrared spectrum is too strong to reach distances greater than a few centimeters.

Laser-Based Structured Light Systems

The systems presented in this section project laser light into the environment. Laser stripe (LS) systems are a subgroup of laser-based structured light systems (LbSLS) in which, although the pattern is fixed to be a line (a laser plane), the projector is swept across the field of view of the camera. Thus, a motorized element is needed in addition to the laser if the system holding the camera and the laser is not moving. The relative position and orientation of the laser and camera must be known in order to perform the triangulation process. The resolution of these systems is usually higher than that of stereoscopy, but they are still limited by absorption and scattering. The range of LS does not normally exceed 3 m in clear waters [76], as will be seen later in the commercial solutions.
According to Bodenmann [77], the attenuation of light is significantly more pronounced in water than in air or in space, so in order to obtain underwater images in color, it is typically necessary to be within 2 to 3 m of the seafloor or the object of interest. Some reported ranges for LS are: 3 m for Inglis [76], 250 mm for Jakas [78] and 2 m for Roman [79].
The use of an underwater stripe scanning system was first proposed by Jaffe and Dunn in [80] to reduce backscattering. Tetlow and Spours [81] show in their article a laser stripe system with an automatic threshold setup for the camera, making the sensor robust to pixel saturation when the laser reflection is too strong. To that end, they programmed a table with the calibrated thickness of the laser stripe depending on the distance to the target, achieving resolutions of up to five millimeters at a distance of three meters.
Kondo et al. [82] tested an LS system in the Tri-Dog I AUV. Apart from using it for 3D reconstruction, they also track the image in real time to guide the robot. To keep a safe distance from the seabed, they center the laser line in the camera image by changing the depth of the vehicle. They report a resolution of 40 mm at three meters.
Hildebrandt et al. [83] mount a laser line onto a servomotor that can be rotated 45° with an accuracy of 0.15°. The camera is a 640 × 480 CMOS shooting at 200 frames per second (fps) with a 90° HFOV (horizontal field of view). The system returns 300k points in 2.4 s. Calibration is performed in their article with a novel rig consisting of a standard checkerboard next to a grey surface on one side. The laser is better detected on a grey surface: on a white surface, light is strongly reflected, and the camera has to compensate for the vast amount of light by shortening the exposure time. The detection of the laser in the same plane as the calibration pattern is used to calculate the position of the laser sheet projector with respect to the camera.
In [84], a system consisting of a camera, a laser line and an LED light is mounted on the AUV Tuna Sand to gather 3D information, as well as imagery. The laser is pointed at the upper part of the image, whilst the LED illuminates the lower part; therefore, there is enough contrast to detect the laser line. In [77,85,86], a similar system, called SeaXerocks (3D mapping device), is mounted on the ROV Hyper-Dolphin. With this system, the authors perform 3D reconstructions in real intervention scenarios, such as hydrothermal sites and shipwrecks.
In [87], the Tuna Sand AUV is used with a different sensor. In this case, a camera and a motorized laser stripe are mounted in two independent watertight housings. By keeping the robot as static as possible, the laser is projected onto the scene whilst rotating it. Then, the camera captures the line deformation, from which the 3D information is recovered. In this paper, multiple laser scans from sea experiments at Kagoshima Bay are combined using the iterative closest point (ICP) algorithm. The reconstructed chimney is three meters tall at a 200-meter depth.
In [63,78], Jakas and Dominicis use a dual laser scanner to increase the field of view of a single laser stripe; the reported horizontal field of view is 180°. The system is very similar to the commercial sensor in [88]. They model the detected laser lines as Gaussian and explain an optimization method to calibrate the camera-to-laser transformation. The authors claim that the achieved measuring error is below 4%.
Prats et al. [89,90,91] mount a camera fixed to the AUV Girona 500 frame and a laser stripe on an underwater manipulator carried by the vehicle. The stripe sweeps the scene by means of the robot arm, and the resulting point cloud is used to determine the target grasping points. The sea bottom is tracked to estimate the robot motion during the scanning process, so small misalignments between the data can be compensated.
Different approaches to common laser stripe scanning have also been reported. In [92], two almost-parallel laser stripes are projected, and the distance between the imaged lines is used to compute the distance to the target. These values are used as an underwater rangefinder; however, 3D reconstruction was not the aim of the research.
In [93], Caccia mounts four laser pointers aligned with a camera on an ROV. The four imaged pointers are used to calculate the altitude and the heading of the vehicle, assuming the seabed is flat.
Yang et al. mount a camera and a vertical laser stripe on a translation stage [94]. They recover 3D data by interpolating from a data table previously acquired during calibration. Whenever a laser pixel is detected in the image, its depth value is calculated from the four closest points in the calibration data.
Massot and Oliver [95,96,97] designed a system that improves on simpler laser stripe approaches by using a diffractive optical element (DOE) to shape the beam of a laser pointer into 25 parallel lines, called a laser-based structured light (LbSL) system. The pattern is projected onto the environment and recovered by a color camera. In one camera shot, this solution is capable of recovering sparse 3D information, as seen in Figure 8, whilst with two or more shots, denser information can be obtained. The system is targeted at underwater autonomous manipulation stages, where a high density point cloud of a small area is needed and a one-shot, fast reconstruction aids the intervention.
In Table 4, the different SL references are compared. For the solutions with no clear results, the resolution has been deduced from the figures in their respective articles.
Table 4. Summary of structured light 3D reconstruction solutions.
| References | Type | Color/Wavelength | Pattern | Accuracy | Resolution |
|---|---|---|---|---|---|
| Zhang [72] | SL | Grayscale | Sinusoidal fringe | ≈1 mm | - |
| Törnblom [73] | SL | White | Binary pattern | 4 mm | 0.22 mm |
| Bruno [70] | SL | White | Binary pattern | 0.4 mm | 0.3 mm |
| Narasimhan [74] | SL | White | Light plane sweep | 9.6 mm | - |
| Bodenmann [84,85] | LS | 532 nm | Laser line | - | - |
| Yang [94] | LS | 532 nm | Laser line | - | - |
| Kondo [82] | LS | 532 nm | Laser line | - | ≈1 cm |
| Tetlow [81] | Mot. LS | 532 nm | Laser line | 1 cm | 5 mm |
| Hildebrandt [83] | Mot. LS | 532 nm | Laser line | - | - |
| Prats [89] | Mot. LS | 532 nm | Laser line | ≈1 cm | - |
| Nakatani [87] | Mot. LS | 532 nm | Laser line | ≈1 cm | - |
| Jakas [63,78] | Dual LS | 405 nm | Laser line | See [88] | ≈1 cm |
| Massot [96] | LbSL | 532 nm | 25 laser lines | 3.5 mm | - |
Figure 8. 3D reconstruction of a 1-kg plate using LbSLS from [95].

4.5. Photometric Stereo

In situations where light stripe scanning takes too long to be practical, photometric stereo provides an attractive alternative. This technique for scene reconstruction requires a small number of images captured under different lighting conditions. In Figure 9, there is a representation of a typical PhS setup with four lights.
3D information can be obtained by changing the location of the light source whilst keeping the camera and the object in a fixed position. Narasimhan and Nayar present a novel method to recover albedo, normals and depth maps from scattering media [74]. Usually, this method requires a minimum of five images. In special conditions, such as the ones presented in [74], four different light conditions can be enough.
In [98], Tsiotsios et al. show that three lights are enough to compute tridimensional information. They also compensate for the backscatter component by fitting a backscatter model for each pixel.
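The underlying per-pixel computation (leaving aside the backscatter compensation of [98]) can be sketched as a least-squares solve, assuming a Lambertian surface and known light directions; the function name is illustrative:

```python
import numpy as np

def photometric_stereo(L, I):
    """Recover albedo and surface normal at one pixel from Lambertian
    intensities I (shape (n,)) under n >= 3 known light directions L
    (shape (n, 3)). Solves I = L @ g with g = albedo * normal."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)     # magnitude of g is the albedo
    normal = g / albedo            # direction of g is the unit normal
    return albedo, normal
```

Repeating this at every pixel yields a normal map, which is then integrated to obtain the depth map.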
As with time multiplexing SL techniques, PhS also suffers from long acquisition times; hence, these techniques are not suitable for moving objects. However, the cited references report them to be effective in clear waters for close-range static objects.
Figure 9. Photometric stereo setup: four lights are used to illuminate an underwater scene. The same scene with lighting from different sources results in the images used to recover three-dimensional information [99].

4.6. Structure from Motion

SfM is a triangulation method that consists of taking images of an object or scene using a monocular camera. From these camera shots, image features are detected and matched between consecutive frames to estimate the relative camera motion and, thus, its 3D trajectory.
First, suppose a calibrated camera, where the principal point, lens distortion and refractive elements are known, ensuring an accurate 3D result.
Given $m$ images of $n$ fixed 3D points, the $m$ projection matrices $P_i$ and the $n$ 3D points $X_j$ are to be estimated from the $m \cdot n$ correspondences $x_{ij}$:
$$ x_{ij} = P_i X_j, \quad i = 1, \dots, m, \quad j = 1, \dots, n $$
Therefore, if the entire scene is scaled by some factor $k$ and, at the same time, the projection matrices by a factor of $1/k$, the projection of the scene points remains the same. Thus, with SfM alone, the scale is not available, although there are methods that compute it from known objects or from the known constraints of the robot carrying the camera.
$$ x = P X = \frac{1}{k} P (k X) $$
The one-parameter family of solutions, parametrized by $\lambda$, is:
$$ X(\lambda) = P^{+} x + \lambda c $$
where $P^{+}$ is the pseudo-inverse of $P$ (i.e., $P P^{+} = I$) and $c$ is its null vector, namely the camera center, defined by $P c = 0$.
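This back-projection can be sketched numerically as follows: every choice of λ gives a point on the viewing ray, and all of them project back to the same image point up to scale. The function name is illustrative:

```python
import numpy as np

def backproject(P, x, lam):
    """Point X(lambda) = P^+ x + lambda * c on the one-parameter family
    of 3D solutions that all project to the homogeneous image point x."""
    P_pinv = np.linalg.pinv(P)  # satisfies P @ P_pinv = I for a rank-3 P
    _, _, Vt = np.linalg.svd(P)
    c = Vt[-1]                  # null vector of P: the camera centre, P @ c = 0
    return P_pinv @ x + lam * c
```

Intersecting such rays from two or more views is what resolves the depth along the ray (but not the global scale).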
SfM is the least expensive approach in terms of hardware and the easiest to install on a real robot. Only a still camera or a video recorder is needed, with enough storage to keep a full dive in memory. Later, the images can be processed to obtain the required 3D models.
In the underwater medium, both feature detection and matching suffer from diffusion, non-uniform light and, eventually, sun flickering, making it more difficult to detect the same feature from different viewpoints. Depending on the distance from the camera to the 3D point, the absorption and scattering components vary, changing the color and sharpness of that particular feature in the image. More difficulties arise if images are taken across the air-water interface [100].
Sedlazeck et al. show in [101] a real 3D scenario reconstructed from the ROV Kiel 6000 using an HD color camera. Features are selected using a corner detector based on image gradients. Later, the RANSAC [102] procedure is used to filter outliers after the features have been matched.
Pizarro et al. [103] use the SeaBED AUV to perform optical surveys, equipped with a 1280 × 1024 px CCD camera. The feature detector used is a modified Harris corner detector, and its descriptor is a generalized color moment.
In [104], Meline et al. compare Harris and SIFT features using a 1280 × 720 px camera in shallow water. In the article, the authors reconstruct a statue bust. They conclude that SIFT is not robust to speckle noise, contrary to Harris. Furthermore, Harris presented a better inlier count in the different scenarios.
McKinnon et al. [105] use GPU SURF features and a high resolution camera of 2272 × 1704 px to reconstruct a piece of coral. This setup presents several challenges in terms of occlusions of the different views. With their SfM approach, they achieve 0.7 mm accuracy at 1 to 1.5 m.
Jordt-Sedlazeck and Koch develop a novel refractive structure from motion algorithm that takes into account the refraction of glass ports in water [106]. By considering the refraction coefficients at the air-glass-water interfaces, their so-called refractive SfM improves the results of generic SfM.
Cocito et al. [107] use images captured by divers that always contain a scaling cube to recover scaled 3D data. The processing pipeline requires an operator to outline silhouettes of the area of interest of the images. In the case of the application in that paper, they were measuring bryozoan colonies’ volume.
In [108], the documentation of an archaeological site where experimental cleaning operations were conducted is shown. Commercial software, Photoscan by Agisoft, was used to perform a multi-view 3D reconstruction.
Nicosevici et al. [109] use SIFT features in a robotics approach, with an average error of 11 mm.
Ozog et al. [110] reconstruct a ship hull from an underwater camera that also acts as a periscope when the vehicle navigates on the surface. Using SLAM and a particle filter, they achieve faster execution times (compared to FabMap). The error distribution achieved has a mean of 1.31 m and a standard deviation of 1.38 m. However, using planar constraints, they reduced the mean and standard deviation to 0.45 and 0.19 m, respectively.
The solutions presented are summarized in Table 5. Known reference distances must be visible in the images to recover the correct scale. In the solutions where a result is given, the authors have manually scaled the resulting point cloud to match a particular feature or human-made object.
Table 5. Summary of structure from motion 3D reconstruction solutions.
| References | Feature | Matching Method | Accuracy | Resolution |
|---|---|---|---|---|
| Sedlazeck [101] | Corner | KLT tracker | - | - |
| Pizarro [103] | Harris | Affine invariant region | 3.6 cm | - |
| Meline [104] | Harris | SIFT | - | - |
| McKinnon [105] | SURF | SURF | 0.7 mm | - |
| Jordt-Sedlazeck [106] | - | KLT tracker | - | - |
| Cocito [107] | Silhouettes | Manually | ≈1 cm | - |
| Bruno [108] | SIFT | SIFT | 4.5 mm | - |
| Nicosevici [109] | SIFT | SIFT | 11 mm | - |
| Ozog [110] | SIFT | SIFT | 0.45 m | - |

4.7. Stereo Vision

Stereoscopy follows the same working principle as SfM, but features are matched between left and right frames of a stereo camera to compute 3D correspondences. Once a stereo rig is calibrated, the relative position of one camera with respect to the other is known, and therefore, the scale ambiguity is solved.
The earliest stereo matching algorithms were developed in the field of photogrammetry for automatically constructing topographic elevation maps from overlapping aerial images. In computer vision, the topic of stereo matching has been widely studied [111,112,113,114,115], and it is still one of the most active research areas.
Suppose two cameras $C_L$ and $C_R$ and two similar features $F_L$ and $F_R$ in their respective images. To compute the 3D coordinates of the feature $F$, whose projection in $C_L$ is $F_L$ and in $C_R$ is $F_R$, we trace a line $L_L$ that crosses the focal point of $C_L$ and $F_L$, and another line $L_R$ that crosses the focal point of $C_R$ and $F_R$. If both cameras' calibrations were perfect, $F = L_L \cap L_R$. However, as camera calibration is usually solved by least squares, the solution is not always exact; therefore, the approximate solution is taken as the closest point between $L_L$ and $L_R$ [116].
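This closest-point construction is commonly implemented as the midpoint of the shortest segment between the two rays. The following is a sketch with illustrative names; in practice, the ray origins and directions come from the calibrated camera geometry:

```python
import numpy as np

def midpoint_triangulation(c_l, d_l, c_r, d_r):
    """Midpoint of the shortest segment between two viewing rays
    c + t*d (ray origins c, directions d). With noisy calibration
    the rays are skew, so the midpoint is the usual estimate of F."""
    w = c_l - c_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b          # zero only for parallel rays
    t_l = (b * e - c * d) / denom  # parameter of closest point on left ray
    t_r = (a * e - b * d) / denom  # parameter of closest point on right ray
    return 0.5 * ((c_l + t_l * d_l) + (c_r + t_r * d_r))
```

For truly intersecting rays, the midpoint coincides with the intersection point itself.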
By knowing the relative position of the cameras and the location of the same feature in both images, the 3D coordinates of the feature in the world can be computed by triangulation. In Figure 10, the corresponding 3D point of the image coordinates $x = (u_L, v_L)$ and $x' = (u_R, v_R)$ is the point $p = (x_W, y_W, z_W)$; the corresponding image points satisfy the epipolar constraint $x'^{\top} F x = 0$, where $F$ is the fundamental matrix [116].
Once the camera rig is calibrated (known baseline, relative pose of the cameras and no distortion in the images), 3D imaging can be obtained by calculating the disparity for each pixel, i.e., performing a 1D search for each pixel between the left and right images, where block matching is normally used. The disparity is the difference in pixels between the left and right image positions where the same patch has been found, so the depth $z$ is given by:
$$ z = \frac{f \cdot b}{d} $$
where d is the disparity in pixels, f is the focal distance in pixels, b is the baseline in meters and z is the depth or distance of the pixel perpendicular to the image plane, in meters.
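Under these conventions, a pixel plus its disparity converts to a metric 3D point in the camera frame as follows (a minimal sketch with illustrative names; the principal point $(c_x, c_y)$ and focal length are in pixels):

```python
def pixel_to_point(u, v, d_px, f_px, cx, cy, baseline_m):
    """Back-project a pixel with disparity d_px into camera coordinates
    using z = f*b/d; x and y then follow from the pinhole model."""
    if d_px <= 0:
        raise ValueError("non-positive disparity: no valid match")
    z = f_px * baseline_m / d_px   # depth in metres
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return (x, y, z)
```

Note how depth resolution degrades quadratically with distance: at large z, a one-pixel disparity error causes a much larger depth error than at close range.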
Once these 3D data have been gathered, the registration between consecutive frames can be done using 2D or 3D features or even 3D registration methods, such as ICP.
Many different feature descriptors and matchers have been used in the literature. SIFT [117,118,119,120,121,122] is one of the most used, as well as SURF [123], or even direct 3D registration with SIFT 3D [118] or ICP [117]. For instance, in [124], Servos et al. perform refractive projection correction on depth images generated from a Bumblebee2 camera (12-cm baseline). The results obtained with this correction have better accuracy and more pixel correspondences compared to standard methods. The registration is done directly in the generated point cloud using ICP.
Schmidt et al. [120] use commercial GoPro cameras to set a 35-mm baseline stereo rig and perform micro bathymetry using SIFT features. They achieve a resolution of 3 mm in their reconstructions.
Figure 10. Triangulation geometry principle for a stereo system.
In [122], the stereo system IRIS is hung from the tip of the arm of the Victor6000 ROV. The system uses SIFT combined with RANSAC to discard outliers. After that, a sparse bundle adjustment is performed to correct the navigation to survey natural underwater objects.
In [125], Hogue et al. combine a Bumblebee stereo camera and an inertial unit housed in a watertight case, called Aquasensor. This system is used to reconstruct and register dense stereo scenes. The reconstruction shows high drift if the IMU is not used; an erroneous camera model is assumed to be the cause of this inaccuracy. The system is used by the authors to perform a reconstruction of a sunken barge.
Beall et al. [123] use a wide baseline stereo rig and extract SURF features from left and right image pairs. They track these features to recover the structure of the environment after a SAM (smoothing and mapping) step. Then, the 3D points are triangulated using Delaunay triangulation, and the image texture is mapped to the mesh. This setup is applied to reconstruct coral reefs in the Bahamas.
Negre et al. [126,127] perform 3D reconstruction of underwater environments using a graph SLAM approach in a micro AUV equipped with two stereo rigs. In Figure 11, a 3D reconstruction of Santa Ponça Bay is displayed, covering an area of 25 × 10 m.
Johnson-Roberson et al. [128] studied the generation and visualization of large-scale reconstructions using stereo cameras. In their manuscript, image blending techniques and mesh generation are discussed to improve visualization by reducing the complexity of the scene in proportion to the viewing distance or relative size in screen space.
Fused stereoscopy and MBS have been reported in [129]. There, Galceran et al. provide a simultaneous reconstruction of the frontal stereo camera and the downwards-looking MBS.
Another example of this set of sensors is shown by Gonzalez-Rivero [130], where its output is used to monitor a coral reef ecosystem and to classify the different types of corals.
Figure 11. 3D reconstruction from SV using graph SLAM ( 25 × 10 m, Mallorca) [127,131].
Nurtantio et al. [119] use three cameras and extract SIFT features. The reconstruction of the multi-view system is triangulated using Delaunay triangulation. However, they manually preprocess the images to select those suitable for an accurate reconstruction; the outlier removal stage is also manual.
Inglis and Roman constrain stereo correspondences using multibeam sonar [132]. From the Hercules ROV, navigation data, multibeam and stereo are preprocessed to reduce the error, and then, the sonar and optical data are mapped into a common coordinate system. They back project the range data coming from the sonar to the camera image and limit the available z correspondence range for the algorithm. To simplify this approach, they tile the sonar back projections into the image and generate tiled minimum and maximum disparity values for an image region (e.g., a tile). The number of inliers obtained with this setup increases significantly compared to an unconstrained system.
In Table 6, the different solutions are presented and compared.
Table 6. Summary of stereoscopy 3D reconstruction solutions.
| References | Feature | Matching Method | Baseline | Accuracy | Resolution |
|---|---|---|---|---|---|
| Kumar [117] | SIFT | RANSAC and ICP | - | - | - |
| Jasiobedzki [118] | SIFT | SIFT3D and SLAM | - | - | - |
| Nurtantio [119] | SIFT | SIFT | 8 and 16 cm | - | - |
| Schmidt [120] | SIFT | SIFT | 35 mm | - | 3 mm |
| Brandou [122] | SIFT | SIFT | - | - | - |
| Beall [123] | SURF | SURF and SAM | 60 cm | - | - |
| Servos [124] | - | ICP | 12 cm | 26.4 cm | - |
| Hogue [125] | Corners | KLT tracker | 12 cm | 2 cm | - |
| Inglis [132] | SIFT | SIFT | 42.5 cm | - | - |

4.8. Underwater Photogrammetry

It is commonly accepted that photogrammetry is defined as the science or art of obtaining reliable measurements by means of photographs [133]. Therefore, any practical 3D reconstruction method that uses photographs (e.g., imaging-based methods) to obtain measurements is a photogrammetric method. Photogrammetry comprises methods of image measurement and interpretation, often shared with other scientific areas, in order to derive the shape and location of an object or target from a set of photographs. Hence, techniques such as structure from motion and stereo vision belong to both the photogrammetric and computer vision communities.
In photogrammetry, it is common to set up a camera in a large field looking at distant calibration targets whose exact location has been precomputed using surveying equipment. There are different categories for photogrammetric applications depending on the camera position and object distance. For example, aerial photogrammetry is normally surveyed at a height of 300 m [134].
On the other hand, close-range photogrammetry applies to objects ranging from 0.5 to 200 m in size, with accuracies of under 0.1 mm and around 1 cm at each end, respectively. In a close-range setup, the cameras observe a specific volume where the object or area to reconstruct is totally or partially in view and has been covered with calibration targets. The location of these targets can be known beforehand or calculated after the images have been captured, if their shape and dimensions are known [134].
Image quality is a very important topic in photogrammetry. One of the most important fields of this community is camera calibration, a topic already introduced in Section 2.1. If absolute metric accuracy is required, it is imperative to pre-calibrate the cameras using one of the techniques previously mentioned and to use ground control points to pin down the reconstruction. This is particularly true for classic photogrammetry applications, where the reporting of precision is almost always considered mandatory [135].
Underwater reconstructions can also be referred to as underwater photogrammetric reconstructions when they have a scale or dimension associated with the objects or pixels of the scene (e.g., if the resulting 3D model is metric) and if the data were gathered using cameras.
According to Abdo et al. [136], an underwater photogrammetric system for obtaining accurate measurements of complex biological objects needs to: (1) be suitable for working in restrictive spaces; (2) allow the investigation of relatively large areas, covering one or numerous organisms; (3) allow data to be acquired easily, in situ and efficiently; and (4) provide a measurement process that is easy to perform, precise, accurate and accomplished in a reasonable time.
The most accurate way to recover structure and motion [137] is to perform robust non-linear minimization of the measurement (re-projection) errors, which is commonly known in the photogrammetry communities as bundle adjustment [28]. Bundle adjustment is now the standard method of choice for most structure-from-motion problems and is commonly applied to problems with hundreds of weakly calibrated images and tens of thousands of points. In computer vision, it was first applied to the general structure from motion problem and then later specialized for panoramic image stitching [28].
Image stitching originated in the photogrammetry community, where more manually-intensive methods based on surveyed ground control points or manually registered tie points have long been used to register aerial photos into large-scale photo-mosaics [23]. The literature on image stitching dates back to work in the photogrammetry community in the 1970s [138,139].
Underwater photogrammetry can also be associated with other types of measures, such as measuring the volume of biological organisms by 3D reconstruction using a stereo pair [136], assessing the sustainability of fishing stocks [140], examining spatial biodiversity, counting fish in aquaculture [141], continuously monitoring sediment beds [142] or mapping and understanding seabed habitats [13,21].
Zhukovsky et al. [143] reconstruct an antique ship, similar to [144]. In [32], Menna et al. reconstruct the sunken vessel Costa Concordia using photogrammetric targets to reconstruct and assess the damaged hull.
Photogrammetry is also performed by fusing data from diverse sensors, such as in [145], where chemical sensors, a monocular camera and an MBS are fused in an archaeological investigation, and in [146], where a multimodal topographic model of Panarea Island is obtained using a LiDAR, an MBS and a monocular camera.
Planning a photogrammetric network with the aim of obtaining a highly accurate 3D object reconstruction is considered a challenging design problem in vision metrology [147]. The design of a photogrammetric network is the process of determining an imaging geometry that allows accurate 3D reconstruction. There are very few examples of static deployments of cameras working as underwater photogrammetric networks [148], because this type of approach is not readily adapted to such a dynamic and non-uniform environment [149].
In [150], de Jesus et al. show an application of photogrammetry for swimming movement analysis with four cameras, two underwater and two aerial. They use a calibration prism composed of 236 markers.
Leurs et al. [151] estimate the size of white sharks using a camera and two laser pointers, with an accuracy of ± 3 cm from a distance of 12 m.
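The measurement principle behind such paired laser pointers is simple: the known separation between two parallel beams fixes the image scale at the target plane, so any pixel distance measured there can be converted to physical units. A minimal sketch with made-up numbers:

```python
# Parallel-laser scaling principle (illustrative, not the exact rig of [151]):
# two parallel lasers a known distance apart appear as two dots in the image,
# giving the mm-per-pixel scale in the plane of the target.

def scale_from_lasers(laser_sep_mm, dot_dist_px):
    """Millimetres represented by one pixel at the target plane."""
    return laser_sep_mm / dot_dist_px

def target_length_mm(length_px, laser_sep_mm, dot_dist_px):
    """Physical length of a feature spanning length_px pixels."""
    return length_px * scale_from_lasers(laser_sep_mm, dot_dist_px)

# Lasers mounted 100 mm apart, imaged 50 px apart -> 2 mm/px;
# an animal spanning 1200 px then measures 2400 mm (2.4 m).
print(target_length_mm(1200, 100.0, 50.0))  # 2400.0
```

The method assumes the lasers stay parallel and that the measured feature lies roughly in the plane of the laser dots; tilt of the target relative to that plane is the main error source.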
Configurations other than plain monocular or stereo camera systems have also been reported. In [152], Brauer et al. use a stereo rig and a projector (SL). Using fringe projection, they achieve a measurement field of 200 × 250 mm and a resolution of 150 μm.
In [153], Ekkel et al. use a stereo laser profiler (four cameras: two for positioning with targets and two for laser triangulation) with a 640-nm laser. They report an accuracy of 0.05 mm in the object plane.
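Laser triangulation of this kind reduces, in a 2D cross-section, to intersecting the camera ray through the detected stripe pixel with the known laser ray. The sketch below uses an assumed, simplified geometry (not the actual design of [153]) and illustrative numbers:

```python
import math

# Simplified 2D laser-triangulation model: a camera at the origin looks
# along +z; a laser at lateral offset b (metres) projects a ray tilted by
# theta toward the optical axis. The pixel column x where the laser spot
# appears fixes the depth z of the surface point.

def triangulate_depth(x_px, f_px, baseline_m, theta_rad):
    """Depth of the laser spot: camera ray (z*x/f, z) meets laser ray
    (b - z*tan(theta), z) where z*x/f = b - z*tan(theta)."""
    return baseline_m / (x_px / f_px + math.tan(theta_rad))

f = 1000.0              # focal length in pixels
b = 0.2                 # camera-laser baseline in metres
theta = math.radians(20)

z = triangulate_depth(150.0, f, b, theta)
print(round(z, 3))

# Sensitivity: one pixel of stripe-localisation error changes z by
# roughly z**2 / (f * b), which is why short baselines limit accuracy.
print(round(z ** 2 / (f * b), 6))
```

The quadratic growth of the depth error with range is the reason triangulation scanners quote their accuracy at a stated stand-off distance.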

5. Commercial Solutions

Different commercial solutions exist for gathering 3D data or helping to compute it. Table 7 shows a selection of alternatives.
Teledyne sells an underwater LLS called INSCAN [154]. This system must be deployed underwater or fixed to a structure. The device samples 1 m² in 5 s at a 5-m range.
SL1 is a similar device from 3D at Depth [67]. In fact, this company worked with Teledyne in this design [155], and the specifications of these two pieces of equipment are quite close.
3DLS is a triangulation sensor formed by an underwater dual laser projector and a camera. It is produced by Smart Light Devices and uses a 15-W green laser.
2G Robotics offers three models of triangulation-based laser scanners covering different ranges [156,157,158]. These are motorized solutions, so they must be deployed and remain static during a scan.
Savante provides three products. Cerberus [159] is a triangulation sensor formed by a laser pointer and a receiver, capable of recovering 3D information. SLV-50 [160] is another triangulation sensor, formed by a laser stripe and a high-sensitivity camera. Finally, Lumeneye [161] is a laser stripe projector that only casts laser light on the scene.
Similarly to Savante's Lumeneye, Tritech provides a green laser sheet projector called SeaStripe [162]. The 3D reconstruction must be performed by the end user's camera and software.
Table 7. Available commercial solutions to perform 3D reconstruction.
Name | Company | Min Range (m) | Max Range (m) | Depth (m) | Resolution (mm) | Field of View (deg) | Motorized | Method
INSCAN [154] | Teledyne CDL | 2 | 25 | 3000 | 5 | 30 × 30 × 360 | yes | TOF
SL1 [67] | 3D at Depth | 2 | 30 | 3000 | 4 | 30 × 30 × 360 | yes | TOF
3DLS [88] | Smart Light Devices | 0.3 | 2 | 4000 | 0.1 | - | - | Triangulation
ULS-100 [156] | 2G Robotics | 0.1 | 1 | 350 | 1 | 50 × 360 | yes | Triangulation
ULS-200 [157] | 2G Robotics | 0.25 | 2.5 | 350 | 1 | 50 × 360 | yes | Triangulation
ULS-500 [158] | 2G Robotics | 1 | 10 | 3000 | 3 | 50 × 360 | yes | Triangulation
Cerberus [159] | Savante | - | 10 | 6000 | - | - | - | Triangulation
SLV-50 [160] | Savante | - | 2.5 | 6000 | 1 | 60 | no | Triangulation
Lumeneye [161] | Savante | - | - | 6500 | - | 65 | no | Laser only
SeaStripe [162] | Tritech | - | - | 4000 | - | 64 | no | Laser only

6. Conclusions and Prospects

The selection of a 3D sensing system for underwater applications is non-trivial. Basic aspects to consider are: (1) the payload volume, weight and power available, if the system is to be carried on board a platform; (2) the measurement time; (3) the budget; and (4) the expected quality of the gathered data. Regarding quality, optical sensors are very sensitive to water turbidity and surface texture. Consequently, factors such as the target dimensions, surface, shape or accessibility may influence the choice of sensor and its fitness for the reconstruction problem. Table 8 compares the solutions surveyed in this article according to their typical operating range, resolution, ease of use, relative price and suitability for different platforms.
Underwater 3D mapping has been historically carried out by means of acoustic multibeam sensors. In that case, the information is normally gathered as an elevation map, and more recently, color and texture can be added afterwards from photo-mosaics, if available.
Color or texture information must be acquired using cameras operating at relatively short distances (<5 m, typically) and at a low cruise speed. In general, mono-propeller AUVs are not appropriate for optical imaging applications, because they cannot slow down as required by the optical equipment. On the other hand, hovering vehicles are suitable for imaging-based sensors, as they can adjust their velocity to the sensors' needs. In some particular cases, even divers can be a choice.
Optical mapping can also be accomplished with SfM alone and, as industrial ROVs most often incorporate a video camera, it is feasible to record the needed images and reconstruct an entire scene (see Campos et al. [163], for example). However, these reconstructions lack correct scale, and they are computationally demanding. If a stereo rig is used instead, SV techniques can be applied, solving the scale problem.
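The reason a stereo rig recovers metric scale is that its baseline is known in physical units, so depth follows directly from disparity; monocular SfM yields the same geometry only up to an unknown global scale factor. A minimal sketch with illustrative values:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# With a calibrated rig, the baseline B is known in metres, so Z is metric.

def depth_from_disparity(disparity_px, f_px, baseline_m):
    """Metric depth of a point from its disparity between rectified views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# A 0.1 m baseline, an 800 px focal length and a 40 px disparity place
# the point 2 m away. With an unknown baseline (the monocular SfM case),
# the same disparity would leave the scale of the scene undetermined.
print(depth_from_disparity(40.0, 800.0, 0.1))  # 2.0
```
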
According to Bruno et al. [70], SV is the easiest way to obtain the depth of a submarine scene. These passive sensors are widely used because of their low cost and simplicity. Similarly to SfM, SV needs textured scenes to achieve satisfactory results, giving rise to missing parts in the final reconstruction corresponding to untextured regions.
Table 8. Comparison of the sensors and techniques for 3D reconstruction.
3D Technique | Range | Platform | Resolution | Ease of Assembly | Price
MBS | <11,000 m | V¹, T², ROV, AUV | Low | Intermediate | High
SBS | <6000 m | V, ROV, AUV | Low | Intermediate | High
SSS | <150 m | T, AUV | Low | Intermediate | High
IS | <150 m | V, T, ROV, AUV | Low | Intermediate | High
LiDAR | <20 m | Aerial | Low | - | High
CW-LLS | <10 m | ROV | Intermediate | Low | High
PG-LLS | <10 m | ROV | Intermediate | Low | High
Mod. LLS | <10 m | ROV | Intermediate | Low | High
SfM | <3 m | ROV, AUV | Intermediate | High | Low
SV | <3 m | ROV, AUV | Intermediate | Intermediate | Low
PhS | <3 m | ROV | Intermediate | Intermediate | Low
VW-SL | <3 m | ROV, AUV | High | Intermediate | Intermediate
CW-SL | <10 m | ROV, AUV | High | Intermediate | Intermediate
¹ Vessel; ² Towed.
To overcome the above-mentioned problems of SfM and SV and to increase the resulting resolution, SL uses light projection to cast features onto the environment. These sensors can work at short distances with high resolution, even on objects without texture. The drawback, compared to SV, is a slower acquisition time caused by the need to sweep the projection across the scene or to project several different patterns. The acquisition time is a relevant problem that limits the use of SL systems in real conditions, where relative movement between the sensor and the scene can give rise to reconstruction errors.
In addition, acquiring data from dark objects using SL is, in general, strongly influenced by illumination and contrast conditions [70]. Shiny objects are also challenging for SL, because the reflected light may mislead the pattern decoder. Moreover, due to the large illuminated water volume, this technique is strongly affected by scattering, reducing its range.
To minimize absorption, as well as common-volume scattering, LbSL systems take advantage of sources in the green-blue region of the spectrum, extending their usable range. To further reduce scattering effects, the receiver window can be narrowed, as in LLS sensors; moreover, the emitter and receiver can be pulse gated [64], although this strategy can be limited by a decline in contrast.
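The advantage of green-blue sources follows from the Beer-Lambert law, I(z) = I₀ e^(−cz). With the rough, illustrative attenuation coefficients used below (orders of magnitude for clear ocean water, not measured data), far more green light than red survives a 10 m path:

```python
import math

# Beer-Lambert attenuation of a collimated beam over a water path.
# Coefficients are rough, illustrative values for clear ocean water.

def surviving_fraction(c_per_m, path_m):
    """Fraction of light remaining after path_m metres: I/I0 = exp(-c*z)."""
    return math.exp(-c_per_m * path_m)

for colour, c in [("red (650 nm)", 0.35), ("green (532 nm)", 0.05)]:
    print(colour, round(surviving_fraction(c, 10.0), 3))
```

Under these assumed coefficients, roughly 60% of the green light survives 10 m while only a few percent of the red does, which is why 532-nm lasers dominate underwater LbSL and LLS designs.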
On the other hand, when a precise and closer look at an object or structure is needed, LLS technology is not always suitable, as it has a large minimum measuring distance.
Amongst optical solutions, laser-based sensors present a good trade-off between cost and accuracy, as well as an acceptable operational range. Accordingly, regarding the foreseeable future, more research on laser-based structured light and on laser line scanning underwater is needed. These new devices should be able to scan while the sensor is moving, just like MBS, so software development and enhanced drivers are also required.
Another challenge for the future is to develop imaging systems that can eliminate or reduce scattering while imaging. Solutions such as pulse gated cameras and laser emitters are effective [164], but still expensive.
Overall, it is clear that no single optical imaging system fits all 3D reconstruction needs, which cover very different ranges and resolutions. It is also important to point out the lack of systematic studies comparing, as precisely as possible, the performance of different sensors in the same scenario and conditions. One such study is authored by Roman et al. [79], who compared laser-based SL to SV and MBS, mapping a small area of a real underwater scenario using an ROV. In that case, the stereo data showed less definition than the sonar and the SL. The comparison was made during a survey where laser images were collected at 3 Hz, at a speed of 2 to 5 cm/s from 3 m above the bottom, whilst stereo imagery was captured in a separate survey at 0.15 Hz, at a speed of 15 cm/s and a distance of 1.5 to 3 m, giving a minimum overlap of 50%. MBS data were captured during the laser survey at 5 Hz. As these numbers show, different data rates translate into different spatial resolutions. Nonetheless, Roman et al. concluded that SL offers a high-resolution mapping capability, better than SV and MBS for close-range reconstructions, such as the investigation of archaeological sites.
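The effect of those acquisition rates can be checked with a back-of-the-envelope calculation: the along-track sample spacing is simply the survey speed divided by the acquisition rate.

```python
# Along-track sample spacing from the rates reported by Roman et al. [79].

def along_track_spacing_cm(speed_cm_s, rate_hz):
    """Distance travelled between consecutive samples, in centimetres."""
    return speed_cm_s / rate_hz

# Laser profiles at 3 Hz while moving 2-5 cm/s: one profile every ~0.7-1.7 cm.
print(along_track_spacing_cm(2, 3), along_track_spacing_cm(5, 3))
# Stereo frames at 0.15 Hz while moving 15 cm/s: one image pair per metre.
print(along_track_spacing_cm(15, 0.15))
```

The two-orders-of-magnitude difference in along-track spacing is consistent with the lower definition observed in the stereo data of that study.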
Massot et al. in [96] provide a systematic analysis comparing SV and LbSL in a controlled environment. To that end, a robot arm is used to move the sensors describing a precise path, surveying a 3 × 2 m underwater scene created in a water tank containing different objects of known dimensions. Apart from other numerical details, the authors conclude that for survey missions, stereo data may be enough to recover the overall shape of the underwater environment, whenever there is enough texture and visibility. In contrast, when the mission is aimed at manipulation and precise measurements of reduced areas are needed, LbSL is a better option.
It would be advisable to work on similar approaches to the aforementioned for the near future, contributing to a better knowledge of each individual sensor behavior when used in diverse situations and applications and also to the progress in multisensor data integration methodologies.
Table 9. Strengths and weaknesses of the sensors and techniques for 3D reconstruction.
Technology | Strengths | Weaknesses
MBS | Early adopted; long range and coverage; independent of water turbidity | High cost; high minimum distance; low resolution
SBS | Early adopted; long range; independent of water turbidity | High cost; echoes; low resolution
SSS | Good data acquisition rate; independent of water turbidity; long range | High cost; needs constant speed; unknown dimension
IS | Medium to large range; independent of water turbidity | High cost; unknown dimension
LiDAR | Not underwater | Limited to first 15 meters; safety constraint
LLS | Medium data acquisition rate; medium range; good performance in scattering waters | High cost; safety constraint
SfM | Simple and inexpensive; high accuracy on well-defined targets; close range | Computation demanding; sparse data covering; needs textured scenes; unknown scale
SV | Simple and inexpensive; high accuracy on well-defined targets; close range | Computation demanding; sparse data covering; low data acquisition rate
PhS | Simple and inexpensive; close range | Limited to smooth surfaces; needs fixed position
VW-SL | High data acquisition rate; close range | Computation demanding; missing data in occlusions and shadows; needs fixed position
CW-SL | High data acquisition rate; medium range | Computation demanding; missing data in occlusions and shadows; safety constraint if laser source
Table 9 summarizes the main strengths and weaknesses of the solutions surveyed in this article. The comments in the table are quite general, and a number of exceptions may exist. Furthermore, these pros and cons may also be mitigated or increased depending on the application and/or the platform used.
With regard to the use of standard robots as data-gathering platforms, at present, scientists can mount their systems in the payload area, but in general, these systems are independent of the control architecture of the vehicle. As a consequence, the payload and robot work independently; thus, the generation and control of surveys for data sampling missions is still an issue. An adaptive data sampling mission should allow scientists to program the data density in a required area or volume. The controlled robot would then proceed from one mission waypoint to the next only if the data sampling requirement were met. In this way, the resulting data would not lack spatial or temporal resolution. However, work-class ROVs and commercial AUVs do not normally offer this type of control interface.
Finally, as was mentioned earlier, to overcome the limitations of each individual sensor type, advanced reconstruction systems can combine various sensors of the same or different natures. This solution can be suited to an underwater robot or to a fleet of them, as using several sensing modalities often requires different speeds and distances from the sea bottom. To make these solutions really functional, much more research effort has to be focused on underwater localization, so that data can be consistently registered and finally integrated in a unique framework.

Acknowledgments

This work has been partially supported by grant BES-2012-054352(FPI), contract DPI2014-57746-C3-2-R by the Spanish Ministry of Economy and Competitiveness, Illes Balears Local Government AAEE60/2014 and FEDER funding.

Author Contributions

Miquel Massot carried out a literature survey. Miquel Massot and Gabriel Oliver proposed the methods and techniques to be summarized and compared. Miquel Massot wrote the whole paper and Gabriel Oliver supervised the drafts and the bibliography. All authors revised and approved the final submission.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

APD: Avalanche photodiode
AUV: Autonomous underwater vehicle
BF: Beam forming
CCD: Charge-coupled device
CMOS: Complementary metal-oxide semiconductor
CW-LLS: Continuous wave LLS
DOE: Diffractive optical element
fps: Frames per second
GPS: Global positioning system
GPU: Graphics processing unit
HD: High definition
ICP: Iterative closest point
IMU: Inertial measurement unit
IS: Imaging sonar
KLT: Kanade-Lucas-Tomasi feature tracker
LbSL: Laser-based SL
LED: Light-emitting diode
LiDAR: Light detection and ranging
LLS: Laser line scanning
LS: Laser stripe
MBS: Multibeam sonar
NA: Not available
PG-LLS: Pulse gated LLS
PMT: Photomultiplier tube
PN-MP: Pseudorandom coded modulation pulse
RANSAC: Random sample consensus
ROV: Remotely operated vehicle
SAM: Smoothing and mapping
SBS: Single beam sonar
SIFT: Scale invariant feature transform
SL: Structured light
SNR: Signal to noise ratio
SONAR: Sound navigation and ranging
SSS: Sidescan sonar
ST-MP: Single tone modulated pulse
SURF: Speeded up robust features
SV: Stereo vision
TOF: Time of flight
UW: Underwater
UWTD: Underwater target detection

References

  1. Blais, F. Review of 20 years of range sensor development. J. Electron. Imaging 2004, 13, 231–243. [Google Scholar] [CrossRef]
  2. Malamas, E.N.; Petrakis, E.G.; Zervakis, M.; Petit, L.; Legat, J.D. A survey on industrial vision systems, applications and tools. Image Vis. Comput. 2003, 21, 171–188. [Google Scholar] [CrossRef]
  3. Drap, P. Underwater Photogrammetry for Archaeology. In Special Applications of Photogrammetry; InTech: Rijeka, Croatia, 2012; pp. 111–136. [Google Scholar]
  4. Jaffe, J.S.; Moore, K.; McLean, J.; Strand, M. Underwater Optical Imaging: Status and Prospects. Oceanography 2001, 14, 64–75. [Google Scholar] [CrossRef]
  5. Foley, B.; Mindell, D. Precision Survey and Archaeological Methodology in Deep Water. MTS J. 2002, 36, 13–20. [Google Scholar]
  6. Kocak, D.M.; Dalgleish, F.R.; Caimi, F.M. A focus on recent developments and trends in underwater imaging. Mar. Technol. Soc. J. 2008, 42, 52–67. [Google Scholar] [CrossRef]
  7. Kocak, D.M.; Caimi, F.M. The Current Art of Underwater Imaging With a Glimpse of the Past and Vision of the Future. Mar. Technol. Soc. J. 2005, 39, 5–26. [Google Scholar] [CrossRef]
  8. Caimi, F.M.; Kocak, D.M.; Dalgleish, F.; Watson, J. Underwater imaging and optics: Recent advances. In Proceedings of the MTS/IEEE Oceans, Quebec City, QC, Canada, 15–18 September 2008; pp. 1–9.
  9. Bonin-Font, F.; Burguera, A. Imaging systems for advanced underwater vehicles. J. Marit. Res. 2011, 8, 65–86. [Google Scholar]
  10. Bianco, G.; Gallo, A.; Bruno, F.; Muzzupappa, M. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects. Sensors 2013, 13, 11007–11031. [Google Scholar] [CrossRef] [PubMed]
  11. Jordt-Sedlazeck, A. Underwater 3D Reconstruction Based on Physical Models for Refraction and Underwater Light Propagation. Ph.D. Thesis, Kiel University, Kiel, Germany, 2014. Available online: http://www.inf.uni-kiel.de/de/forschung/publikationen/kcss (accessed on 18 September 2015). [Google Scholar]
  12. Höhle, J. Reconstruction of the Underwater Object. Photogramm. Eng. Remote Sens. 1971, 37, 948–954. [Google Scholar]
  13. Drap, P.; Seinturier, J.; Scaradozzi, D. Photogrammetry for virtual exploration of underwater archeological sites. In Proceedings of the 21st International Symposium, CIPA 2007: AntiCIPAting the Future of the Cultural Past, Athens, Greece, 1–6 October 2007; pp. 1–6.
  14. Gawlik, N. 3D Modelling of Underwater Archaeological Artefacts. Master’s Thesis, Norwegian University of Science and Technology, Trondheim, Norway, 2014. Available online: http://hdl.handle.net/11250/233084 (accessed on 18 September 2015). [Google Scholar]
  15. Pope, R.M.; Fry, E.S. Absorption spectrum (380–700 nm) of pure water. Appl. Opt. 1997, 36, 8710–8723. [Google Scholar] [CrossRef] [PubMed]
  16. McGlamery, B.L. Computer Analysis and Simulation of Underwater Camera System Performance; Technical Report; Visibility Laboratory, University of California, San Diego and Scripps Institution of Oceanography: Oakland, CA, USA, 1975. [Google Scholar]
  17. Jaffe, J.S. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  18. Schechner, Y.; Karpel, N. Clear underwater vision. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Washington, DC, USA, 27 June–2 July 2004; pp. 536–543.
  19. Jordt, A.; Koch, R. Refractive calibration of underwater cameras. In Computer Vision–ECCV 2012; Springer Berlin Heidelberg: Berlin, Germany, 2012; pp. 1–14. [Google Scholar]
  20. Treibitz, T.; Schechner, Y.Y.; Kunz, C.; Singh, H. Flat refractive geometry. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 51–65. [Google Scholar] [CrossRef] [PubMed]
  21. Henderson, J.; Pizarro, O.; Johnson-Roberson, M.; Mahon, I. Mapping Submerged Archaeological Sites using Stereo-Vision Photogrammetry. Int. J. Naut. Archaeol. 2013, 42, 243–256. [Google Scholar] [CrossRef]
  22. Gracias, N.; Negahdaripour, S.; Neumann, L.; Prados, R.; Garcia, R. A motion compensated filtering approach to remove sunlight flicker in shallow water images. In Proceedings of the Oceans 2008, Quebec City, QC, Canada, 15–18 September 2008; pp. 1–7.
  23. Slama, C.C.; Theurer, C.; Henriksen, S.W. Manual of Photogrammetry; American Society of Photogrammetry: Bethesda, MD, USA, 1980. [Google Scholar]
  24. Grossberg, M.D.; Nayar, S.K. A general imaging model and a method for finding its parameters. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV), Vancouver, BC, Canada, 7–14 July 2001; pp. 108–115.
  25. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  26. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  27. Brown, L.G. A Survey of Image Registration Techniques. ACM Comput. Surv. 1992, 24, 325–376. [Google Scholar] [CrossRef]
  28. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: London, UK, 2011. [Google Scholar]
  29. Kwon, Y.H. Object Plane Deformation Due to Refraction in Two-Dimensional Underwater Motion Analysis. J. Appl. Biomech. 1999, 15, 396–403. [Google Scholar]
  30. Zajac, A.; Hecht, E. Optics, 4th ed.; Pearson Higher Education: San Francisco, CA, USA; Massachusetts Institute of Technology: Cambridge, MA, USA, 2003. [Google Scholar]
  31. Refractiveindex.info. Refractive Index and Related Constants—Poly(methyl methacrylate) (PMMA, Acrylic glass). Available online: http://refractiveindex.info/?shelf=organic&book=poly%28methyl_methacrylate%29&page=Szczurowski (accessed on 5 October 2015).
  32. Menna, F.; Nocerino, E.; Troisi, S.; Remondino, F. A photogrammetric approach to survey floating and semi-submerged objects. In Proceedings of the Videometrics, Range Imaging and Applications XII, Munich, Germany, 14–16 May 2013; pp. 1–15.
  33. Kwon, Y.H.; Lindley, S.L. Applicability of the localized-calibration methods in underwater motion analysis. In Proceedings of the Sixteenth International Conference on Biomechanics in Sports, Vienna, Austria, 12–16 July 2000; pp. 1–8.
  34. Kang, L.; Wu, L.; Yang, Y.H. Two-view underwater structure and motion for cameras under flat refractive interfaces. Lect. Notes Comput. Sci. 2012, 7575, 303–316. [Google Scholar]
  35. Yang, T.C. Data-based matched-mode source localization for a moving source. J. Acoust. Soc. Am. 2014, 135, 1218–1230. [Google Scholar] [CrossRef] [PubMed]
  36. Candy, J.; Sullivan, E.J. Model-based identification: An adaptive approach to ocean-acoustic processing. IEEE J. Ocean. Eng. 1996, 21, 273–289. [Google Scholar] [CrossRef]
  37. Buchanan, J.L.; Gilbert, R.P.; Wirgin, A.; Xu, Y. Identification, by the intersecting canonical domain method, of the size, shape and depth of a soft body of revolution located within an acoustic waveguide. Inverse Probl. 2000, 16, 1709–1926. [Google Scholar] [CrossRef]
  38. Pathak, K.; Birk, A.; Vaskevicius, N. Plane-based registration of sonar data for underwater 3D mapping. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4880–4885.
  39. Hurtos, N.; Cufi, X.; Salvi, J. Calibration of optical camera coupled to acoustic multibeam for underwater 3D scene reconstruction. In Proceedings of the MTS/IEEE Oceans, Seattle, WA, USA, 20–23 September 2010; pp. 1–7.
  40. Blueview, T. 3D Mechanical Scanning Sonar. Available online: http://www.blueview.com/products/3d-multibeam-scanning-sonar (accessed on 30 September 2015).
  41. Guo, Y. 3D underwater topography rebuilding based on single beam sonar. In Proceedings of the 2013 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013), Kunming, China, 5–8 August 2013; pp. 1–5.
  42. Coiras, E.; Petillot, Y.; Lane, D.M. Multiresolution 3-D reconstruction from side-scan sonar images. IEEE Trans. Image Process. 2007, 16, 382–390. [Google Scholar] [CrossRef] [PubMed]
  43. Brahim, N.; Gueriot, D.; Daniel, S.; Solaiman, B. 3D reconstruction of underwater scenes using DIDSON acoustic sonar image sequences through evolutionary algorithms. In Proceedings of the MTS/IEEE Oceans, Santander, Spain, 6–9 June 2011; pp. 1–6.
  44. Negahdaripour, S.; Sekkati, H.; Pirsiavash, H. Opti-acoustic stereo imaging: On system calibration and 3-D target reconstruction. IEEE Trans. Image Process. 2009, 18, 1203–1214. [Google Scholar] [CrossRef] [PubMed]
  45. Babaee, M.; Negahdaripour, S. 3-D Object Modeling from Occluding Contours in Opti-Acoustic Stereo Images. In Proceedings of the MTS/IEEE Oceans, San Diego, CA, USA, 23–27 September 2013; pp. 1–8.
  46. Negahdaripour, S. On 3-D reconstruction from stereo FS sonar imaging. In Proceedings of the MTS/IEEE Oceans, Seattle, WA, USA, 20–23 September 2010; pp. 1–6.
  47. Aykin, M.; Negahdaripour, S. Forward-Look 2-D Sonar Image Formation and 3-D Reconstruction. In Proceedings of the MTS/IEEE Oceans, San Diego, CA, USA, 23–27 September 2013; pp. 1–4.
  48. Murino, V.; Trucco, A.; Regazzoni, C.S. A probabilistic approach to the coupled reconstruction and restoration of underwater acoustic images. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 9–22. [Google Scholar] [CrossRef]
  49. Murino, V. Reconstruction and segmentation of underwater acoustic images combining confidence information in MRF models. Pattern Recognit. 2001, 34, 981–997. [Google Scholar] [CrossRef]
  50. Castellani, U.; Fusiello, A.; Murino, V. Registration of Multiple Acoustic Range Views for Underwater Scene Reconstruction. Comput. Vis. Image Underst. 2002, 87, 78–89. [Google Scholar] [CrossRef]
  51. Kunz, C.; Singh, H. Map Building Fusing Acoustic and Visual Information using Autonomous Underwater Vehicles. J. Field Robot. 2013, 30, 763–783. [Google Scholar] [CrossRef]
  52. Rosenblum, L.; Kamgar-Parsi, B. 3D reconstruction of small underwater objects using high-resolution sonar data. In Proceedings of the 1992 Symposium on Autonomous Underwater Vehicle Technology, Washington, DC, USA, 2–3 June 1992; pp. 228–235.
  53. Reineman, B.D.; Lenain, L.; Castel, D.; Melville, W.K. A Portable Airborne Scanning Lidar System for Ocean and Coastal Applications. J. Atmos. Ocean. Technol. 2009, 26, 2626–2641. [Google Scholar] [CrossRef]
  54. Cadalli, N.; Shargo, P.J.; Munson, D.C., Jr.; Singer, A.C. Three-dimensional tomographic imaging of ocean mines from real and simulated lidar returns. In Proceedings of the SPIE 4488, Ocean Optics: Remote Sensing and Underwater Imaging, San Diego, CA, USA, 29 July 2001; pp. 155–166.
  55. Pellen, F.; Jezequel, V.; Zion, G.; Jeune, B.L. Detection of an underwater target through modulated lidar experiments at grazing incidence in a deep wave basin. Appl. Opt. 2012, 51, 7690–7700. [Google Scholar] [CrossRef] [PubMed]
  56. Mullen, L.; Vieira, A.; Herezfeld, P.; Contarino, V. Application of RADAR technology to aerial LIDAR systems for enhancement of shallow underwater target detection. IEEE Trans. Microw. Theory Tech. 1995, 43, 2370–2377. [Google Scholar] [CrossRef]
  57. Mullen, L.J.; Contarino, V.M. Hybrid LIDAR-radar: Seeing through the scatter. IEEE Microw. Mag. 2000, 1, 42–48. [Google Scholar] [CrossRef]
  58. Moore, K.; Jaffe, J.S.; Ochoa, B. Development of a new underwater bathymetric laser imaging system: L-bath. J. Atmos. Ocean. Technol. 2000, 17, 1106–1117. [Google Scholar] [CrossRef]
  59. Moore, K.; Jaffe, J.S. Time-evolution of high-resolution topographic measurements of the sea floor using a 3-D laser line scan mapping system. IEEE J. Ocean. Eng. 2002, 27, 525–545. [Google Scholar] [CrossRef]
  60. McLeod, D.; Jacobson, J. Autonomous Inspection using an Underwater 3D LiDAR. In Proceedings of the MTS/IEEE Oceans, San Diego, CA, USA, 23–27 September 2013.
  61. Cochenour, B.; Mullen, L.J.; Muth, J. Modulated pulse laser with pseudorandom coding capabilities for underwater ranging, detection, and imaging. Appl. Opt. 2011, 50, 6168–6178. [Google Scholar] [CrossRef] [PubMed]
  62. Rumbaugh, L.; Li, Y.; Bollt, E.; Jemison, W. A 532 nm Chaotic Lidar Transmitter for High Resolution Underwater Ranging and Imaging. In Proceedings of the MTS/IEEE Oceans, San Diego, CA, USA, 23–27 September 2013; pp. 1–6.
  63. De Dominicis, L.; Fornetti, G.; Guarneri, M.; de Collibus, M.F.; Francucci, M.; Nuvoli, M.; Al-Obaidi, A.; Mcstay, D. Structural Monitoring Of Offshore Platforms By 3d Subsea Laser Profilers. In Proceedings of the Offshore Mediterranean Conference, Ravenna, Italy, 20–22 March 2013.
  64. Dalgleish, F.R.; Caimi, F.M.; Britton, W.B.; Andren, C.F. Improved LLS imaging performance in scattering-dominant waters. In Proceedings of the SPIE 7317, Ocean Sensing and Monitoring, Orlando, FL, USA, 29 April 2009.
  65. Gordon, A. Use of Laser Scanning Systems on Mobile Underwater Platforms. In Proceedings of the 1992 Symposium on Autonomous Underwater Vehicle Technology, Washington, DC, USA, 2–3 June 1992; pp. 202–205.
  66. Mullen, L.J.; Contarino, V.M.; Laux, A.; Concannon, B.M.; Davis, J.P.; Strand, M.P.; Coles, B.W. Modulated laser line scanner for enhanced underwater imaging. In Proceedings of the SPIE 3761, Airborne and In-Water Underwater Imaging, Denver, CO, USA, 28 October 1999; pp. 2–9.
  67. 3D at Depth. SL1 High Resolution Subsea Laser Scanner. Available online: http://www.3datdepth.com/sl1overview/ (accessed on 30 September 2015).
  68. Salvi, J.; Fernandez, S.; Pribanic, T.; Llado, X. A state of the art in structured light patterns for surface profilometry. Pattern Recognit. 2010, 43, 2666–2680. [Google Scholar] [CrossRef]
  69. Salvi, J.; Pagès, J.; Batlle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849. [Google Scholar] [CrossRef]
  70. Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A. Experimentation of structured light and stereo vision for underwater 3D reconstruction. ISPRS J. Photogramm. Remote Sens. 2011, 66, 508–518. [Google Scholar] [CrossRef]
  71. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158. [Google Scholar] [CrossRef]
  72. Zhang, Q.; Wang, Q.; Hou, Z.; Liu, Y.; Su, X. Three-dimensional shape measurement for an underwater object based on two-dimensional grating pattern projection. Opt. Laser Technol. 2011, 43, 801–805. [Google Scholar] [CrossRef]
  73. Törnblom, N. Underwater 3D Surface Scanning Using Structured Light. Ph.D. Thesis, Uppsala Universitet, Uppsala, Sweden, 2010. Available online: http://www.diva-portal.org/smash/get/diva2:378911/FULLTEXT01.pdf (accessed on 18 September 2015). [Google Scholar]
  74. Narasimhan, S.; Nayar, S. Structured Light Methods for Underwater Imaging: Light Stripe Scanning and Photometric Stereo. In Proceedings of the MTS/IEEE Oceans, Washington, DC, USA, 18–23 September 2005; pp. 1–8.
  75. Dancu, A.; Fourgeaud, M.; Franjcic, Z.; Avetisyan, R. Underwater reconstruction using depth sensors. In SIGGRAPH Asia 2014 Technical Briefs on—SIGGRAPH ASIA’14; ACM Press: New York, NY, USA, 2014; pp. 1–4. [Google Scholar]
  76. Inglis, G.; Smart, C.; Vaughn, I.; Roman, C. A pipeline for structured light bathymetric mapping. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; pp. 4425–4432.
  77. Bodenmann, A.; Thornton, B.; Nakajima, R.; Yamamoto, H.; Ura, T. Wide Area 3D Seafloor Reconstruction and its Application to Sea Fauna Density Mapping. In Proceedings of the MTS/IEEE Oceans, San Diego, CA, USA, 23–27 September 2013; pp. 4–8.
  78. Liu, J.; Jakas, A.; Al-Obaidi, A.; Liu, Y. Practical issues and development of underwater 3D laser scanners. In Proceedings of the 2010 IEEE 15th Conference on Emerging Technologies & Factory Automation (ETFA 2010), Bilbao, Spain, 13–16 September 2010; pp. 1–8.
  79. Roman, C.; Inglis, G.; Rutter, J. Application of structured light imaging for high resolution mapping of underwater archaeological sites. In Proceedings of the MTS/IEEE Oceans, Sydney, Australia, 24–27 May 2010; pp. 1–9.
  80. Jaffe, J.S.; Dunn, C. A Model-Based Comparison Of Underwater Imaging Systems. In Proceedings of the Ocean Optics IX, Orlando, FL, USA, 4 April 1988; pp. 344–350.
  81. Tetlow, S.; Spours, J. Three-dimensional measurement of underwater work sites using structured laser light. Meas. Sci. Technol. 1999, 10, 1162–1167. [Google Scholar] [CrossRef]
  82. Kondo, H.; Maki, T.; Ura, T.; Nose, Y.; Sakamaki, T.; Inaishi, M. Structure tracing with a ranging system using a sheet laser beam. In Proceedings of the 2004 International Symposium on Underwater Technology (IEEE Cat. No.04EX869), Taipei, Taiwan, 20–23 April 2004; pp. 83–88.
  83. Hildebrandt, M.; Kerdels, J.; Albiez, J.; Kirchner, F. A practical underwater 3D-Laserscanner. In Proceedings of the MTS/IEEE Oceans, Quebec City, QC, Canada, 15–18 September 2008; pp. 1–5.
  84. Bodenmann, A.; Thornton, B.; Nakatani, T.; Ura, T. 3D colour reconstruction of a hydrothermally active area using an underwater robot. In Proceedings of the OCEANS 2011, Waikoloa, HI, USA, 19–22 September 2011; pp. 1–6.
  85. Bodenmann, A.; Thornton, B.; Hara, S.; Hioki, K.; Kojima, M.; Ura, T.; Kawato, M.; Fujiwara, Y. Development of 8 m long range imaging technology for generation of wide area colour 3D seafloor reconstructions. In Proceedings of the MTS/IEEE Oceans, Hampton Roads, VA, USA, 14–19 October 2012; pp. 1–4.
  86. Bodenmann, A.; Thornton, B.; Ura, T. Development of long range color imaging for wide area 3D reconstructions of the seafloor. In Proceedings of the 2013 IEEE International Underwater Technology Symposium (UT), Tokyo, Japan, 5–8 March 2013; pp. 1–5.
  87. Nakatani, T.; Li, S.; Ura, T.; Bodenmann, A.; Sakamaki, T. 3D visual modeling of hydrothermal chimneys using a rotary laser scanning system. In Proceedings of the 2011 IEEE Symposium on Underwater Technology and Workshop on Scientific Use of Submarine Cables and Related Technologies, Tokyo, Japan, 5–8 April 2011; pp. 1–5.
  88. Smart Light Devices. 3DLS Underwater 3D Laser Imaging Scanner. Available online: http://www.smartlightdevices.co.uk/products/3dlaser-imaging/ (accessed on 30 September 2015).
  89. Prats, M.; Fernandez, J.J.; Sanz, P.J. An approach for semi-autonomous recovery of unknown objects in underwater environments. In Proceedings of the 13th International Conference on Optimization of Electrical and Electronic Equipment (OPTIM), Brasov, Romania, 24–26 May 2012; pp. 1452–1457.
  90. Prats, M.; Fernandez, J.J.; Sanz, P.J. Combining template tracking and laser peak detection for 3D reconstruction and grasping in underwater environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; pp. 106–112.
  91. Sanz, P.J.; Penalver, A.; Sales, J.; Fornas, D.; Fernandez, J.J.; Perez, J.; Bernabe, J. GRASPER: A Multisensory Based Manipulation System for Underwater Operations. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 4036–4041.
  92. Cain, C.; Leonessa, A. Laser based rangefinder for underwater applications. In Proceedings of the 2012 American Control Conference (ACC), Montreal, QC, Canada, 27–29 June 2012; pp. 6190–6195.
  93. Caccia, M. Laser-Triangulation Optical-Correlation Sensor for ROV Slow Motion Estimation. IEEE J. Ocean. Eng. 2006, 31, 711–727. [Google Scholar] [CrossRef]
  94. Yang, Y.; Zheng, B.; Zheng, H. 3D reconstruction for underwater laser line scanning. In Proceedings of the MTS/IEEE Oceans, Bergen, Norway, 10–14 June 2013; pp. 2008–2010.
  95. Massot-Campos, M.; Oliver-Codina, G. One-shot underwater 3D reconstruction. In Proceedings of the 2014 IEEE Emerging Technology and Factory Automation (ETFA), Barcelona, Spain, 16–19 September 2014; pp. 1–4.
  96. Massot, M.; Oliver, G.; Kemal, H.; Petillot, Y.; Bonin-Font, F. Structured light and stereo vision for underwater 3D reconstruction. In Proceedings of the MTS/IEEE Oceans, Genoa, Italy, 18–21 May 2015.
  97. Massot-Campos, M.; Oliver-Codina, G. Underwater Laser-based Structured Light System for one-shot 3D reconstruction. In Proceedings of the 2014 IEEE Sensors, Valencia, Spain, 2–5 November 2014; pp. 1138–1141.
  98. Tsiotsios, C.; Angelopoulou, M.; Kim, T.K.; Davison, A. Backscatter Compensated Photometric Stereo with 3 Sources. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 2259–2266.
  99. Robust Photometric Stereo via Low-Rank Matrix Completion and Recovery. Available online: http://perception.csl.illinois.edu/matrix-rank/stereo.html (accessed on 18 September 2015).
  100. Wen, Z.Y.; Fraser, D.; Lambert, A.; Li, H.D. Reconstruction of underwater image by bispectrum. In Proceedings of the International Conference on Image Processing (ICIP), San Antonio, TX, USA, 16–19 September 2007; pp. 545–548.
  101. Jordt-Sedlazeck, A.; Koser, K.; Koch, R. 3D reconstruction based on underwater video from ROV Kiel 6000 considering underwater imaging conditions. In Proceedings of the MTS/IEEE Oceans, Bremen, Germany, 11–14 May 2009; pp. 1–10.
  102. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  103. Pizarro, O.; Eustice, R.M.; Singh, H. Large Area 3-D Reconstructions From Underwater Optical Surveys. IEEE J. Ocean. Eng. 2009, 34, 150–169. [Google Scholar] [CrossRef]
  104. Meline, A.; Triboulet, J.; Jouvencel, B. Comparative study of two 3D reconstruction methods for underwater archaeology. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; pp. 740–745.
  105. McKinnon, D.; He, H.; Upcroft, B.; Smith, R. Towards automated and in-situ, near-real time 3-D reconstruction of coral reef environments. In Proceedings of the OCEANS 2011, Waikoloa, HI, USA, 19–22 September 2011; pp. 1–10.
  106. Jordt-Sedlazeck, A.; Koch, R. Refractive Structure-from-Motion on Underwater Images. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 57–64.
  107. Cocito, S.; Sgorbini, S.; Peirano, A.; Valle, M. 3-D reconstruction of biological objects using underwater video technique and image processing. J. Exp. Mar. Biol. Ecol. 2003, 297, 57–70. [Google Scholar] [CrossRef]
  108. Bruno, F.; Gallo, A.; Muzzupappa, M.; Davidde Petriaggi, B.; Caputo, P. 3D documentation and monitoring of the experimental cleaning operations in the underwater archaeological site of Baia (Italy). In Proceedings of the Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; pp. 105–112.
  109. Nicosevici, T.; Gracias, N.; Negahdaripour, S.; Garcia, R. Efficient three-dimensional scene modeling and mosaicing. J. Field Robot. 2009, 26, 759–788. [Google Scholar] [CrossRef]
  110. Ozog, P.; Carlevaris-Bianco, N.; Kim, A.; Eustice, R.M. Long-term Mapping Techniques for Ship Hull Inspection and Surveillance using an Autonomous Underwater Vehicle. J. Field Robot. 2015, 24, 1–25. [Google Scholar] [CrossRef]
  111. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 519–528.
  112. Brown, M.Z.; Burschka, D.; Hager, G.D. Advances in computational stereo. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 993–1008. [Google Scholar] [CrossRef]
  113. Barnard, S.T.; Fischler, M.A. Computational stereo. ACM Comput. Surv. 1982, 14, 553–572. [Google Scholar] [CrossRef]
  114. Dhond, U.R.; Aggarwal, J.K. Structure from stereo-a review. IEEE Trans. Syst. Man Cybern. 1989, 19, 1489–1510. [Google Scholar] [CrossRef]
  115. Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 2002, 47, 7–42. [Google Scholar] [CrossRef]
  116. Hartley, R.I.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004; ISBN 0521540518. [Google Scholar]
  117. Kumar, N.S.; Ramakanth Kumar, R. Design & development of autonomous system to build 3D model for underwater objects using stereo vision technique. In Proceedings of the 2011 Annual IEEE India Conference, Hyderabad, India, 16–18 December 2011; pp. 1–4.
  118. Jasiobedzki, P.; Se, S.; Bondy, M.; Jakola, R. Underwater 3D mapping and pose estimation for ROV operations. In Proceedings of the MTS/IEEE Oceans, Quebec City, QC, Canada, 15–18 September 2008; pp. 1–6.
  119. Nurtantio Andono, P.; Mulyanto Yuniarno, E.; Hariadi, M.; Venus, V. 3D reconstruction of under water coral reef images using low cost multi-view cameras. In Proceedings of the 2012 International Conference on Multimedia Computing and Systems, Tangier, Morocco, 10–12 May 2012; pp. 803–808.
  120. Schmidt, V.; Rzhanov, Y. Measurement of micro-bathymetry with a GOPRO underwater stereo camera pair. In Proceedings of the Oceans 2012, Hampton Roads, VA, USA, 14–19 October 2012; pp. 1–6.
  121. Dao, T.D. Underwater 3D Reconstruction from Stereo Images. M.Sc. Thesis, Erasmus Mundus in Vision and Robotics, University of Girona (Spain), University of Burgundy (France), Heriot Watt University (UK), 2008. Available online: https://fb.docs.com/VA95 (accessed on 18 September 2015).
  122. Brandou, V.; Allais, A.G.; Perrier, M.; Malis, E.; Rives, P.; Sarrazin, J.; Sarradin, P.M. 3D Reconstruction of Natural Underwater Scenes Using the Stereovision System IRIS. In Proceedings of the MTS/IEEE Oceans, Aberdeen, UK, 18–21 June 2007; pp. 1–6.
  123. Beall, C.; Lawrence, B.J.; Ila, V.; Dellaert, F. 3D reconstruction of underwater structures. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4418–4423.
  124. Servos, J.; Smart, M.; Waslander, S.L. Underwater stereo SLAM with refraction correction. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3350–3355.
  125. Hogue, A.; German, A.; Jenkin, M. Underwater environment reconstruction using stereo and inertial data. In Proceedings of the 2007 IEEE International Conference on Systems, Man and Cybernetics (ISIC), Montreal, QC, Canada, 7–10 October 2007; pp. 2372–2377.
  126. Negre Carrasco, P.L.; Bonin-Font, F.; Oliver-Codina, G. Stereo Graph-SLAM for Autonomous Underwater Vehicles. In Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS 2014), Padova, Italy, 15–19 July 2014; pp. 351–360.
  127. Bonin-Font, F.; Cosic, A.; Negre, P.L.; Solbach, M.; Oliver, G. Stereo SLAM for robust dense 3D reconstruction of underwater environments. In Proceedings of the OCEANS 2015, Genova, Italy, 18–21 May 2015; pp. 1–6.
  128. Johnson-Roberson, M.; Pizarro, O.; Williams, S.B.; Mahon, I. Generation and visualization of large-scale three-dimensional reconstructions from underwater robotic surveys. J. Field Robot. 2010, 27, 21–51. [Google Scholar] [CrossRef]
  129. Galceran, E.; Campos, R.; Palomeras, N.; Ribas, D.; Carreras, M.; Ridao, P. Coverage Path Planning with Real-time Replanning and Surface Reconstruction for Inspection of Three-dimensional Underwater Structures using Autonomous Underwater Vehicles. J. Field Robot. 2014, 24, 952–983. [Google Scholar] [CrossRef]
  130. González-Rivero, M.; Bongaerts, P.; Beijbom, O.; Pizarro, O.; Friedman, A.; Rodriguez-Ramirez, A.; Upcroft, B.; Laffoley, D.; Kline, D.; Bailhache, C.; et al. The Catlin Seaview Survey—Kilometre-scale seascape assessment, and monitoring of coral reef ecosystems. Aquat. Conserv. Mar. Freshw. Ecosyst. 2014, 24, 184–198. [Google Scholar] [CrossRef]
  131. Pointclouds. System, Robotics and Vision, University of the Balearic Islands. Available online: http://srv.uib.es/pointclouds (accessed on 2 December 2015).
  132. Inglis, G.; Roman, C. Sonar constrained stereo correspondence for three-dimensional seafloor reconstruction. In Proceedings of the MTS/IEEE Oceans, Sydney, Australia, 24–27 May 2010; pp. 1–10.
  133. Zhizhou, W. A discussion about the terminology “photogrammetry and remote sensing”. ISPRS J. Photogramm. Remote Sens. 1989, 44, 169–174. [Google Scholar] [CrossRef]
  134. Luhmann, T.; Robson, S.; Kyle, S.; Boehm, J. Close-Range Photogrammetry and 3D Imaging, 2nd ed.; De Gruyter Textbook; De Gruyter: Berlin, Germany, 2013; ISBN 9783110302783. [Google Scholar]
  135. Förstner, W. Uncertainty and projective geometry. In Handbook of Geometric Computing; Springer Berlin Heidelberg: Berlin, Germany, 2005; pp. 493–534. [Google Scholar]
  136. Abdo, D.A.; Seager, J.W.; Harvey, E.S.; McDonald, J.I.; Kendrick, G.A.; Shortis, M.R. Efficiently measuring complex sessile epibenthic organisms using a novel photogrammetric technique. J. Exp. Mar. Biol. Ecol. 2006, 339, 120–133. [Google Scholar] [CrossRef]
  137. Westoby, M.; Brasington, J.; Glasser, N.; Hambrey, M.; Reynolds, J. “Structure-from-Motion” photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  138. Milgram, D.L. Computer methods for creating photomosaics. IEEE Trans. Comput. 1975, 1113–1119. [Google Scholar] [CrossRef]
  139. Milgram, D.L. Adaptive techniques for photomosaicking. IEEE Trans. Comput. 1977, 100, 1175–1180. [Google Scholar] [CrossRef]
  140. Kocak, D.; Jagielo, T.; Wallace, F.; Kloske, J. Remote sensing using laser projection photogrammetry for underwater surveys. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, IGARSS'04, Anchorage, AK, USA, 20–24 September 2004; pp. 1451–1454.
  141. Schewe, H.; Monchrieff, E.; Gründig, L. Improvement of fishfarm pen design using computational structural modelling and large-scale underwater photogrammetry (COSMOLUP). Int. Arch. Photogramm. Remote Sens. 1996, 31, 524–529. [Google Scholar]
  142. Bouratsis, P.P.; Diplas, P.; Dancey, C.L.; Apsilidis, N. High-resolution 3-D monitoring of evolving sediment beds. Water Resour. Res. 2013, 49, 977–992. [Google Scholar] [CrossRef]
  143. Zhukovsky, M.O.; Kuznetsov, V.D.; Olkhovsky, S.V. Photogrammetric Techniques for 3-D Underwater Record of the Antique Time Ship From Phanagoria. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 40, 717–721. [Google Scholar] [CrossRef]
  144. Nornes, S.M.; Ludvigsen, M.; Odegard, O.; Sorensen, A.J. Underwater Photogrammetric Mapping of an Intact Standing Steel Wreck with ROV. In Proceedings of the 4th IFAC Workshop on Navigation, Guidance and Control of Underwater Vehicles (NGCUV), Copenhagen, Denmark, 24–26 August 2015; pp. 206–211.
  145. Bingham, B.; Foley, B.; Singh, H.; Camilli, R.; Delaporta, K.; Eustice, R.; Mallios, A.; Mindell, D.; Roman, C.; Sakellariou, D. Robotic tools for deep water archaeology: Surveying an ancient shipwreck with an autonomous underwater vehicle. J. Field Robot. 2010, 27, 702–717. [Google Scholar] [CrossRef]
  146. Fabris, M.; Baldi, P.; Anzidei, M.; Pesci, A.; Bortoluzzi, G.; Aliani, S. High resolution topographic model of Panarea Island by fusion of photogrammetric, lidar and bathymetric digital terrain models. Photogramm. Rec. 2010, 25, 382–401. [Google Scholar] [CrossRef]
  147. Atkinson, K. Close Range Photogrammetry and Machine Vision; Whittles Publishing: London, UK, 1996. [Google Scholar]
  148. Lavest, J.M.; Guichard, F.; Rousseau, C. Multi-view reconstruction combining underwater and air sensors. In Proceedings of the International Conference on Image Processing (ICIP), Rochester, NY, USA, 24–28 June 2002; pp. 813–816.
  149. Shortis, M.; Harvey, E.; Seager, J. A Review of the Status and Trends in Underwater Videometric Measurement. Invited paper, SPIE Conference, 2007. Available online: http://www.geomsoft.com/markss/papers/Shortis_etal_paper_Vid_IX.pdf (accessed on 18 September 2015).
  150. De Jesus, K.; de Jesus, K.; Figueiredo, P.; Vilas-Boas, J.A.P.; Fernandes, R.J.; Machado, L.J. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach. Comput. Math. Methods Med. 2015, 2015, 1–8. [Google Scholar] [CrossRef] [PubMed]
  151. Leurs, G.; O’Connell, C.P.; Andreotti, S.; Rutzen, M.; Vonk Noordegraaf, H. Risks and advantages of using surface laser photogrammetry on free-ranging marine organisms: A case study on white sharks “Carcharodon carcharias”. J. Fish Biol. 2015, 86, 1713–1728. [Google Scholar] [CrossRef] [PubMed]
  152. Bräuer-Burchardt, C.; Heinze, M.; Schmidt, I.; Kühmstedt, P.; Notni, G. Compact handheld fringe projection based underwater 3D scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W5, 33–39. [Google Scholar] [CrossRef]
  153. Ekkel, T.; Schmik, J.; Luhmann, T.; Hastedt, H. Precise laser-based optical 3D measurement of welding seams under water. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W5, 117–122. [Google Scholar] [CrossRef]
  154. Teledyne CDL. INSCAN 3D Scanning Subsea Laser. Available online: http://teledyne-cdl.com/events/inscan-demonstration-post-press-release (accessed on 30 September 2015).
  155. Hannon, S. Underwater Mapping. LiDAR Mag. 2013, 3, 1–4. [Google Scholar]
  156. 2G Robotics. ULS-100 Underwater Laser Scanner for Short Range Scans. Available online: http://www.2grobotics.com/products/underwater-laser-scanner-uls-100/ (accessed on 30 September 2015).
  157. 2G Robotics. ULS-200 Underwater Laser Scanner for Mid Range Scans. Available online: http://www.2grobotics.com/products/underwater-laser-scanner-uls-200/ (accessed on 30 September 2015).
  158. 2G Robotics. ULS-500 Underwater Laser Scanner for Long Range Scans. Available online: http://www.2grobotics.com/products/underwater-laser-scanner-uls-500/ (accessed on 30 September 2015).
  159. Savante. Cerberus Subsea Laser Pipeline Profiler. Available online: http://www.savante.co.uk/subsea-laser-scanner/cerberus-subsea-laser-pipeline-profiler/ (accessed on 21 September 2015).
  160. Savante. SLV-50 Laser Vernier Caliper. Available online: http://www.savante.co.uk/wp-content/uploads/2015/02/Savante-SLV-50.pdf (accessed on 21 September 2015).
  161. Savante. Lumeneye Subsea Laser Module. Available online: http://www.savante.co.uk/portfolio-items/lumeneye-subsea-line-laser-module/?portfolioID=5142 (accessed on 21 September 2015).
  162. Tritech. Sea Stripe ROV/AUV Laser Line Generator. Available online: http://www.tritech.co.uk/product/rov-auv-laser-line-generator-seastripe (accessed on 30 September 2015).
  163. Campos, R.; Garcia, R.; Alliez, P.; Yvinec, M. A Surface Reconstruction Method for In-Detail Underwater 3D Optical Mapping. Int. J. Robot. Res. 2015, 34, 64–89. [Google Scholar] [CrossRef]
  164. Tan, C.S.; Sluzek, A.; Seet, G.L.; Jiang, T.Y. Range Gated Imaging System for Underwater Robotic Vehicle. In Proceedings of the MTS/IEEE Oceans, Singapore, 16–19 May 2007; pp. 1–6.
