Article

Sinusoidal Wave Estimation Using Photogrammetry and Short Video Sequences

Department of Geodesy and Geoinformation, Technische Universität Wien, Gusshausstrasse 27-29, Vienna 1040, Austria
* Author to whom correspondence should be addressed.
Sensors 2015, 15(12), 30784-30809; https://doi.org/10.3390/s151229828
Submission received: 26 October 2015 / Accepted: 30 November 2015 / Published: 5 December 2015

Abstract:
The objective of the work is to model the sinusoidal shape of regular water waves generated in a laboratory flume. The waves travel in time and render a smooth surface, with no white caps or foam. Two methods are proposed, treating the water as a diffuse and a specular surface, respectively. In either case, the water is presumed to take the shape of a traveling sine wave, reducing the task of the 3D reconstruction to resolving the wave parameters. The first conceived method performs the modeling part purely in 3D space. Having triangulated the points in a separate phase via bundle adjustment, a sine wave is fitted into the data in a least squares manner. The second method presents a more complete approach for the entire calculation workflow, beginning in the image space. The water is perceived as a specular surface, and the traveling specularities are the only observations visible to the cameras, observations that are notably single-image. The depth ambiguity is removed given additional constraints encoded within the law of reflection and the modeled parametric surface. The observation and constraint equations compose a single system of equations that is solved with the method of least squares adjustment. The devised approaches are validated against data coming from a capacitive level sensor and from physical targets floating on the surface. The outcomes agree to a high degree.


1. Introduction

Attempts to characterize the water surface with optical methods date back to the beginning of the 20th century [1,2]. The interest in a quantitative description of the surface with light came from the field of oceanography and the use of photography to map the coastlines. This prompted further applications, namely the use of photography to quantify ocean waves and to exploit these parameters in, e.g., shipbuilding, to engineer structures of the appropriate strength [3,4,5].
The same drivers disseminated optical methods among other applications, in river engineering and the oceanographic domain. Understanding river flow allows for a better riverbed management and mitigation of floods through combined fluid dynamics modeling and experimental testing. Additionally, the knowledge of the dispersive processes gives an insight into the way pollution and sediments are transported [6,7,8,9].
In the coastal zones, optical methods became a good alternative to in situ measurements, which require substantial logistical commitments and offer low spatial, as well as temporal resolution. The dynamics of the water, hence the energy it carries, influences the nearshore morphology, which is of significance for both coastal communities and marine infrastructure, e.g., wharfs and mooring systems [10,11,12,13].
Last, but not least, the roughness of the ocean’s surface is meaningful from the viewpoint of exchange processes with the atmosphere. The oceans, accounting for two-thirds of the Earth’s surface, contain high concentrations of carbon dioxide and thereby influence the global climate system through the evaporation and absorption phenomena. The optically-measured wave slope can be used to parametrize the relationship between the surface roughness and gas transfers, as well as the wind speed and direction [14,15].
Other recent and operational approaches to ocean surface observations are radar altimetry [16] and GNSS reflectometry [17].

1.1. Contributions

The objective of this work was to observe the motion of a floating platform, simultaneously providing information on the forces generating the motion, i.e., the shape of the water waves (cf. Figure 1). The waves were generated with a linear mechanical arm occupying the entire width of the basin. The wave propagation followed a single direction along the basin's longer axis. As such, the arm moved at equal time intervals, producing sinusoidal waves at various, but constant, frequencies f, periods T and wavelengths λ. The water rendered a mirror-like surface, while the testing facility was in large part occupied by a model basin and offered little space around the measurement volume. Hence, the challenges were split between the uncommon surface characteristics, the workplace constraints and the non-professional, budget-conscious imaging equipment employed for the timely observations. The contributions of this article include: (i) a concept for exploiting specular reflections of light sources to model the water surface, (ii) a method for measuring and modeling specular reflections of light sources in an image and the 3D reconstruction of the dynamic wave shape from it, and (iii) an evaluation of the suggested method using the capacitive level sensor and physical targets freely floating on the surface's top.
Two methods are proposed, treating the water as a diffuse and as a specular surface, respectively. In either case, the water is presumed to take the shape of a traveling sine wave, reducing the task of the 3D reconstruction to resolve the wave parameters. This was accomplished in a two-fold manner, by observing: (i) a few physical targets floating on the top of the surface (Method 1 in Section 3.1), and (ii) the apparent motion of specular reflections coming from points of known coordinates (Method 2 in Section 3.2). In either case, the transfer from point-based measurements to a surface was possible thanks to the prior knowledge of the wave excitation.
The determination of the platform’s motion is out of scope of this publication and was presented in [18].

1.2. Related Works

One can distinguish between three groups of measurement methods, each exploiting different characteristics of the water medium. The water surface can be regarded as (1) diffuse-like (quasi-Lambertian), (2) mirror-like (specular reflection) or else (3) as a transparent medium (refraction). When the wavelength of the incident radiation is smaller than or equal to the scale of the surface roughness, the surface will appear diffuse. On the other hand, smooth surfaces will produce specular reflections, and their appearance will be highly dependent on the position of the observer, as well as the radiation source [19]. The majority of approaches utilize passive sensors in reconstructing the surface shape. Active illumination techniques applied to highly reflecting surfaces produce systematic errors in depth estimation. There are few recorded attempts to adapt the scanning techniques to deal with the specularity effects, in particular by space-time analyses of the signal and filtering procedures [20].

1.2.1. Water as a Diffuse Surface

In practice, there are two conditions when the water is rendered diffuse-like: first, when the surface is disturbed with small waves (e.g., rough sea in the coastal zone); second, when artificial targeting is employed. The most common targeting techniques use physical material, such as powder, Styropor or oil, or optical projections in the form of a laser sheet, a grid of points or sinusoidal patterns [9,10,21,22,23]. Specular reflections are inevitable and are often the source of errors in the estimated depths. To avoid the corrupted measurements, specular highlights can be: (1) removed in the image preprocessing step, (2) eliminated in multiview setups during the processing (the appearance of glints in images is view dependent; with the third or n-th view, every identified feature can be verified, and glints can be eliminated; the method is apt for scenes with single or few glints) [24], (3) filtered with the help of either polarized or chromatic filters (the filters are mounted in front of the camera lens; hence, there is no restriction on the number of present glints) [25] and (4) in industrial photogrammetry of rigid objects, attenuated through the use of special targets, e.g., doped with fluorescing dye, that respond to a wavelength other than the wavelength of the specular reflections [26].
Alternatively, the water being itself a source of infrared radiation can be observed with thermal cameras. Because the heat distribution is heterogeneous across the surface, it provides a good base for the correspondence search. A surface treated in this way can be measured with classical stereo- or multi-view photogrammetric approaches. If qualitative results are expected, it is sufficient to acquire single images and to proceed with data evaluation in the image space only [27].

1.2.2. Water as a Specular Surface

Sometimes, it is advantageous to exploit the inherent optical characteristics of water, i.e., the total reflection and refraction, for measurement purposes. Contaminating the liquid with physical material is cumbersome, because (1) it becomes unfeasible for large areas and field surveys, (2) it is difficult to keep a homogeneous point distribution and (3) it may influence the response of the water by interacting with it. In such situations, and depending on the working environment, whether in a lab or out in the field, it is possible to derive the surface shape by mere observation of a reflection of a light source or a pattern whose distortion corresponds to the surface slope and height.
Reference [28] pioneered the characterization of water surface slopes with a reflection-based method. Their motive was to analyze slope distributions under different wind speeds through observing the Sun’s glitter on the sea surface from an aerial platform. Variations of the Cox and Munk method include: replacing the natural illumination with one or more artificial light sources, also known as reflective (stereo) slope gauge (RSSG) [3,29], and using entire clear or overcast sky to derive surface slope information for every point in the image, also known as Stilwell photography [30].
A combination of stereo- and reflection-based techniques (RSSG) was proven to be a sound way to characterize not only the surface slopes, but also the water heights. In a typical stereo setting, one is faced with a bias in the corresponding features seen by the left and the right cameras; the reason being that the cameras will record a change from the spots on the water whose normals are in the line of sight of the given cameras. Naturally, the steeper the observed wave, the smaller the systematic error. Using the Helmholtz reciprocity principle, i.e., upgrading the method to employ light sources placed right next to the cameras, eliminates the correspondence ambiguity. The identified features in image spaces are then bound to be a unique feature in the object space [3,4].
In the field of computer vision, two principal classes of algorithms are shape from distortion and shape from specularity, which, e.g., inspect single or multiple highlights with a static or a moving observer and emitter [31,32,33], observe known or unknown intensity patterns reflected from mirror-like surfaces [34,35,36,37,38,39], directly measure the incident rays [40], exploit light polarization [41,42] and make assumptions on the surface’s bidirectional reflectance distribution [43,44,45]. Further interesting approaches that fall outside the scope of the adopted categories exploit other (than the visible) parts of the electromagnetic spectrum, such as infra-red [46], thermal [47] or UV. The principle resembles that of [48,49], i.e., one searches for a wave spectrum, in which the information/signal received from the problematic surfaces, either doped or hit by the energy portion, is maximized, while minimizing the close-by, disturbing signals. For a good overview of the techniques, refer to [50,51].

1.2.3. Water as a Refractive Medium

Refraction-based techniques are more complex due to the fact that the light path is dependent not only on the surface normal, but also the medium’s refraction index. The development of shape from refraction goes hand in hand with the shape from reflection methods and, therefore, has an equally long history. The work in [52], again, first experimented with light refraction to derive surface slopes from an intensity gradient emerging from beneath the water surface. Today, a successor of this technique, called imaging slope gauge, alternatively shape from refractive irradiance, is considered a highly reliable instrument for measuring wind-induced waves in laboratory environments. In contrast to the reflection-based techniques, refraction of a pattern through the water maintains a quasi-linear relationship between the slope and the image irradiance [14,53,54,55].
White light can also be replaced with active laser lighting. The laser offers a greater temporal resolution at the expense of a lower spatial resolution. The first examples of laser imaging gauges employed a single beam, thereby delivering information about a single slope [56,57]. Over time, they evolved to scanning systems that could capture spatial information over an area [58]. A laser slope gauge is used in the laboratory and field environment, however being most apt for the characterization of longer waves, as the literature reports.
In the field of computer vision, dynamic transparent surfaces have been analyzed with single- and multi-image approaches. The work in [59] first introduced a single-view shape from the motion of a refractive body. By assuming that the water's average slope is equal to zero, i.e., it regularly oscillates around a flat plane, the author proposed a method for reconstructing the shape of an undulating water surface by inverse ray tracing. The work in [60] developed a method that recovers complete information of the 3D position and orientation of dynamic, transparent surfaces using stereo observations. The authors formulate an optimization procedure that minimizes a so-called refractive disparity (RD). In short, for every pixel in an image that refracts from an underwater pattern, the algorithm finds a 3D point on the water surface that minimizes the conceived RD measure. This measure expresses Snell's law of refraction by enforcing that the normal vector of the computed 3D position must be equal when computed from the left and the right images of the stereo pair. The two flagship examples of computer vision approaches are limited to use in laboratory conditions due to (1) the need to locate a reference pattern under the water and (2) the demand for clear water for a reliable pattern-to-image correspondence retrieval. For more information, refer to [50,51].

2. Preliminaries

The traveling 3D sine wave is defined as follows (note that the shift term H_0 in Equation (2) is omitted; this is possible if the translation from GCS to LCS already compensates for that shift):

y(t_i) = y_i

z(t_i) = A sin[2π(t_i/T − y_i/λ) + ϕ], i = 1 … N (2)
where ϕ is the phase of the wave-front and N is the duration of the measurement in seconds. The traveling sine wave is a special surface, and as such, it is advantageously modeled in a local coordinate system (LCS) that is parallel to the wave propagation direction and shifted to the mean level of oscillations (cf. Figure 1 and Figure 6). In order to link the LCS with the global coordinate system (GCS) where the cameras are defined, a 3D spatial similarity transformation (the scale factor is unity; thus, the transformation reduces to a 3D rigid transformation) is formulated. The parameters of this transformation (i.e., three components of the translation vector and three rotation angles) could likewise be included within the adjustment.
The devised methods fit the above-defined model in a least squares approach, i.e., by minimizing the residuals between the actual observations and the observations predicted by the model. To solve the non-linear least squares problem, one must know starting values for all parameters involved. Presented below is the strategy for retrieving the wave amplitude A, the wavelength λ, the period T, the phase shift ϕ and the rigid transformation from the wave LCS to the camera GCS. Experience has shown that unless the amplitude is infinitesimally small and the wave infinitely long, even very rough estimates of the unknown wave parameters assure convergence of the system of equations.
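As a minimal numerical sketch of the traveling sine wave in Equation (2) (the parameter values below are illustrative, not taken from the experiments), the surface elevation can be evaluated as:

```python
import numpy as np

def wave_height(y, t, A, T, lam, phi):
    """Elevation of the traveling sine wave, Equation (2), evaluated
    in the wave-aligned local coordinate system (LCS)."""
    return A * np.sin(2.0 * np.pi * (t / T - y / lam) + phi)

# with phi = pi/2 there is a crest at (y = 0, t = 0); the same phase
# recurs one wavelength downstream, one period later
A, T, lam, phi = 0.02, 1.5, 0.8, np.pi / 2
print(wave_height(0.0, 0.0, A, T, lam, phi))   # ≈ A (crest)
print(wave_height(lam, T, A, T, lam, phi))     # ≈ A (same phase)
```

The vectorized form accepts whole arrays of y and t, which is convenient when evaluating residuals for all observed epochs at once.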
Throughout the text, the notion of real and virtual points appears. Points are real if their projections in images come directly from the points themselves (e.g., physical targets), or virtual (i.e., specular highlights) if the camera sees merely the reflection of a real point. Virtual points are always single-image observations, as their positions in space depend on the shape of the surface from which they are reflected and the view angle of the camera (vide the law of reflection).
Figure 1. A simplistic view of the model basin. The platform is placed at a distance from the wave generator (on the left) and parallel to the wave propagation direction.

2.1. Derivation of Approximate Wave Parameters

The wave amplitude can be: (i) estimated visually, on-site, while the measurement is taking place, (ii) recovered from the wave probes, as these are commonplace in any ship model basin, or (iii) derived from image-based measurements, provided there are observed real points on the water surface (adopted by the authors). When the image-based approach is undertaken, a triangulation step must follow to obtain the 3D coordinates of the real points (cf. Section 3.1.2). The amplitude can be recovered with the complex amplitude demodulation method (AMD) or merely by removing the trend from a point's response and taking half of the maximum bound as the starting A value. The collateral benefit of the AMD is that, were there a varying-amplitude signal, a slope instead of a horizontal line would be observed. Accordingly, A would be replaced with a (linear) time-varying function, the parameters of which may be engaged in the total adjustment as well.
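The detrend-and-halve variant of option (iii) can be sketched as follows (a minimal illustration; the drift term and all values are synthetic):

```python
import numpy as np

def approx_amplitude(z):
    """Rough starting amplitude: remove a linear trend from a point's
    vertical response and take half of the peak-to-peak bound."""
    t = np.arange(len(z))
    trend = np.polyval(np.polyfit(t, z, 1), t)   # linear trend fit
    resid = z - trend
    return 0.5 * (resid.max() - resid.min())

t = np.linspace(0, 10, 500)
z = 0.02 * np.sin(2 * np.pi * t / 1.5) + 0.001 * t   # wave + slow drift
print(approx_amplitude(z))   # ≈ 0.02
```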
Similarly, the value of the period T can be approximated from on-site visual impressions, from the image data in post-processing (adopted by the authors), or taken directly from the wave probe data. The dominant period is then recovered with the help of spectral analysis, i.e., the periodogram power spectral density estimate.
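The periodogram-based period retrieval can be sketched as follows (the sampling rate and wave period are illustrative):

```python
import numpy as np
from scipy.signal import periodogram

def approx_period(z, fs):
    """Dominant period from the periodogram power spectral density."""
    f, pxx = periodogram(z, fs=fs)
    f_dom = f[1:][np.argmax(pxx[1:])]   # skip the DC bin
    return 1.0 / f_dom

fs = 50.0                          # frames per second (assumed)
t = np.arange(1000) / fs           # 20 s record
z = 0.02 * np.sin(2 * np.pi * t / 2.0)
print(approx_period(z, fs))        # ≈ 2.0 s
```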
The wavelength might be inspected visually or with the help of the method presented in Section 3.1 (adopted by the authors). The wave probe devices, being single-point-based measurements, do not deliver enough data to indicate the length of the traveling wave.
The remaining wave parameter is the phase shift. Its computation requires a real point floating on the water surface; alternatively, the wave probe data can become useful if the probe's position is known in the reference coordinate system (CS) (adopted by the authors). If one moves the CS to that point, the term y_j/λ in Equation (2) cancels out. If one further takes the starting frame t_1 = 0, the term t_1/T vanishes as well, and the phase shift can be computed from ϕ = arcsin(z_j/A), where j denotes the point at the origin of the translated CS. A slightly more elegant way to solve the equation for the initial phase shift is to use, again, the demodulation technique.
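The arcsin formula above can be coded in a few lines; clipping the ratio guards against measurement noise pushing |z_j/A| slightly above one (an added safeguard, not part of the formula itself):

```python
import numpy as np

def approx_phase(z_j, A):
    """Starting phase shift for a point at the origin of the translated
    CS at frame t_1 = 0: phi = arcsin(z_j / A)."""
    return np.arcsin(np.clip(z_j / A, -1.0, 1.0))

print(approx_phase(0.01, 0.02))   # arcsin(0.5) ≈ 0.5236 rad
print(approx_phase(0.05, 0.02))   # clipped to arcsin(1) = pi/2
```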
As for the wave transformation parameters (ω, Φ, κ; T)_GCS→LCS, one usually tries to define the global system quasi-parallel to the wave propagation direction or, in the best case, aligned with it. If so, the rigid transformation parameters can be set to zero at the beginning of the adjustment and receive corrections that compensate for the inaccurate alignment. The image data and the scene context must be exploited to find the transformation relating the two coordinate systems, by identifying, e.g., a minimum of three common points or a 3D line and a point.

2.2. Optimization Technique

In the computational part, the Gauss–Markov least squares adjustment with conditions and constraints was adopted. The adjustment workflow proceeds in repetitive cycles of five steps, i.e., (i) the generation of current approximate values of all unknown parameters, (ii) the calculation of the reduced observation vector, (iii) the calculation of the partial derivatives of the functional model with respect to all current parameters, (iv) the construction of the normal equations and (v) the solution of the system of equations.
The partial derivatives that make up the Jacobian matrix are always evaluated at the current values of the parameters. If the system of equations includes both condition and constraint equations, it does not fulfil the positive-definiteness required by, e.g., the Cholesky decomposition. Indeed, zero entries are present on the diagonal of the equation matrix in Method 2. The vector of solutions is then retrieved with the Gaussian elimination algorithm.
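The five-step cycle of Section 2.2 can be sketched as a plain Gauss–Newton iteration (a simplified variant without the condition/constraint bookkeeping of the full adjustment; the toy model and all values are illustrative):

```python
import numpy as np

def gauss_newton(f, jac, x0, l_obs, W, n_iter=10):
    """Iterative cycle: reduced observations, Jacobian at the current
    parameters, normal equations, solve for the corrections."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        dl = l_obs - f(x)                          # reduced observation vector
        A = jac(x)                                 # Jacobian at current x
        N = A.T @ W @ A                            # normal-equation matrix
        x = x + np.linalg.solve(N, A.T @ W @ dl)   # parameter update
    return x

# toy example: fit amplitude and phase of a sine with known period
t = np.linspace(0, 2, 50)
truth = np.array([0.02, 0.3])
f = lambda x: x[0] * np.sin(2 * np.pi * t / 1.5 + x[1])
jac = lambda x: np.column_stack([np.sin(2 * np.pi * t / 1.5 + x[1]),
                                 x[0] * np.cos(2 * np.pi * t / 1.5 + x[1])])
W = np.eye(len(t))
x_hat = gauss_newton(f, jac, [0.015, 0.0], f(truth), W)
print(x_hat)   # ≈ [0.02, 0.3]
```

For the mixed condition/constraint system described above, the normal matrix gains a bordered block with zeros on its diagonal, which is why the paper resorts to Gaussian elimination instead of a Cholesky factorization.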

3. Methods

3.1. Water as a Diffuse Surface

The following method sees the water as a diffuse surface, to which it is converted owing to artificial targeting. A set of retro-reflective targets floating on the water surface was tracked during the measurements (cf. Figure 2). The targets were produced in-house using a diamond grade reflective sheeting (3M™, Minneapolis, Minnesota, USA). Thanks to its cube corner reflector composition, the sheet allowed for an efficient light return over very wide entrance angles, thereby assuring good visibility in the oblique-looking camera views. The targets were interconnected with a string, so as to avoid their collision and dispersion; their spacing equaled ca. 30 cm.
Reconstruction of a single point provides all but one wave parameter: the wavelength λ. Combining the responses of a minimum of two such points makes it possible to recover the remaining λ as well. It is presumed that the 3D data have already been transformed to the LCS; to shorten the discussion, this is excluded from the mathematical model. A complete description of how to include this information in the model is given in Section 3.2.
Since the parameters are found in a least squares approach, the discussion commences with the initial parameter retrieval. In the next step, the adjustment mathematical model is outlined. Evaluation of the results continues in Section 5.
Figure 2. Left: physical, retro-reflective targets floating in the vicinity of the platform are marked with the red box. Right: a close-up of the targets; the arrows indicate established cluster pairs.

3.1.1. Mathematical Model

The functional model describes analytically the prior knowledge of the wave shape expressed in Equation (2). The modeling part, unlike in Method 2 in Section 3.2, is formulated purely in 3D space. The point triangulation is treated as a separate and unrelated phase, even though the image measurements indirectly contribute to the eventual outcome, and a joint treatment might be suggested. As a matter of fact, the 3D space is reduced to a 2D space, building on the fact that the transformation from GCS to LCS has taken place in advance, and the x-coordinate can take arbitrary values. All parameters are present in the adjustment as observations and unknowns; see the adjustment workflow in Figure 3. The stochastic model is formalized within the weight matrix and conveys the observation uncertainty. The matrix holds non-zero elements on its diagonal, which take the form w_i = σ_0²/σ_i², where σ_i signifies the a priori standard deviation of an observation and σ_0 is the a priori standard deviation of unit weight. Within the experiments, σ_0 was set to unity, whereas σ_A = 2 mm, σ_T = 0.05 s, σ_λ = 100 mm and σ_ϕ = 0.25 rad. These values are rough and rather pessimistic estimates of the uncertainty of the approximate parameters.
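The diagonal weighting just described can be sketched directly from the quoted a priori values (σ_0 = 1; units converted to metres, seconds and radians):

```python
# diagonal weights w_i = sigma_0^2 / sigma_i^2 from the a priori
# standard deviations quoted above (sigma_0 = 1)
sigma0 = 1.0
sigmas = {"A": 2e-3, "T": 0.05, "lambda": 0.1, "phi": 0.25}  # m, s, m, rad
weights = {k: sigma0**2 / s**2 for k, s in sigmas.items()}
print(weights["A"])   # ≈ 2.5e5 (tight prior on the amplitude)
print(weights["T"])   # ≈ 400
```

The pessimistic sigmas translate into small weights, so the approximate parameters act as soft observations that steer, but do not dominate, the adjustment.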
The condition equations are formed by all parameters that are regarded as observed, i.e., y_i, z_i, A, T, λ, ϕ, and follow Equation (6). Every observed point i provides three condition equations: ŷ_i = y_i, ẑ_i = z_i and Equation (2).
Figure 3. The least squares adjustment workflow of Method 1. The cycle repeats until the stop criterion is reached. x denotes the entire vector of parameters; A is the Jacobian in the observation equations; and C_ll is the covariance matrix. See f_wave, f_param in Equation (2) and Equation (6).

3.1.2. Image-Based Approximate Wave Retrieval

The starting point is to measure the targets in images, for instance with centroiding methods, ellipse fitting or cross-correlation [61,62,63]. The authors adopted the intensity centroiding method to detect points in the initial frame and the sub-pixel cross-correlation to track them in time [64]. The 2D image measurements are then transferred to 3D space in a regular bundle adjustment and exploited to recover the initial wave parameters. The developed pipeline is fully automatic and summarized in the following order:
  • clustering of 3D points;
  • coupling of neighboring clusters;
  • calculation of mean A, T and ϕ from the clusters;
  • calculation of mean λ from the couples of clusters.
The reconstructed targets in time are considered an unorganized point cloud; thus, their clustering is carried out up front (cf. Figure 2). The retrieved clusters are equivalent to the responses of a single target floating on the water surface. The clustering per se is not required, as this piece of information is already carried within the point naming convention. Nonetheless, because this may not always be the case, the clustering is a default operation. It creates boundaries in 3D space based on some measure of proximity; in our case, the Euclidean measure was chosen. The algorithm was invented by [65] and implemented in [66].
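A greedy Euclidean clustering of this kind can be sketched as follows (an illustrative region-growing stand-in for the cited implementation [65,66]; the radius and coordinates are arbitrary):

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(pts, radius):
    """Greedy Euclidean clustering of an unorganized point cloud:
    points closer than `radius` end up in the same cluster."""
    tree = cKDTree(pts)
    labels = np.full(len(pts), -1, dtype=int)
    cluster_id = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cluster_id
        while stack:
            p = stack.pop()
            for q in tree.query_ball_point(pts[p], radius):
                if labels[q] == -1:
                    labels[q] = cluster_id
                    stack.append(q)
        cluster_id += 1
    return labels

# two targets ca. 30 cm apart, each tracked over two frames
pts = np.array([[0, 0.00, 0.00], [0, 0.01, 0.01],
                [0, 0.30, 0.00], [0, 0.31, -0.01]])
labels = euclidean_clusters(pts, radius=0.05)
print(labels)   # two clusters: one per target
```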
Now, the coupling establishes a neighborhood relationship between the closest clusters (cf. Figure 2). Given a kd-tree representation of all clusters' centroids, the algorithm searches for neighbors within a desired radius and ascertains that the selected pair has an offset along the wave propagation direction. This later permits the computation of the wavelength λ. The selected radius should not be too small, but cannot be greater than the wavelength, so as to be able to resolve its length.
The mean A, T and ϕ are found within each single cluster, as pointed out in Section 2.1. To find the mean λ, the requirement is to know: (i) the period T, (ii) the direction of the wave propagation and (iii) the relation between the GCS and the LCS, where the wave is defined. For every couple of clusters, one counts how much time Δt a wave crest takes to travel between the clusters (cf. Figure 4). The distance Δd (in the LCS) between them is known and so is the period T; hence, the wavelength estimate results from the trivial proportion:
Δd / Δt = λ / T
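Worked with the numbers from the Figure 4 caption (a crest travels Δd = 10 cm between the clusters in Δt = 8 frames, with T = 12 frames), the proportion gives:

```python
# wavelength from the crest travel time: lambda = delta_d * T / delta_t
delta_d = 0.10   # distance between clusters, m
delta_t = 8      # crest travel time, frames
T = 12           # wave period, frames
lam = delta_d * T / delta_t
print(lam)       # ≈ 0.15 m, i.e., 15 cm
```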
Figure 4. Image-based approximate wave retrieval. For this cluster pair, the wave crest takes eight frames (Δt) to travel between the neighbors. If the distance between the clusters is Δy = 10 cm and the period T = 12, then the wavelength λ = 15 cm.

3.2. Water as a Specular Surface

The second developed method exploits the fact that water under some conditions can be perceived as specularly reflecting, i.e., manifesting no scattering on its surface. Two parallel arrays of linear lamps hang above the measurement space (cf. Figure 8). Their reflections on top of the water surface could be observed as static, in the steady wave condition, and dynamic, when the water was excited. In the latter case, the reflections project to distorted shapes (cf. Figure 5). Such deformations implicitly carry information about the instantaneous shape of the water surface and are investigated in the successive paragraphs.
Because the specularities (also known as highlights) travel with the observer and depend on the surface geometry, no corresponding features are present across different cameras [33,51,67]. As a result, no stereo or multi-view measurements are possible. Unless one is able to directly identify 3D rays that would intersect at the interface of a surface [40], the alternative solution to the depth-normal ambiguity is to add control information and/or impose appropriate constraints in 3D object space.
In the developed approach, the images of specular highlights and a number of parameter constraints are combined together to recover the water’s instantaneous state. This method solves a least squares problem, simultaneously determining all parameters of interest. The discussion opens with the condition equations and imposed constraints, which constitute the functional model of the LS problem. Next, the adjustment procedure is explicitly given, including: (i) the derivation of approximate values for all unknowns, (ii) the stochastic model, (iii) the system equation forming, as well as (iv) the collection of control information. Lastly, the experimental section presents the results followed by a compact conclusive paragraph.
Figure 5. Specular reflections of the control information seen by two cameras. Top: calm water condition; the water can be considered a plane, and linear features map to linear reflections. Bottom: water in the excited condition; linear features map to deformed reflections.

3.2.1. Mathematical Model

The functional model comprises the mathematical description of two observed phenomena: the perspective imaging associated with the camera system and the shape of the induced waves, which in turn is associated with the wave maker.
The camera-to-object-point relation was modeled with the collinearity equations. The shape of the induced waves was modeled with Equation (2). The defined wave model is accompanied by three constraint equations. They impose that: (i) virtual points lie on the wave surface (f_surf) (their distance from the surface = 0), (ii) for all virtual points, the incident and reflected rays make equal angles with respect to the tangent/normal at that point (f_refl) (compliance with the law of reflection), and (iii) the vector from the camera to a virtual point, the normal at the point and the vector from that point towards its 3D real position are coplanar (f_copl) (compliance with the law of reflection). The real points are always considered as ground control information; therefore, the developed method belongs to the class of calibrated-environment methods. See the adjustment workflow in Figure 7.
Apart from what has been discussed so far in Section 3.1.1, the stochastic model avoids having the solution driven by the observations that are most abundant. It limits the influence of a particular group of observations with the help of a second weighting matrix N_max. Every group of observations is assigned a value n_max that limits its participation in the adjustment to below n_max observations. The diminishing effect is realized by the expression in Equation (4), found on the diagonal of the matrix; n_obs is the cardinality of the observations within a group. The ultimate weight matrix is the product W · N_max. The bespoke weighting strategy is implemented within MicMac, an open source bundle adjustment software [68,69].
$$n_{max}^{obs_i} = \frac{n_{obs} \cdot n_{max}}{n_{obs} + n_{max}}, \qquad i = 1 \dots N^{obs}$$
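A minimal numerical sketch of the diminishing rule (our reconstruction of Equation (4), not the MicMac code; function names are ours):

```python
def effective_count(n_obs: int, n_max: int) -> float:
    """Equation (4): effective number of observations that a group of
    n_obs raw observations contributes once limited by n_max."""
    return n_obs * n_max / (n_obs + n_max)

def diag_entry(n_obs: int, n_max: int) -> float:
    """Per-observation scale factor on the diagonal of N_max; applied to
    each of the n_obs observations, it yields the effective count."""
    return effective_count(n_obs, n_max) / n_obs

# A group of 10,000 tie points limited by n_max = 50 contributes
# effectively ~49.75 observations, so it cannot dominate the solution.
damped = effective_count(10_000, 50)
```

The effective count grows with the group size but approaches n_max from below, which is exactly the behavior described in the text.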
Within the experiments, the σ_0 value was always set to unity, whereas σ_A = 1 mm, σ_λ = 10 mm, σ_T = 0.05 s, σ_ϕ = 0.25 rad, σ_xy = 0.5 pix, σ_XYZ,real = 5 mm, σ_XYZ,virtual = 10 mm, σ_T,GCS→LCS = 25 mm and σ_(ω,ϕ,κ),GCS→LCS = 0.01 rad. In analogy to Method 1, these values are rough estimates of the quality of the approximate parameter values. If the parameter settings are unclear, a means of assessing the correctness of the a priori values must be employed, e.g., variance component analysis.
Figure 6. Graphical representation of the functional model. The camera captures the scene following the laws of perspective projection. The wave model, displayed at a time instance, is defined through parameters A, λ, T and ϕ in the LCS. The camera (O), the virtual point P and the real point R lie on a common plane π, whereas the incident and reflected rays are symmetric w.r.t. the normal N.
Figure 7. The least squares adjustment workflow of Method 2. The cycle repeats until the stop criterion is reached. x denotes the entire vector of parameters; A is the Jacobian of the observation equations; H is the Jacobian of the constraint equations; k is the Lagrange multiplier; and C_ll is the covariance matrix. See f_param, f_surf, f_refl, f_copl in Equations (6)–(9), respectively.
Condition equations are functions of observations and parameters. The collinearity equations express observations as a function of parameters, here the 3D coordinates of the real or virtual points (the IOR and EOR were treated as constants in the developed implementation). Equation (5) renders the collinearity equations expanded into a Taylor series around the N initial estimates XYZ_i^0:
$$f^{col}(XYZ) = f^{col}(XYZ_i^0) + \frac{\partial f^{col}}{\partial (XYZ)}\bigg|_{i0} \, d(XYZ)_i, \qquad i = 1 \dots N$$
Optionally, one may define originally free parameters as observed unknowns. This trick helps to include any available knowledge of the unknowns in the pipeline, as well as to avoid surplus parameter updates, i.e., to steer the rate of parameter change along the iterations. The rate is controlled by the corresponding entries of the weight matrix W. Our implementation allows all parameters to be regarded as observed; therefore, any parameter in Figure 7 can be replaced with the param in Equation (6). For instance, if an X-coordinate is observed, the condition equation X̂ = X_obs + v_X and the correction equation X = X_obs + 1 · dX are written down, where v_X and dX are the observation and parameter corrections.
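The pseudo-observation trick can be illustrated on a toy one-parameter problem (hypothetical numbers, not the authors' implementation): a small weight lets the data drive the update, while a large weight pins the parameter near its observed value.

```python
import numpy as np

# Toy problem: estimate the amplitude A in z = A*sin(y) from two samples,
# optionally adding the pseudo-observation A = A_obs (hypothetical numbers).
y = np.array([0.4, 1.1])
z = np.array([0.039, 0.088])
A0, A_obs = 0.08, 0.10          # current estimate and its observed value

def solve_dA(w):
    """One Gauss-Newton update; w weights the pseudo-observation row."""
    J = np.concatenate([np.sin(y), [1.0]])[:, None]    # design matrix
    r = np.concatenate([z - A0 * np.sin(y), [A_obs - A0]])
    W = np.diag([1.0, 1.0, w])
    return np.linalg.solve(J.T @ W @ J, J.T @ W @ r).item()

A_free = A0 + solve_dA(0.0)     # update driven by the data alone
A_held = A0 + solve_dA(1e6)     # update pinned near the observed value
```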
$$f^{param} = f^{param}_{0,i} + \frac{\partial f^{param}}{\partial (param)}\bigg|_{i0} \, d(param)_i, \qquad i = 1 \dots N^{param}$$
Constraint equations do not involve observations, but pure parameters. The conceived wave model renders three constraint equations:
(i)
f s u r f ,
$$f^{surf} = -z^*(t_i) + A \sin\left[ 2\pi \left( \frac{t_i}{T} - \frac{y_i^*}{\lambda} \right) + \phi \right] = 0$$
(ii)
f r e f l ,
$$f^{refl} = \angle_{OP} - \angle_{PR} = \arccos \frac{\vec{OP} \cdot \vec{N}}{|\vec{OP}|\,|\vec{N}|} - \arccos \frac{\vec{PR} \cdot \vec{N}}{|\vec{PR}|\,|\vec{N}|} = 0$$
(iii)
f c o p l ,
$$f^{copl} = \vec{OP} \cdot (\vec{PR} \times \vec{N}) = \begin{vmatrix} OP_{x^*} & OP_{y^*} & OP_{z^*} \\ RP_{x^*} & RP_{y^*} & RP_{z^*} \\ N_{x^*} & N_{y^*} & N_{z^*} \end{vmatrix} = 0$$
where OP = [x_O* − x_P*, y_O* − y_P*, z_O* − z_P*], PR = [x_R* − x_P*, y_R* − y_P*, z_R* − z_P*], y* = y*(X, Y, Z, ω, Φ, κ; T) and z* = z*(X, Y, Z, ω, Φ, κ; T). The constraints are defined locally; thus, coordinate quantities are annexed with the symbol *. The values determined in the LCS are not considered in the adjustment, but are obtained after a 3D rigid transformation with the parameters (ω, Φ, κ; T)_GCS→LCS.
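As a sanity check, the three constraints can be evaluated on a synthetic configuration built to satisfy them exactly (a sketch with hypothetical wave parameters and vector conventions as defined above; not the authors' implementation):

```python
import numpy as np

# Hypothetical wave parameters (A, lambda, T, phi) and epoch t in the LCS:
A, lam, T, phi, t = 0.01, 1.5, 1.0, 0.0, 0.25

def wave_z(y):
    return A * np.sin(2*np.pi*(t/T - y/lam) + phi)

def wave_normal(y):
    # Gradient of F(x,y,z) = -z + wave_z(y); the normal has no x-component.
    dz_dy = -2*np.pi*A/lam * np.cos(2*np.pi*(t/T - y/lam) + phi)
    return np.array([0.0, -dz_dy, 1.0])

def f_surf(P):
    return -P[2] + wave_z(P[1])

def f_refl(O, P, R, N):
    ang = lambda u, v: np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return ang(O - P, N) - ang(R - P, N)

def f_copl(O, P, R, N):
    return (O - P) @ np.cross(R - P, N)

# Build a consistent configuration: camera O, surface point P, and a real
# point R placed along the mirrored camera ray, so all constraints vanish.
P = np.array([0.0, 0.4, wave_z(0.4)])
N = wave_normal(P[1]); n = N / np.linalg.norm(N)
O = np.array([0.3, -2.0, 3.0])
d = (P - O) / np.linalg.norm(P - O)      # incoming ray direction
r = d - 2 * (d @ n) * n                  # reflected direction
R = P + 1.7 * r
```

For any point and rays constructed this way, all three residuals are zero up to floating-point noise; in the adjustment, the same functions are driven to zero by correcting the parameters instead.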
The linearized forms of the above equations, expanded into Taylor series, are presented in Appendix A. Note that the local coordinate quantities x * , y * , z * are functions of their positions X , Y , Z in the G C S , as well as the parameters of the 3D rigid transformation. As a result, the derivatives are calculated for a composition of functions and must obey the chain rule.

3.2.2. Derivation of Control Information

The control information was neither acquired physically prior to nor during the measurements, nor were posterior efforts undertaken to collect the ground truth. The position of the linear lamps (cf. Figure 8), which served as the ground truth information, was recovered solely from the image data, under the condition that the reflecting water is globally a plane. As each measurement started with the calm water condition, the planarity condition was valid at numerous times. The imaging situation is depicted in Figure 9.
Figure 8. Two arrays of lamps are marked with red bounding boxes. The end points of the single lamps served as the control information.
Figure 9. The imaging configuration when deriving the control information. R represents the end point of a linear lamp; R′ is its virtual location, at distance d behind the water plane π. The camera to real-point rays follow the law of reflection, i.e., the incident and reflected rays form equal angles with the plane normal N.
The calculation of the XYZ coordinates of the control information in their real locations divides into: (i) the water plane derivation (from real points); (ii) the identification of homologous points across views and triangulation (virtual points); and lastly (iii) the flipping of the virtual points to their real positions.
The plane π of the water was recovered thanks to well-distributed dust particles present on its surface. Their appearance was sufficiently discriminative for their identification across views. Alternatively, one could place artificial targets on top of the water to avoid potential identification problems. Given a few (≥3) pairs or triples of 2D image points corresponding to real features, their 3D position is found by intersection. The searched plane, defined analytically as Ax + By + Cz + D = 0, is then established by singular value decomposition (SVD).
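The SVD plane fit can be sketched as follows (a minimal illustration with hypothetical point samples; not the authors' code):

```python
import numpy as np

def fit_plane_svd(pts):
    """Fit A x + B y + C z + D = 0 to 3D points (rows of pts) via SVD.
    The unit normal is the right singular vector of the centered data
    with the smallest singular value."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]            # unit normal (A, B, C)
    d = -n @ c            # D follows from a point on the plane
    return np.append(n, d)

# Noisy samples of a calm water plane z = 0.2 (hypothetical numbers):
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))
pts = np.column_stack([xy, 0.2 + 1e-4 * rng.standard_normal(50)])
A_, B_, C_, D_ = fit_plane_svd(pts)
```

The recovered plane satisfies −D/C ≈ 0.2 with a near-vertical normal, matching the simulated calm water level.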
The end points of the linear lamp reflections were identified and measured manually in the images, only in the initial frame. The subsequent tracking in time was realized with the standard cross-correlation technique. Having found and measured the reflections, their 3D locations are triangulated (the virtual position in Figure 9), ignoring the fact that the observed features are not real. The 3D points emerge on the wrong side of the water plane; thus, they must be flipped to their real positions. The flipping is done with respect to an arbitrary plane, here the water plane, determined by the coefficients A, B, C and D. The transformation performing that operation works by:
(i)
roto-translating the global coordinate system to a local coordinate system that aligns with the flipping plane,
$$T \cdot R_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -X_i & -Y_i & -Z_i & 1 \end{bmatrix} \cdot \begin{bmatrix} \frac{\lambda}{|N|} & 0 & \frac{n_x}{|N|} & 0 \\ -\frac{n_x n_y}{\lambda |N|} & \frac{n_z}{\lambda} & \frac{n_y}{|N|} & 0 \\ -\frac{n_x n_z}{\lambda |N|} & -\frac{n_y}{\lambda} & \frac{n_z}{|N|} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
where N = [n_x n_y n_z], λ = √(n_y² + n_z²), and [X_i Y_i Z_i] are the 3D coordinates of any point lying within the flipping plane.
(ii)
performing the actual flipping over the local XY-plane:
$$R_2 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
(iii)
and bringing the point back to the global coordinate system with R_1^{-1} and T^{-1}.
The entire procedure, written as a single formula, renders:
$$M = T \cdot R_1 \cdot R_2 \cdot R_1^{-1} \cdot T^{-1}$$
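The composed transformation M = T·R1·R2·R1^{-1}·T^{-1} amounts to a mirror reflection about the water plane. A compact, equivalent sketch in Householder form (column-vector convention, unlike the row-vector matrices above; hypothetical plane and point):

```python
import numpy as np

def mirror_about_plane(n, p0):
    """4x4 homogeneous reflection about the plane through p0 with normal n,
    equivalent to the composed flipping transformation M."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    H = np.eye(4)
    H[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)   # Householder reflection
    H[:3, 3] = 2.0 * (n @ p0) * n                  # restores the plane offset
    return H

# A triangulated point below the water plane flips to its real position:
n, p0 = np.array([0.0, 0.1, 1.0]), np.array([0.0, 0.0, 0.2])
M = mirror_about_plane(n, p0)
virtual = np.array([0.5, -0.3, -0.4, 1.0])
real = M @ virtual
```

The signed distance to the plane changes sign under M, and applying M twice is the identity, both properties expected of a reflection.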

3.2.3. Derivation of the Approximate Highlight Position

The highlights’ coordinates (P in Figure 6) result from the intersection of the approximate wave model with the vectors anchored in the image measurements, passing through the camera perspective center and extending into the object space. The highlights are single-image observations, so there exist no correspondences across images, as in the case of real points. The intersection points are found first by intersecting with the mean water plane and then by iteratively improving the results with Newton’s method. The points are first found in the LCS and subsequently transferred to the GCS given the approximate parameters of the rigid transformation. The algorithm is presented below.
Given the 3D vector defined by the points (x_1*, y_1*, z_1*) and (x_2*, y_2*, z_2*), at the camera center and observed in image space, respectively, the parametric equation of the 3D line takes the form:
$$\begin{bmatrix} x^* \\ y^* \\ z^* \end{bmatrix} = \begin{bmatrix} x_1^* \\ y_1^* \\ z_1^* \end{bmatrix} + t \cdot \begin{bmatrix} x_2^* - x_1^* \\ y_2^* - y_1^* \\ z_2^* - z_1^* \end{bmatrix}$$
The sought y*-coordinate of the intersection is then equal to y* = y_1* + t · (y_2* − y_1*), where t = (z* − z_1*)/(z_2* − z_1*). A unique solution is obtained when z* in the preceding equation is replaced with the mean water level, e.g., H_0 = 0 in the LCS. A better approximation can be accomplished if the intersection is performed with a more realistic model than the plane: the observed sine wave. Combining Equation (2) with the last row of Equation (13), such that the z* terms are equal, brings about the following relationship:
$$g(y^*) = y^* - y_1^* - \frac{y_2^* - y_1^*}{z_2^* - z_1^*} \cdot \left( H_0 + A \sin\left[ 2\pi \left( \frac{t}{T} - \frac{y^*}{\lambda} \right) + \phi \right] - z_1^* \right) = 0$$
The function g has one parameter, y*; together with its derivative g′, both evaluated at the current parameter value y_0*, it enters Newton’s method, which finds the ultimate root. The Newton step is the ratio of g(y_0*) and g′(y_0*) (cf. Equation (15)). The loop continues until the difference between the old and new parameter estimates falls below a defined threshold.
$$y^* = y_0^* - \frac{g(y_0^*)}{g'(y_0^*)}$$
Once the y* value is known, the z*-coordinate is computed from Equation (2), and lastly, x* can be retrieved from the 3D line equation as x* = x_1* + (z* − z_1*)/(z_2* − z_1*) · (x_2* − x_1*). The final step brings the locally-determined coordinates to the global ones with (ω, Φ, κ; T)_GCS→LCS.
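The intersection procedure can be sketched end to end (hypothetical wave parameters and ray; not the authors' code):

```python
import numpy as np

# Hypothetical wave parameters and a ray through the camera centre (p1)
# and a point anchored in the image measurement (p2), both in the LCS:
A, lam, T, phi, t, H0 = 0.02, 1.5, 1.0, 0.3, 0.4, 0.0
p1 = np.array([0.1, -2.0, 3.0])
p2 = np.array([0.2,  0.5, 2.0])

def wave(y):
    """Instantaneous surface height at y* for epoch t (Equation (2))."""
    return H0 + A * np.sin(2*np.pi*(t/T - y/lam) + phi)

def g(y):
    """Residual of the ray/wave intersection in the y*-coordinate."""
    return y - p1[1] - (p2[1]-p1[1])/(p2[2]-p1[2]) * (wave(y) - p1[2])

def dg(y):
    dz = -2*np.pi*A/lam * np.cos(2*np.pi*(t/T - y/lam) + phi)
    return 1.0 - (p2[1]-p1[1])/(p2[2]-p1[2]) * dz

# Initialize at the mean water plane z* = H0, then apply Newton steps:
y = p1[1] + (p2[1]-p1[1])/(p2[2]-p1[2]) * (H0 - p1[2])
for _ in range(50):
    step = g(y) / dg(y)
    y -= step
    if abs(step) < 1e-10:
        break

z = wave(y)
x = p1[0] + (z - p1[2])/(p2[2]-p1[2]) * (p2[0]-p1[0])
```

Convergence is rapid here because the slope term in dg is small compared to unity for gentle waves; for steep waves the plane-based initialization matters more.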

4. Imaging System

The imaging setup comprises three dSLR cameras (Canon 60D, 20-mm focal length) and three continuous illumination sources (1250 W). The spatial resolution of the videos matched full HD (1920 × 1080 pix), acquiring a maximum of 30 fps in progressive mode. The video files were lossy-compressed with the H.264 codec and saved in a .mov container.
The mean object-to-camera distance amounted to 10 m, resulting in an average image scale of 1:500. The cameras were rigidly mounted on a mobile bridge (cf. Figure 10 and Figure 11) and connected with each other, as well as with a PC, via USB cables to allow for: (i) remote triggering and (ii) coarse synchronization. Fine alignment of the video frames was possible with the help of a laser dot observed in all cameras. The laser worked in a flicker mode, at a frequency lower than that of the video acquisition, and was moved over the floating platform’s surface at both the start and the finish of each acquisition. No automation was incorporated at this stage; instead, the alignment was conducted manually.
Figure 10. Red circles point to three cameras placed on a moving platform, across the model basin. The blue rectangles point to three scale bars. The remaining scale bars are symmetrically arranged on the opposite side of the basin.
Figure 11. The top view of the measurement workplace. Cameras on top of the drawing are placed on a mobile bridge. The platform occupies the scene’s central part and is located ca. 5 m away from the bridge. The green rectilinear shapes correspond to the zones of the artificial targets (used in Method 1, and in the evaluation of Methods 1 and 2; see Section 3.1 and Section 5), while the red circle is the capacitive level sensor (used in the evaluation of Methods 1 and 2; see Section 5).
Despite the USB connections, the videos were stored on the memory cards. No spatial reference field was embedded in the vicinity of the system; instead, the calibration and orientation was carried out with the moved reference bar method [70].

5. Experiments

5.1. Evaluation Strategy

Results achieved with Method 1 (m1) and Method 2 (m2) are confronted with the responses of a capacitive level sensor and validating physical targets (cf. Figure 11). The capacitive level sensor was mounted on a rod-like probe and sensed the variations in electrical capacity within the sensor. Given the dielectric constant of the liquid, this information can be directly transformed into changes of the water level in which the probe is normally immersed. Because the sensor samples the changes at a single spot, it provides merely information on the amplitude and frequency of the water level oscillations; the same holds for the validating targets. The instantaneous wavelength thereby remained unknown, as no direct means of judging the accuracy of the calculated wavelength existed. Indirectly, the correctness of all wave parameters, including the wavelength, can be estimated by confronting the response of a number of points distributed along the wave propagation (vt1, vt2, cls) with their responses predicted from the model.
In the adjustment, m1 adopted three clusters, whereas m2 tracked up to seven highlights, corresponding to four to six ground control points (i.e., the lamps’ endpoints). The distribution of measured and validating points is displayed in Figure 12. Numerical results of the five measurement series are summarized in Table 1, with graphical representations provided in Figure 13, Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18. The third and fourth measurement series were evaluated twice, with varied image observations (specular highlights; cf. Figure 12). The adopted evaluation strategy is as follows.

5.1.1. Accuracy 1: Validating Targets (vt1, vt2)

With validating targets, we refer to points that were not treated in the adjustments aimed at finding the wave parameters. They were measured in images and independently intersected in 3D space. The Z-response of all validating targets is confronted with the value predicted from the devised wave model. The validation takes place in the LCS. Figure 13 and Figure 14 illustrate the results projected onto the traveling sine wave. The red corresponds to the response from the target; the blue is the model outcome. The normal probability plots test and confirm that the residuals follow the normal distribution.

5.1.2. Accuracy 2: The Capacitive Level Sensor (cls)

To compare the data collected by the capacitive level sensor and the image-based measurement system, temporal synchronization and data resampling had to take place. The start of the capacitive level sensor (cls) data collection was conveyed to the cameras audiovisually, by switching on a light and emitting a vocal sound. This allowed for rough temporal synchronization. To finely align the two signals, cross-correlation was carried out. The frequency of cls data collection was double that of the camera recording; therefore, to equalize the acquisition rates, every other sample of the cls device was discarded. Figure 18 illustrates the results of the comparisons.
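The resampling and fine alignment can be sketched as follows (a synthetic cls record with hypothetical numbers; the real signals came from the sensor and the image measurements):

```python
import numpy as np

fps = 30                                    # camera frame rate
t = np.arange(0, 10, 1 / (2 * fps))         # cls sampled at twice that rate
cls = 0.01 * np.sin(2 * np.pi * 0.8 * t)    # hypothetical water-level record

cls_at_fps = cls[::2]                       # discard every other cls sample
true_lag = 13                               # cameras started 13 frames late
cam = cls_at_fps[true_lag:]                 # what the image system "sees"

# Slide the shorter camera signal over the cls record; the dot product
# peaks at the residual offset left after the coarse audiovisual sync.
n = len(cam)
scores = [cam @ cls_at_fps[k:k + n] for k in range(len(cls_at_fps) - n + 1)]
lag = int(np.argmax(scores))
```

With the simulated 13-frame offset, the correlation peak recovers exactly that lag, after which the two records can be compared sample by sample.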
Figure 12. Distribution of observed highlights (blue) used in m2, artificial targets used in m1 (magenta), validating targets (green) and the capacitive level sensor (red) in three camera views, in all measurement series.
Figure 13. Accuracy 1 validation results in the first measurement series for m1 in (a) and m2 in (b), compared against the response of vt1 and vt2. Ground truth in red; fitted wave model in blue. The normal probability plots are in even columns.
Table 1. Mean precision (σ_x,y,z) and accuracy ((1)RMSE_z, (2)RMSE_z) of five measurement series evaluated with Method 1 and Method 2.
Figure 14. Accuracy 1 validation results in the second measurement series for m1 in (a) and m2 in (b), compared against the response of vt1 and vt2. Ground truth in red; fitted wave model in blue. The normal probability plots are in even columns.
Figure 15. Accuracy 1 validation results in the third measurement series for m1 in (a) and m2 in (b) and (c), compared against the response of vt1 and vt2. Ground truth in red; fitted wave model in blue. The normal probability plots are in even columns.
Figure 16. Accuracy 1 validation results in the fourth measurement series for m1 in (a) and m2 in (b) and (c), compared against the response of vt1 and vt2. Ground truth in red; fitted wave model in blue. The normal probability plots are in even columns.
Figure 17. Accuracy 1 validation results in the fifth measurement series for m1 in (a) and m2 in (b), compared against the response of vt1 and vt2. Ground truth in red; fitted wave model in blue. The normal probability plots are in even columns.
Figure 18. Accuracy 2 validation results for m1 and m2 in the first to fifth measurement series, depicted in (a–e), respectively. In red, the cls response; in blue, the m1 and m2 responses. All comparisons are carried out at the position of point cls.

5.2. Discussion

The results achieved were confronted with the responses of a capacitive level sensor and two validating targets, all of which provided single-point responses and were placed at various positions across the water basin. In an overall assessment, the specular method (Method 2) proved superior to the diffuse method (Method 1).

5.2.1. Accuracy

Method 1 performs well locally, when validated on points in the vicinity of the cluster pair (vt1), but as soon as it is confronted with distant points (vt2), the modeling errors grow significantly; compare, e.g., Figure 13a at vt1 and vt2. Method 2 employs the entire water field in the computation and therefore has a global scope free of extrapolation effects; its modeling errors are more consistent, yet of slightly higher magnitude. It shall be noted that vt1 and cls, placed on either end of the basin, were under the influence of the principal sine wave as well as the waves reflected from the basin side walls. The platform floating in the middle also disturbed the principal wave shape. The third and fourth measurement series, evaluated with Method 2 on the highlights observed at the top of the basin (series 3a-m2 and 4a-m2) and around the platform (series 3b-m2 and 4b-m2), proved that the wave, having faced an obstacle, decreases in amplitude and wavelength. Compare the significant deviations of the blue/red curves in Figure 15b and Figure 16b at vt2, as well as Figure 18c,d at the cls, with Figure 15c, Figure 16c and Figure 18c,d, respectively. This is of high importance in interpreting the behavior of the platform. To do that, one must know the form of the water body just before it hits the platform, and not some distance before that interaction, since the latter no longer corresponds to the real force exerted on the object.
Evaluation results on cls suggest that the wave form changed spatio-temporally. It was systematically attenuated with increasing distance from the generating source. The cls was mounted closer to the wave maker than vt1, vt2, the other artificial targets or the highlights and, consequently, measured higher wave amplitudes; compare the subfigures of, e.g., Figure 18c,d. Wave superposition effects (the principal and reflected waves) could contribute to the higher amplitudes as well.

5.2.2. Precision

Precision measures should not be interpreted as the sole quality measure. As the evaluation proved, they are too optimistic when confronted with the accuracy measures. Moreover, the covariance matrices in Method 1 return a standard deviation homogeneous in all coordinates, while Method 2 manifests large uncertainty in the y-coordinate. This is due to the simplified and rigorous modeling of Methods 1 and 2, respectively. Method 2 simultaneously treats the reconstruction and the modeling task, whereas Method 1 performs merely the modeling, with no special treatment of the preceding steps, other than the a priori standard deviations expressed in the weight matrix.
The inferior precision of the y-coordinate in Method 2 is a side effect of suboptimal network design, with a good base across the model basin (x-coordinate) and practically no shift along the y-axis (see the definition of the coordinate system in Figure 11). Despite the absence of parallax in the z-coordinate, the precision figures along that axis are satisfying, thanks to the introduced water surface model.

5.2.3. Wave Parameters

The wave parameters calculated with Method 1 and Method 2 differ, most markedly in the wavelength parameter. The differences are more pronounced for very long waves with small amplitudes (Series 3 and 4) and less evident for shorter waves or high amplitudes (Series 1, 2 and 5). In cases of small amplitude-to-wavelength ratios, the stability of the solution ought to be brought into discussion. Nonetheless, the subject has not been investigated further within this work.

6. Conclusions

Measurement of a difficult surface was approached with the collinearity equations, treating the surface as diffuse, and with piecewise linear equations, when restituted solely from the specular reflections visible on the surface, originating from a set of lamps hung from the ceiling. The accuracies obtained on physical targets floating on the surface, counted over the entire acquisition length, were more favorable for the latter method, falling between 1 and 3 mm.
The concept of using the lamps’ reflections for metrology was partially driven by the fact that lamps in ship testing facilities appear predominantly in the same configurations. They provide a calibration field at no labor cost, of high reliability, making the methodology universal and easy to re-apply. The diffuse method, on the contrary, necessitates extra work to establish a net of targets to be placed on the water. The points are then prone to sinking, partial submergence, occlusions or drift.
The superiority of the specular over the diffuse approach lies in full-field versus single-point shape modeling. Installing a net of points that spans a large area is infeasible; therefore, one is constrained to local observations. On the contrary, the number of specular highlights equals the number of points in the calibration field multiplied by the number of cameras within the imaging system. Their allocation over the measurement volume is steerable by the camera placement. If, however, bound to the diffuse approach while aiming at full-field data collection, eliciting the water shape over extended surfaces may be possible through the adoption of patches of nets in strategic areas. In both cases, large-field modeling demands very careful planning, especially for complex-shaped surfaces. The model definition must contain just enough parameters, whereas the observations ought to deliver enough data for their recovery.
A noteworthy aspect of the specular approach is the magnifying effect present on the water surface. Observed highlights undergo an apparent motion under the deforming surface shape. The motion magnitude and trajectory are known from the law of reflection; they depend on the camera-to-surface relation and on the relation of the point at which the reflection is observed (in its real position) to its reflection on the surface (virtual position). By modifying the distance between the reflecting surface and the calibration field (real points), the motion magnitude is changed proportionally. Put differently, very small surface deformations can render large highlight displacements for a sufficiently distant calibration field.
An important issue to consider when performing least squares adjustment, true for both the diffuse and the specular method, is the initial approximation of all unknowns. Unless their values are known well enough, the success of the adjustment is put into question. At small amplitudes, the longer the wave, the more precise the approximations must be. If the approximations are imprecise, divergence or convergence to an incorrect solution is highly probable. Reliability measures output from the covariance matrices of the adjustment may serve to evaluate the credibility of the results; however, this has not been investigated within this work.
The model of the wave shape assumes single-frequency oscillations. In large-field observations, this assumption is often violated, as has been observed in the presented work. If the assumption is violated and the model becomes insufficient to describe the phenomena, one may still: (i) use the simple model to observe the surface locally, or (ii) extend it to involve time-varying wave components, e.g., modeling the shape as the sum of two elementary waves.

Acknowledgments

This research was possible thanks to the support of the Austrian Research Promotion Agency (FFG) and the Doctoral College Energy Systems 2030 (ENSYS2030) at Vienna University of Technology.

Author Contributions

Ewelina Rupnik conceived and designed the experiments, implemented the methods, and analyzed the data; Josef Jansa and Norbert Pfeifer supervised the research, including on-site experiments, development, and critical analysis.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

$$
\begin{aligned}
f^{surf} &= f^{surf}_{0,i} + \frac{\partial f^{surf}}{\partial A}\,dA + \frac{\partial f^{surf}}{\partial T}\,dT + \frac{\partial f^{surf}}{\partial \lambda}\,d\lambda + \frac{\partial f^{surf}}{\partial \phi}\,d\phi + \frac{\partial f^{surf}}{\partial y^*_i}\,dy^*_i + \frac{\partial f^{surf}}{\partial z^*_i}\,dz^*_i \\
&= f^{surf}_{0,i} + \frac{\partial f^{surf}}{\partial A}\,dA + \frac{\partial f^{surf}}{\partial T}\,dT + \frac{\partial f^{surf}}{\partial \lambda}\,d\lambda + \frac{\partial f^{surf}}{\partial \phi}\,d\phi + \underbrace{\sum_{K \in \{X_i, Y_i, Z_i, \omega, \Phi, \kappa, T^{LCS}_{GCS}\}} \left( \frac{\partial f^{surf}}{\partial y^*_i}\cdot\frac{\partial y^*_i}{\partial K} + \frac{\partial f^{surf}}{\partial z^*_i}\cdot\frac{\partial z^*_i}{\partial K} \right) dK}_{\partial f^{surf}/\partial X,\;\partial f^{surf}/\partial Y,\;\partial f^{surf}/\partial Z,\;\partial f^{surf}/\partial \omega,\;\partial f^{surf}/\partial \Phi,\;\partial f^{surf}/\partial \kappa,\;\partial f^{surf}/\partial T^{LCS}_{GCS}}
\end{aligned}
$$
$$
\begin{aligned}
f^{refl} &= f^{refl}_{0,i} + \frac{\partial f^{refl}}{\partial x^*_O}\,dx^*_O + \frac{\partial f^{refl}}{\partial y^*_O}\,dy^*_O + \frac{\partial f^{refl}}{\partial z^*_O}\,dz^*_O + \frac{\partial f^{refl}}{\partial x^*_R}\,dx^*_R + \frac{\partial f^{refl}}{\partial y^*_R}\,dy^*_R + \frac{\partial f^{refl}}{\partial z^*_R}\,dz^*_R + \frac{\partial f^{refl}}{\partial N_{x^*,i}}\,dN_{x^*,i} + \frac{\partial f^{refl}}{\partial N_{y^*,i}}\,dN_{y^*,i} + \frac{\partial f^{refl}}{\partial N_{z^*,i}}\,dN_{z^*,i} \\
&= f^{refl}_{0,i} + \sum_{j \in \{O,R\}} \sum_{K \in \{\omega, \Phi, \kappa, T^{LCS}_{GCS}\}} \left( \frac{\partial f^{refl}}{\partial x^*_j}\cdot\frac{\partial x^*_j}{\partial K} + \frac{\partial f^{refl}}{\partial y^*_j}\cdot\frac{\partial y^*_j}{\partial K} + \frac{\partial f^{refl}}{\partial z^*_j}\cdot\frac{\partial z^*_j}{\partial K} \right) dK + \sum_{j \in \{x^*,y^*,z^*\}} \sum_{K \in \{X_i, Y_i, Z_i, \omega, \Phi, \kappa, T^{LCS}_{GCS}\}} \frac{\partial f^{refl}}{\partial N_{j,i}}\cdot\frac{\partial N_{j,i}}{\partial K}\,dK
\end{aligned}
$$
$$
\begin{aligned}
f^{copl} &= f^{copl}_{0,i} + \frac{\partial f^{copl}}{\partial x^*_O}\,dx^*_O + \frac{\partial f^{copl}}{\partial y^*_O}\,dy^*_O + \frac{\partial f^{copl}}{\partial z^*_O}\,dz^*_O + \frac{\partial f^{copl}}{\partial x^*_R}\,dx^*_R + \frac{\partial f^{copl}}{\partial y^*_R}\,dy^*_R + \frac{\partial f^{copl}}{\partial z^*_R}\,dz^*_R + \frac{\partial f^{copl}}{\partial N_{x^*,i}}\,dN_{x^*,i} + \frac{\partial f^{copl}}{\partial N_{y^*,i}}\,dN_{y^*,i} + \frac{\partial f^{copl}}{\partial N_{z^*,i}}\,dN_{z^*,i} \\
&= f^{copl}_{0,i} + \sum_{j \in \{O,R\}} \sum_{K \in \{\omega, \Phi, \kappa, T^{LCS}_{GCS}\}} \left( \frac{\partial f^{copl}}{\partial x^*_j}\cdot\frac{\partial x^*_j}{\partial K} + \frac{\partial f^{copl}}{\partial y^*_j}\cdot\frac{\partial y^*_j}{\partial K} + \frac{\partial f^{copl}}{\partial z^*_j}\cdot\frac{\partial z^*_j}{\partial K} \right) dK + \sum_{j \in \{x^*,y^*,z^*\}} \sum_{K \in \{X_i, Y_i, Z_i, \omega, \Phi, \kappa, T^{LCS}_{GCS}\}} \frac{\partial f^{copl}}{\partial N_{j,i}}\cdot\frac{\partial N_{j,i}}{\partial K}\,dK
\end{aligned}
$$
where:
$$
\begin{aligned}
N_{x^*,i} &= \frac{\partial f^{surf}}{\partial y^*_i}\cdot\frac{\partial y^*_i}{\partial X_i} + \frac{\partial f^{surf}}{\partial z^*_i}\cdot\frac{\partial z^*_i}{\partial X_i} \\
N_{y^*,i} &= \frac{\partial f^{surf}}{\partial y^*_i}\cdot\frac{\partial y^*_i}{\partial Y_i} + \frac{\partial f^{surf}}{\partial z^*_i}\cdot\frac{\partial z^*_i}{\partial Y_i} \\
N_{z^*,i} &= \frac{\partial f^{surf}}{\partial y^*_i}\cdot\frac{\partial y^*_i}{\partial Z_i} + \frac{\partial f^{surf}}{\partial z^*_i}\cdot\frac{\partial z^*_i}{\partial Z_i}
\end{aligned}
$$

References

  1. Kohlschütter, E. Stereophotogrammetrische Arbeiten, Wellen und Küstenaufnahmen. In Forschungsreise S.M.S. Planet, 3; Verlag von Karl Siegismund: Berlin, Germany, 1906. (In German)
  2. Laas, W. Messung der Meereswellen und ihre Bedeutung für den Schiffbau. In Jahrbuch der Schiffbautechnischen Gesellschaft; Springer: Berlin, Germany, 1906; pp. 391–407. (In German)
  3. Waas, S. Entwicklung eines Verfahrens zur Messung kombinierter Höhen- und Neigungsverteilungen von Wasseroberflächenwellen mit Stereoaufnahmen. M.Sc. Thesis, Ruperto-Carola University of Heidelberg, Heidelberg, Germany, 1988.
  4. Kiefhaber, D. Optical Measurement of Short Wind Waves–From the Laboratory to the Field. Ph.D. Thesis, Ruperto-Carola University of Heidelberg, Heidelberg, Germany, 2014.
  5. Nocerino, E.; Ackermann, S.; Del Pizzo, S.; Menna, F.; Troisi, S. Low-cost human motion capture system for postural analysis onboard ships. In Proceedings of Videometrics, Range Imaging, and Applications XII, Munich, Germany, 14–15 May 2011; pp. 80850L–80856L.
  6. Stojic, M.; Chandler, J.; Ashmore, P.; Luce, J. The assessment of sediment transport rates by automated digital photogrammetry. Photogramm. Eng. Remote Sens. 1998, 64, 387–395.
  7. Godding, R.; Hentschel, B.; Kauppert, K. Videometrie im wasserbaulichen Versuchswesen. Wasserwirtschaft, Wassertechnik 2003, 4, 36–40. (In German)
  8. Chandler, J.; Wackrow, R.; Sun, X.; Shiono, K.; Rameshwaran, P. Measuring a dynamic and flooding river surface by close range digital photogrammetry. In Proceedings of ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., Beijing, China, 3–11 July 2008; pp. 211–216.
  9. Mulsow, C.; Maas, H.G.; Westfeld, P.; Schulze, M. Triangulation methods for height profile measurements on instationary water surfaces. JAG 2008, 2, 21–29.
  10. Adams, L.; Pos, J. Wave height measurements in model harbours using close range photogrammetry. In Proceedings of the 15th Congress of the Int. Soc. for Photogramm. Remote Sens., Rio de Janeiro, Brazil, 17–29 June 1984.
  11. Redondo, J.; Rodriguez, A.; Bahia, E.; Falques, A.; Gracia, V.; Sánchez-Arcilla, A.; Stive, M. Image analysis of surf zone hydrodynamics. In Proceedings of the International Conference on the Role of the Large Scale Experiments in Coastal Research, Barcelona, Spain, 21–25 February 1994.
  11. Redondo, J.; Rodriguez, A.; Bahia, E.; Falques, A.; Gracia, V.; Sánchez-Arcilla, A.; Stive, M. Image analysis of surf zone hydrodynamics. In Proceedings of International Conference on the Role of the Large Scale Experiments in Coastal Research, Barcelona, Spain, 21–25 February 1994.
  12. Aarninkhof, S.G.; Turner, I.L.; Dronkers, T.D.; Caljouw, M.; Nipius, L. A video-based technique for mapping intertidal beach bathymetry. Coast. Eng. 2003, 49, 275–289. [Google Scholar] [CrossRef]
  13. Santel, F. Automatische Bestimmung von Wasseroberflächen in der Brandungszone aus Bildsequenzen mittels digitaler Bildzuordnung. Ph.D Thesis, Fachrichtung Geodäsie und Geoinformatik University Hannover, Hannover, Germany, 2006. [Google Scholar]
  14. Jähne, B.; Klinke, J.; Waas, S. Imaging of short ocean wind waves: A critical theoretical review. J. Opt. Soc. Am. 1994, 11, 2197–2209. [Google Scholar] [CrossRef]
  15. Kiefhaber, D. Development of a Reflective Stereo Slope Gauge for the measurement of ocean surface wave slope statistics. M.Sc. Thesis, Ruperto-Carola University of Heidelberg, Heidelberg, Germany, 2010. [Google Scholar]
  16. Wahr, J.; Smeed, D.A.; Leuliette, E.; Swenson, S. Seasonal variability of the Red Sea, from satellite gravity, radar altimetry, and in situ observations. J. Geophys. Res. Oceans 2014, 119, 5091–5104. [Google Scholar] [CrossRef]
  17. Schiavulli, D.; Lenti, F.; Nunziata, F.; Pugliano, G.; Migliaccio, M. Landweber method in Hilbert and Banach spaces to reconstruct the NRCS field from GNSS-R measurements. Int. J. Remote Sens. 2014, 35, 3782–3796. [Google Scholar] [CrossRef]
  18. Rupnik, E.; Jansa, J. Off-the-shelf videogrammetry—A success story. In Proceedings of ISPRS Int. Arch. Photogram. Rem. Sens. Spat. Inform. Sci., Munich, Germany, 14–15 May 2014; Volume 43, pp. 99–105.
  19. Nayar, S.K.; Ikeuchi, K.; Kanade, T. Surface reflection: Physical and geometrical perspectives. IEEE Trans. Pat. Anal. Mach. Intel. 1991, 13, 611–634. [Google Scholar] [CrossRef]
  20. Park, J.; Kak, A.C. 3D modeling of optically challenging objects. IEEE Comp. Graph. 2008, 14, 246–262. [Google Scholar] [CrossRef] [PubMed]
  21. Maresca, J.W.; Seibel, E. Terrestrial photogrammetric measurements of breaking waves and longshore currents in the nearshore zone. Coast. Eng. 1976. [Google Scholar] [CrossRef]
  22. Piepmeier, J.A.; Waters, J. Analysis of stereo vision-based measurements of laboratory water waves. In Proceedings of Geoscience and Remote Sensing Symposium, Anchorage, Alaska, USA, 20–24 September 2004; Volume 5, pp. 3588–3591.
  23. Cobelli, P.J.; Maurel, A.; Pagneux, V.; Petitjeans, P. Global measurement of water waves by Fourier transform profilometry. Exp. Fluids 2009, 46, 1037–1047. [Google Scholar] [CrossRef]
  24. Bhat, D.N.; Nayar, S.K. Stereo in the Presence of Specular Reflection. In Proceedings of the 5th International Conference on Computer Vision (ICCV), Boston, Massachusetts, USA, 20–23 June 1995; pp. 1086–1092.
  25. Wells, J.M.; Danehy, P.M. Polarization and Color Filtering Applied to Enhance Photogrammetric Measurements of Reflective Surface. In Proceedings of Structures, Structural Dynamics and Materials Conference, Austin, TX, USA, 18–21 April 2005.
  26. Black, J.T.; Blandino, J.R.; Jones, T.W.; Danehy, P.M.; Dorrington, A.A. Dot-Projection Photogrammetry and Videogrammetry of Gossamer Space Structures. Technical Report NASA/TM-2003-212146; NASA Langley Research Center: Hampton, VA, USA, 2003. [Google Scholar]
  27. Lippmann, T.; Holman, R. Wave group modulations in cross-shore breaking patterns. Coastal Eng. 1992. [Google Scholar] [CrossRef]
  28. Cox, C.; Munk, W. Measurement of the roughness of the sea surface from photographs of the sun’s glitter. J. Opt. Soc. Am. 1954, 44, 838–850. [Google Scholar] [CrossRef]
  29. Schooley, A.H. A simple optical method for measuring the statistical distribution of water surface slopes. J. Opt. Soc. Am. 1954, 44, 37–40. [Google Scholar] [CrossRef]
  30. Stilwell, D. Directional energy spectra of the sea from photographs. J. Geophys. Res. 1969, 74, 1974–1986. [Google Scholar] [CrossRef]
  31. Ikeuchi, K. Determining Surface Orientation of Specular Surfaces by Using the Photometric Stereo Method. IEEE Trans. Pat. Anal. Mach. Intel. 1981, 3, 661–669. [Google Scholar] [CrossRef]
  32. Healey, G.; Binford, T.O. Local shape from specularity. Comput. Vis. Graph. Image Process. 1988, 42, 62–86. [Google Scholar] [CrossRef]
  33. Sanderson, A.C.; Weiss, L.E.; Nayar, S.K. Structured highlight inspection of specular surfaces. IEEE Trans. Pat. Anal. Mach. Intel. 1988, 10, 44–55. [Google Scholar] [CrossRef]
  34. Halstead, M.A.; Barsky, B.A.; Klein, S.A.; Mandell, R.B. Reconstructing curved surfaces from specular reflection patterns using spline surface fitting of normals. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 4–9 August 1996; pp. 335–342.
  35. Savarese, S.; Perona, P. Local analysis for the 3rd reconstruction of specular surfaces. Comp. Vis. Pat. Recog. 2001, 2, 738–745. [Google Scholar]
  36. Bonfort, T.; Sturm, P. Voxel carving for specular surfaces. In Proceedings of the 9th International Conference on Computer Vision (ICCV), Nice, France, 13–16 October 2003; pp. 591–596.
  37. Roth, S.; Black, M.J. Specular flow and the recovery of surface structure. IEEE Comp. Soc. Conf. Comp. Vis. Pat. Recog. 2006, 2, 1869–1876. [Google Scholar]
  38. Adato, Y.; Vasilyev, Y.; Ben-Shahar, O.; Zickler, T. Toward a theory of shape from specular flow. In Proceedings of IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
  39. Sankaranarayanan, A.C.; Veeraraghavan, A.; Tuzel, O.; Agrawal, A. Specular surface reconstruction from sparse reflection correspondences. In Proceedings of IEEE 23rd Conference on Computer Vision and Pattern Recognition, San Francisco, LA, USA, 13–18 June 2010; pp. 1245–1252.
  40. Kutulakos, K.N.; Steger, E. A theory of refractive and specular 3d shape by light-path triangulation. Int. J. Comp. Vis. 2008, 76, 13–29. [Google Scholar] [CrossRef]
  41. Ma, W.C.; Hawkins, T.; Peers, P.; Chabert, C.F.; Weiss, M.; Debevec, P. Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination. In Proceedings of the 18th Eurographics conference on Rendering Techniques, Eurographics Association, Aire-la-Ville, Switzerland, 2007; pp. 183–194.
  42. Stolz, C.; Ferraton, M.; Meriaudeau, F. Shape from polarization: A method for solving zenithal angle ambiguity. Opt. Let. 2012, 37, 4218–4220. [Google Scholar] [CrossRef] [PubMed]
  43. Hertzmann, A.; Seitz, S.M. Example-based photometric stereo: Shape reconstruction with general, varying brdfs. IEEE Trans. Pat. Anal. Mach. Intel. 2005, 27, 1254–1264. [Google Scholar] [CrossRef] [PubMed]
  44. Lensch, H.P.A.; Kautz, J.; Goesele, M.; Heidrich, W.; Seidel, H.P. Image-based Reconstruction of Spatial Appearance and Geometric Detail. ACM Trans. Graph. 2003, 22, 234–257. [Google Scholar] [CrossRef]
  45. Wang, J.; Dana, K.J. Relief texture from specularities. IEEE Trans. Pat. Anal. Mach. Intel. 2006, 28, 446–457. [Google Scholar]
  46. Meriaudeau, F.; Sanchez Secades, L.; Eren, G.; Ercil, A.; Truchetet, F.; Aubreton, O.; Fofi, D. 3D Scanning of Non-Opaque Objects by means of Imaging Emitted Structured Infrared Patterns. IEEE Trans. Instrum. Meas. 2010, 59, 2898–2906. [Google Scholar] [CrossRef]
  47. Eren, G.; Aubreton, O.; Meriaudeau, F.; Sanchez Secades, L.A.; Fofi, D.; Truchetet, F.; Ercil, A. Scanning From Heating: 3D Shape Estimation of Transparent Objects from Local Surface Heating. Opt. Express 2009, 17, 11457–11468. [Google Scholar] [CrossRef] [PubMed]
  48. Hilsenstein, V. Design and implementation of a passive stereo-infrared imaging system for the surface reconstruction of Water Waves. Ph.D Thesis, Ruperto-Carola University of Heidelberg, Heildeberg, Germany, 2004. [Google Scholar]
  49. Rantoson, R.; Stolz, C.; Fofi, D.; Mériaudeau, F. 3D reconstruction of transparent objects exploiting surface fluorescence caused by UV irradiation. In Proceedings of 17th IEEE International Conference on Image Processing, Hong Kong, 26–29 September 2010; pp. 2965–2968.
  50. Ihrke, I.; Kutulakos, K.N.; Lensch, H.P.; Magnor, M.; Heidrich, W. State of the art in transparent and specular object reconstruction. EUROGRAPHICS STAR 2008. [Google Scholar]
  51. Morris, N.J. Shape Estimation under General Reflectance and Transparency. PhD Thesis, University of Toronto, Toronto, Ontario, Canada, 2011. [Google Scholar]
  52. Cox, C.S. Measurement of slopes of high-frequency wind waves. J. Mar. Res. 1958, 16, 199–225. [Google Scholar]
  53. Jähne, B.; Riemer, K. Two-dimensional wave number spectra of small-scale water surface waves. J. Geophys. Res.: Oceans (1978–2012) 1990, 95, 11531–11546. [Google Scholar] [CrossRef]
  54. Zhang, X.; Cox, C.S. Measuring the two-dimensional structure of a wavy water surface optically: A surface gradient detector. Exp. Fluids 1994, 17, 225–237. [Google Scholar] [CrossRef]
  55. Rocholz, R. Spatiotemporal Measurement of Shortwind-Driven Water Waves. Ph.D Thesis, Ruperto-Carola University of Heidelberg, Heidelberg, Germany, 2008. [Google Scholar]
  56. Sturtevant, B. Optical depth gauge for laboratory studies of water waves. Rev. Scient. Instr. 1966, 37. [Google Scholar] [CrossRef]
  57. Hughes, B.; Grant, H.; Chappell, R. A fast response surface-wave slope meter and measured wind-wave moments. Deep Sea Res. 1977, 24, 1211–1223. [Google Scholar] [CrossRef]
  58. Bock, E.J.; Hara, T. Optical measurements of capillary-gravity wave spectra using a scanning laser slope gauge. J. Atm. Ocean. Tech. 1995, 12, 395–403. [Google Scholar] [CrossRef]
  59. Murase, H. Surface Shape Recontruction of a Nonrigid Transparent Object Using Refraction and Motion. IEEE Trans. Pat. Anal. Mach. Intel. 1992, 14, 1045–1052. [Google Scholar] [CrossRef]
  60. Morris, N.J.; Kutulakos, K.N. Dynamic Refraction Stereo. IEEE Trans. Pat. Anal. Mach. Intel. 2011, 33, 1518–1531. [Google Scholar] [CrossRef] [PubMed]
  61. Shortis, M.R.; Clarke, T.A.; Robson, S. Practical testing of the precision and accuracy of target image centering algorithms. Videometrics IV, SPIE: Philadelphia, PA, USA,, October 1995; pp. 65–76. [Google Scholar]
  62. Otepka, J. Precision Target Mensuration in Vision Metrology. Ph.D Thesis, Technische Universtiät Wien, Wien, Austria, 2004. [Google Scholar]
  63. Wiora, G.; Babrou, P.; Männer, R. Real time high speed measurement of photogrammetric targets. In Pattern Recognition; Springer: Berlin Heidelberg, Germany, 2004; pp. 562–569. [Google Scholar]
  64. Kraus, K. Photogrammetry. In Advanced Methods and Applications; Dümmler Verlag: Bonn, Germany, 1997; Volume 2. [Google Scholar]
  65. Rusu, R.B. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. Ph.D Thesis, Computer Science department, Technische Universität Muenchen, Munich, Germany, 2009. [Google Scholar]
  66. Point Cloud Library (PCL). Available online: http://pointclouds.org/ (accessed on November 2014).
  67. Blake, A.; Brelstaff, G. Geometry From Specularities. In Proceedings of International Conference on Computer Vision (ICCV), Tampa, Florida, USA, December 1988; pp. 394–403.
  68. MicMac, Apero, Pastis and Other Beverages in a Nutshell! Available online: http://logiciels.ign.fr/?Telechargement (accessed on 20 March 2015).
  69. Deseilligny-Pierrot, M.; Clery, I. Apero, an open source bundle adjustment software for automatic calibration and orientation of set of images. ISPRS Int. Arch. Photogram. Rem. Sens. Spat. Inform. Sci. 2011, 38, 269–276. [Google Scholar]
  70. Maas, H.G. Image sequence based automatic multi-camera system calibration techniques. J. Photogram. Rem. Sens. 1999, 54, 352–359. [Google Scholar] [CrossRef]
