Article

Performance of 3D Wave Field Modeling Using the Staggered Grid Finite Difference Method with General-Purpose Processors

Department of Geoinformatics and Applied Computer Science, AGH—University of Science and Technology, 30-059 Krakow, Poland
*
Author to whom correspondence should be addressed.
Energies 2020, 13(17), 4573; https://doi.org/10.3390/en13174573
Submission received: 30 July 2020 / Revised: 26 August 2020 / Accepted: 1 September 2020 / Published: 3 September 2020
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)

Abstract

This paper aims to provide a quantitative understanding of the performance of numerical modeling of a wave field equation using general-purpose processors. In particular, this article presents the most important aspects related to the memory workload and execution time of the numerical modeling of both acoustic and fully elastic waves in isotropic and anisotropic media. The results presented in this article were calculated for the staggered grid finite difference method. Our results show that the more realistic the seismic wave simulation, the greater the demand for memory and computational capacity of the computing environment. The results presented in this article allow the memory requirements and computational time of wave field modeling to be estimated for the considered model (acoustic, elastic or anisotropic), so that its feasibility can be assessed in a given computing environment and within an acceptable time. Understanding the numerical modeling performance is especially important when graphics processing units (GPUs) are utilized to satisfy the intensive calculations of three-dimensional seismic forward modeling.

1. Introduction

The security of the power industry plays a key role in building the independence and security of every country. State-owned fossil fuel deposits are national assets that reinforce a country's position in the world, and all countries work intensively on the recognition of new fossil fuel deposits. Presently, a broadly discussed topic in several European countries is "shale gas" and its significance in balancing a country's energy demand.
For any quantitative assessment, the most valuable deposits are those that are well recognized and prepared for exploitation. One of the most expensive phases in preparing a deposit for exploitation is the detailed recognition of its spatial extent and quality. The most important tool for recognizing underground deposits is the seismic prospecting method. Apart from field measurements, the interpretation of the seismic wave field is the most costly and time-consuming part of the method. Seismic processing for the oil and gas industry is among the most demanding applications in terms of computing capability, together with military and weather-prediction workloads. Interpreting seismic measurements requires massive computation: a single seismic gather typically occupies a few terabytes of data, and a single seismic experiment comprises a few hundred gathers, which gives an idea of the scale and volume of the computational problem. Essential to any solution are the parallelization of the computational algorithm and its restructuring to exploit graphics card environments.
One of the most important stages of seismic measurements interpretation is the numerical wave field modeling. The results of numerical modeling are used to support the interpretation of field data and to perform sensitivity analysis related to the detectability of petrophysical variables such as porosity, fluid type and fluid saturation; they can also be used to improve understanding of the Earth’s interior.
The effectiveness of the method in deposit recognition depends on performing the modeling with as realistic a model of the geological medium as possible. A homogeneous, isotropic medium is not used in this case; the simplest useful model is isotropic and heterogeneous. Unfortunately, for some important cases (such as wave propagation in shale and "shale gas" recognition), this model fails, and an anisotropic, heterogeneous model has to be used.
Full wave field modeling in anisotropic media, which is regarded as a sufficient approximation of the real medium (Tilted Transverse Isotropy [1]), is complicated and requires a great amount of computational power, especially for three-dimensional media. Thus, the equation governing seismic wave propagation is frequently solved in much simpler media: two-dimensional or isotropic. Such simplified models are used, for example, to generate models of the P-wave velocity spatial distribution for depth migration. When the kinematic effects of anisotropy are present in the field data, three-dimensional modeling is required and a two-dimensional model is not adequate. Such simplification of a geological medium can be a source of errors, especially in the interpretation process.
Another simplification widely used in wave field modeling is the use of the acoustic wave equation [2]. The generation of fully quantitative, high-resolution models of a reservoir's physical properties must take into account the changes in wave energy by including amplitude values in the modeling. To incorporate them correctly, 3D rather than 2D models are appropriate, as well as P-wave anisotropy, attenuation, density and elastic effects. As shown in [2], "The effects of moving from 2D to 3D are much larger than those of moving from acoustic to elastic modeling, and it is not sensible to use elastic inversion in 3D unless it also correctly incorporates at least the kinematics of p-wave anisotropy". Only full wave field modeling in a three-dimensional heterogeneous anisotropic medium, in spite of its computational cost, can bridge the gap between theory and the complexities observed in field seismic data. Thus, the conclusion can be drawn that it is not advisable to use elastic inversion in 2D models.
The effectiveness of modeling algorithms is crucial for inversion and migration processes. A general outline of the various methods and applications can be found in [3]. Numerical modeling is a way of solving wave equations if it is not possible to find an analytical solution. Seismic wave propagation is complex due to heterogeneities in the geometry and physical properties of geological media. Highly non-uniform stress distribution around cracks, voids or excavations further complicates the phenomena that occur during seismic wave propagation. P-waves are converted to S-waves, and reflection and refraction waves are created at the excavation boundary. Moreover, in the presence of anisotropy, all these waves propagate with a speed that is dependent on the direction of propagation. In recent years, many numerical modeling algorithms have been proposed and subsequently implemented in commercial or open-source software; however, to solve the wave equation in a geological medium with specific geometry and distribution of physical properties, advanced numerical algorithms have to be adopted.
Many numerical methods of modeling seismic wave propagation are available. The staggered grid finite difference method is an example of a grid-based method that was implemented by [4,5] for modeling the seismic wave propagation and ground motion caused by earthquakes. Staggered grid operators are more accurate than central difference operators. The staggered grid formulation allows easy implementation of the planar free surface boundary and can be easily extended to high-order spatial difference operators and implemented on scalar, vector or parallel computers [6]. Moreover, staggered grid operators yield better conditioning of the system and better convergence.
The stability and grid dispersion of the different staggered grid finite difference schemes have been analyzed in detail in many works [6,7,8] in which tests were conducted for different orders of staggered grids, for both the standard and rotated versions of the staggered grid, in two and three dimensions, with and without anisotropy of the geological medium. It has been shown that dispersion errors in the general anisotropic case depend not only on the order of the finite difference operator used, but also on the symmetry of the anisotropic medium, the degree of anisotropy and the orientation of the symmetry axis with respect to the coordinate axes. There are dependencies that restrict the dispersion error depending on the finite difference order for the acoustic wave equation (e.g., a second order system requires a minimum sampling of 10 grid points per wavelength [4], whereas a fourth order system needs only 5 [5]). However, the stability and grid dispersion of staggered grid finite difference schemes in the presence of specific boundary conditions and medium heterogeneity are much more difficult to analyze and are often verified by numerical tests [9]. Experiments conducted by [10] show that in homogeneous media with strong triclinic anisotropy, but with a magnitude several times lower than is found in nature, no fewer than four grid points per wavelength are required by the eighth order staggered grid finite difference method to avoid dispersion. Eighth order schemes have also been used to model wave propagation in fractured anisotropic media [11].
In this article, because all these topics are adequately covered in the articles mentioned above, we do not provide a detailed analysis of the staggered grid finite difference implementation, its stability, or issues regarding numerical grid dispersion. We do, however, discuss some ideas related to the memory requirements that must be met for the numerical modeling of wave propagation in a three-dimensional medium. The results presented in this article were obtained for the numerical modeling of wave propagation both in a full anisotropic medium and in a simplified medium with only elastic or acoustic properties.
This paper begins with a short description of the equation of motion, which describes wave motion in acoustic, elastic and anisotropic media. This is followed by a brief outline of the staggered grid finite difference method used for the numerical modeling of wave propagation. The results of the estimation of memory requirements for all three types of wave motion are then presented. In the last section, the conclusions and resulting recommendations are presented.

2. Materials and Methods

2.1. The Equation of Motion—Modeling Algorithms

The equation of motion for a perfectly elastic medium can be written as:
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t) - \nabla \cdot \boldsymbol{\sigma}(\mathbf{x},t) = \mathbf{f}(\mathbf{x},t)    (1)
where ρ is the density, ü(x,t) is the second time derivative of the displacement vector, σ(x,t) is the stress tensor and f(x,t) is the external force density, with x = [x, y, z]. After applying the stress–strain relationship (2), written in Voigt notation, to Equation (1), we obtain the general form of the equation of motion in an anisotropic medium
\sigma_i = C_{ij}\,\varepsilon_j    (2)
where σi and εj are the stress and strain vectors, respectively, and Cij is the stiffness tensor.
Elastic wave propagation in an anisotropic medium is calculated by solving a system of nine first-order partial differential equations at each time step. In an isotropic elastic medium, the number of equations remains the same, but the number of material constants describing the medium is reduced to only two Lamé parameters. In an acoustic medium, where the shear modulus is equal to zero, the wave equation is described by a system of only four first-order partial differential equations [12].
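For illustration, one common form of this four-equation acoustic system is sketched below (pressure p, particle velocities vx, vy, vz, bulk modulus κ = ρv²); the exact sign convention and parametrization vary between formulations, and this is not necessarily the form used in [12]:

\begin{aligned}
\frac{\partial p}{\partial t} &= -\kappa \left(\frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z}\right),\\
\rho\,\frac{\partial v_x}{\partial t} &= -\frac{\partial p}{\partial x}, \qquad
\rho\,\frac{\partial v_y}{\partial t} = -\frac{\partial p}{\partial y}, \qquad
\rho\,\frac{\partial v_z}{\partial t} = -\frac{\partial p}{\partial z}.
\end{aligned}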

2.2. Numerical Modeling of the Equation of Motion

There are many numerical methods for solving the equation of motion (1). The most general of them are grid-based techniques in which the wave field is tracked at dense grid points, for example, the finite difference, finite element and pseudospectral methods [4]. Due to its simplicity and ease of use for simple geometries, the finite difference method is the most popular method for solving partial differential equations. It is also one of the oldest methods, dating back to the 18th century. A common feature of finite difference methods is the approximation of the differential operator by replacing the derivatives in the equation with difference quotients. Depending on the complexity of the medium in which the wave propagates and the required accuracy of the solution, different orders of finite differences are applied (e.g., [4,5,11,12]).
Various algorithms have been developed to implement finite difference methods, starting from the classical finite difference method based on the second-order displacement formulation of the elastic or acoustic wave equation, to the staggered grid finite difference method in its standard and rotated versions. The classical finite difference numerical schemes are strongly dependent on Poisson's ratio; therefore, they cannot be used for modeling in offshore regions or any onshore regions composed of materials with a high Poisson's ratio. Staggered grid methods overcome this drawback [13]. Staggered grids are used in many computational problems, such as classic electrodynamics problems [14] or problems of fluid dynamics [15]. Simulation of elastic wave propagation in isotropic media using the staggered grid method was first carried out by Madariaga [16]. The most popular and commonly used staggered grid scheme for solving 2D elastic wave propagation in isotropic media was proposed by Virieux [2]. The solution was later extended from second-order approximation in space to fourth order [3] and then adopted for more realistic vertical transverse isotropic (VTI) media [17] and for the three-dimensional medium [4]. The same standard staggered grid configuration was used in all these schemes. Moreover, the same standard staggered grid structure can be applied to model anisotropy up to orthotropic media, which is of the greatest importance for geophysical problems and hydrocarbon exploration. Wave field propagation in regions where the anisotropy level is higher than orthotropic requires different types of staggered grid configuration: the rotated staggered grid [18] or partially staggered grid configurations [19,20]. The staggered grid method can also be used for dual-tetrahedron or hexagon-pentagon meshes [21]. A drawback of all staggered grid schemes is that they need more computer memory than classical finite difference schemes based on the second-order displacement formulation of the elastic or acoustic wave equations. Numerical modeling carried out using the staggered grid finite difference method adopts separate computational grids for the stress and velocity components (when an elastic wave equation is modeled) or for the pressure and velocity components in the acoustic version of the wave equation. The separation of the computational grid improves the accuracy and convergence of the method, but using a staggered grid implies a more complicated distribution of the calculated discrete values of velocity and stress or pressure.
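To make the separation of the pressure and velocity grids concrete, the following Python sketch implements a minimal 2D acoustic staggered-grid update with second-order operators and a constant medium. It only illustrates the update pattern discussed above; it is not the eighth-order solver used later in this study, and all grid, medium and source values are arbitrary example parameters.

# Minimal 2D acoustic staggered-grid sketch (second order in time and space).
# Pressure lives at cell centers; velocities are staggered by half a cell.
import numpy as np

nx, nz = 201, 201          # grid points (example values)
dh = 5.0                   # spatial step [m]
vp = 2000.0                # constant P-wave velocity [m/s]
rho = 2200.0               # constant density [kg/m^3]
dt = 0.5 * dh / vp         # time step satisfying a CFL-type condition
nt = 500                   # number of time steps
kappa = rho * vp**2        # bulk modulus

p = np.zeros((nz, nx))     # pressure
vx = np.zeros((nz, nx))    # horizontal particle velocity (staggered in x)
vz = np.zeros((nz, nx))    # vertical particle velocity (staggered in z)

# Ricker wavelet source injected into the pressure field
f0 = 10.0
t = np.arange(nt) * dt
a = (np.pi * f0 * (t - 1.2 / f0))**2
src = (1.0 - 2.0 * a) * np.exp(-a)
sz, sx = nz // 2, nx // 2

for it in range(nt):
    # update velocities from pressure gradients (staggered differences)
    vx[:, :-1] -= dt / (rho * dh) * (p[:, 1:] - p[:, :-1])
    vz[:-1, :] -= dt / (rho * dh) * (p[1:, :] - p[:-1, :])
    # update pressure from the divergence of the velocity field
    p[:, 1:] -= dt * kappa / dh * (vx[:, 1:] - vx[:, :-1])
    p[1:, :] -= dt * kappa / dh * (vz[1:, :] - vz[:-1, :])
    p[sz, sx] += src[it]   # add source term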
The detailed layouts of the computational grids assumed in the modeling of wave propagation in anisotropic, elastic and acoustic media are shown in Figure 1, Figure 2 and Figure 3, respectively. The numbering of the components in the computational grid is the same in all three figures.
In Figure 1, the locations of the stiffness components cij, i = 1, 2, 3, j > 3 and c45, c46 and c56 are not marked. The implementation of the staggered grid means that the off-diagonal stress and strain components are not defined at the same locations. According to Hooke's law, evaluating the stress–strain relationship implies summation over a linear combination of the elements of both the elasticity tensor and the strain tensor. Hence, the values of these stress components have to be interpolated to the locations where the diagonal components are defined [8]. Although in the staggered grid formulation applied to an elastic wave equation (Figure 2) the material constants and the velocity and stress values in the basic computation cell are set at seven different locations, only the locations of three parameters (c33, c44 and ρ) must be determined during the calculation, rather than ten (c11, c12, c13, c22, c23, c33, c44, c55, c66, ρ) as for an anisotropic medium. Further simplification of the calculation model (Figure 3) reduces the number of parameters to just one value (ρ) and limits the calculation grid to only four locations.
Stability conditions and the need to limit numerical dispersion determine the size of the computational grids of the staggered grid finite difference method and the number of time steps necessary to carry out wave field modeling.
Although there are dependencies that restrict the dispersion error for the acoustic wave equation, the grid dispersion of staggered grid finite difference schemes in an anisotropic medium is often verified by numerical tests [7]. The need to reduce the dispersion error and maintain stability affects the size of the model and may prevent numerical calculations from being performed due to hardware limitations. In the next section, we present the impact of the parameters adopted in wave equation modeling on the amount of memory needed to perform calculations for anisotropic, elastic and acoustic media.

3. Results of the Memory Workload Calculations

3.1. Wave Propagation Example

In this section, the differences between staggered grid finite difference modelings conducted for acoustic and elastic wave equations in isotropic and anisotropic media, using the true Marmousi model (Figure 4), are presented [22]. The Marmousi model has been used for decades as a benchmark for various methods of velocity model estimation, as an indicator of the performance and efficiency of numerical procedures related to wave field modeling [23]. The original Marmousi model is a two-dimensional acoustic model that contains many reflectors, with characteristic steep dips. Its other features are strong velocity gradients in both lateral and vertical directions. This model was extended to an elastic version [24] and adopted to simulate wave propagation in anisotropic media according to [25].
Results of the seismic wave simulation for acoustic and elastic wave equations and the simulation in anisotropic media are presented in Figure 5, Figure 6 and Figure 7. A double couple seismic source with a Ricker wavelet with a dominant frequency of 10 Hz was assumed in all simulations.
All the snapshots present the vx component of the wave field computed after 1 s of seismic wave propagation. To highlight the differences between wave propagation in the acoustic, elastic and anisotropic cases, the visualization was carried out using normalization (to the range −1 to 1) and enhancement (Equation (3)) of the computed values.
V = v_x \log_{10}\!\left(v_x^{2}\right)    (3)
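A small numpy sketch of this post-processing is given below, assuming a hypothetical 2D snapshot array vx_snapshot; the small epsilon is our addition (not part of Equation (3)) to avoid taking the logarithm of zero.

# Normalization of a vx snapshot to [-1, 1] followed by the enhancement of Equation (3).
import numpy as np

def enhance(vx_snapshot, eps=1e-12):
    vx = vx_snapshot / np.max(np.abs(vx_snapshot))   # normalize to [-1, 1]
    return vx * np.log10(vx**2 + eps)                # Equation (3)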
There are significant differences between the results of simulations carried out using simplified, acoustic equations and simulation undertaken in an anisotropic medium. Along with the increase in the accuracy of modeling, the requirements related to the memory and resources of the computing environment increase. However, such detailed information is not needed in every seismic preprocessing or postprocessing method where the results of wave field modelings are used. The next section presents the results of the calculation of the computing environment memory requirements depending on the adopted seismic wave propagation model and restrictions imposed by the adopted numerical scheme.

3.2. Spatial Modeling Step Dependence

Calculations of the memory requirements for the modeling of wave propagation using a staggered grid finite difference scheme were conducted for a 1 km × 1 km × 1 km three-dimensional model. For this model, we calculated the amount of memory needed to solve the equation of motion separately in anisotropic, elastic and acoustic media. An explicit staggered grid finite difference scheme of eighth order in space and second order in time was adopted. The calculations were performed for varying values of the number of grid points per wavelength and of the maximum frequency of the source wavelet. The number of grid points per wavelength of the slowest wave that restricts the dispersion error depends on the order of the finite difference scheme and on the type of medium in which the wave propagation is modeled. There are known dependencies that determine the minimum number of computational nodes needed to maintain the stability of the numerical scheme, and the minimum number was calculated from these relationships. However, it is also known that the more computational nodes are taken per wavelength, the better the accuracy of wave field modeling. Based on these assumptions, the memory dependencies of the staggered grid were calculated for the number of grid points per wavelength ranging from the minimal value to twice that value, i.e., from 5 to 10 grid points per wavelength. The maximum frequency of the source wavelet was the second parameter varied during the calculations of the memory requirements of the staggered grid method. The maximal frequency of the source wavelet was varied within two ranges: low and high. Low frequencies are used in seismology, for example, in source mechanism simulations; in this article, we assume that the low frequency varies in the range of 5 to 10 Hz. The memory dependency of the staggered grid was also calculated for high frequencies, which are typical of oil, gas and coal exploration [26]; in this case, the maximal source frequency varies in the range of 100 Hz to 200 Hz.
We performed our calculations by varying the number of nodes per wavelength, the source frequency and the velocity of the seismic waves over a twofold range. We started from the initial parameter values and increased each of them by 10% of its initial value at each computational step. The parameter values that determine the amount of memory needed to model wave propagation using the staggered grid method are summarized in Table 1.
Results of the estimation of the spatial modeling step dependencies obtained for low frequencies (varying within the range 5–10 Hz) and high frequencies (varying within the range 100–200 Hz) of the source wavelet are presented in Figure 8, Figure 9 and Figure 10. The relationships between the spatial modeling step and the dependent variables were estimated separately for each outcome variable. Figure 8 presents the dependency of the spatial modeling step on the number of nodes per wavelength; Figure 9 and Figure 10 present the same dependency on the source frequency and on the velocity of the slowest wave, respectively.
The presented relationships for all the analyzed media follow the same power-law dependencies: they decrease with the number of nodes and with the frequency and increase with the velocity of the slowest wave. Regardless of the complexity of the assumed computational medium (acoustic, elastic or anisotropic), the same model relationships between the spatial modeling step and the dependent variables were obtained. The obtained relationships reflect the formula that limits dispersion in wave field modeling, Equation (4).
dh \le \frac{v_{\min}}{f_{\max}\,N_{\lambda}}    (4)
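The sketch below illustrates how Equation (4) can be evaluated over the parameter sweep described above, assuming the Table 1 initial values for the low-frequency case and 10% increments up to twice the initial value; it reproduces only the trend of the curves in Figure 8, Figure 9 and Figure 10, not the published values.

# Evaluate the largest allowable spatial step (Equation (4)) over a parameter sweep.
import numpy as np

def dh_limit(v_min, f_max, n_lambda):
    """Largest allowable spatial step from Equation (4)."""
    return v_min / (f_max * n_lambda)

def sweep(initial, steps=10):
    """Values from 'initial' to 2*'initial' in increments of 10% of 'initial'."""
    return initial * (1.0 + 0.1 * np.arange(steps + 1))

n_lambda_0, f_max_0, v_min_0 = 5.0, 5.0, 3900.0   # Table 1 initial values (low-frequency case)

for n in sweep(n_lambda_0):
    print(f"N_lambda = {n:5.1f}  ->  dh = {dh_limit(v_min_0, f_max_0, n):7.1f} m")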
Although the change in the spatial modeling step can be calculated using this simple relationship, the memory workload dependence, presented in the next section, is more complex.

3.3. Memory Workload Dependence

Results of the estimation of memory workload dependencies obtained for low and high frequencies of the source wavelet are presented in Figure 11, Figure 12 and Figure 13. As in the previous subsection, the relationships between the memory workload and the dependent variables were estimated separately for each outcome variable. Figure 11 presents the dependency of the memory workload on the number of nodes per wavelength; Figure 12 and Figure 13 present the same dependency on the source frequency and on the velocity of the slowest wave, respectively.
There are significant differences between the memory workload relationships calculated for acoustic, elastic and anisotropic media. The differences in memory workload increase with an increasing number of nodes per wavelength (Figure 11) and with the frequency of the source function (Figure 12). As a function of the velocity of the slowest wave, the differences in memory workload are significant only for high frequencies of the source function (Figure 13b) and remain almost the same for the low frequencies of the source function (Figure 13a).
Estimation of the memory and computational load starts with the spatial modeling step, which is calculated according to Equation (4).
The maximum frequency fmax is obtained from the source signal. For synthetic wavelets such as the Ricker function, it can be calculated analytically, and for field data, it can be obtained from Fourier analysis. The number of nodes per wavelength Nλ is a metaparameter that can be used to tune the modeling stability. Finally, the minimal wave velocity vmin is either given directly or determined from the stiffness tensor components using the Christoffel Equation (5) [27].
\sum_{j}\left(M_{ij} - \rho v^{2}\,\delta_{ij}\right)s_j = 0, \qquad M_{ij} = \sum_{m,n} q_m\,c_{imnj}\,q_n    (5)
In the equation, ρ denotes density, v the phase velocity, δij the Kronecker symbol and sj the polarization. Mij is referred to as the Christoffel matrix; it combines the direction vector qi with the stiffness tensor cijkl. To obtain vmin, the equation is solved at every model location for several directions qi. For each direction, the equation is solved as an eigenvalue problem, giving three velocities. The minimum velocity is then taken over the eigenvalues, the directions and, finally, the entire model.
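A compact Python sketch of this procedure is given below, assuming a stiffness tensor in 6 × 6 Voigt form and random sampling of propagation directions; the stiffness and density values are arbitrary isotropic examples, and the implementation in [27] may differ.

# Minimal phase velocity from the Christoffel equation (5), stiffness in Voigt form.
import numpy as np

VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3, (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}

def christoffel_matrix(C, q):
    """M_ij = sum_{m,n} q_m c_imnj q_n built from the Voigt-form stiffness C."""
    M = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            for m in range(3):
                for n in range(3):
                    M[i, j] += q[m] * C[VOIGT[(i, m)], VOIGT[(n, j)]] * q[n]
    return M

def v_min(C, rho, n_dirs=200):
    """Minimal phase velocity over random propagation directions."""
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    vmin = np.inf
    for q in dirs:
        eigvals = np.linalg.eigvalsh(christoffel_matrix(C, q))
        vmin = min(vmin, np.sqrt(eigvals.min() / rho))
    return vmin

# Isotropic example: lambda = 7 GPa, mu = 3 GPa, rho = 2200 kg/m^3
lam, mu, rho = 7e9, 3e9, 2200.0
C = np.zeros((6, 6))
C[:3, :3] = lam
C[0, 0] = C[1, 1] = C[2, 2] = lam + 2 * mu
C[3, 3] = C[4, 4] = C[5, 5] = mu
print(f"v_min = {v_min(C, rho):.1f} m/s")   # approximately the shear velocity sqrt(mu/rho)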
The implementation of the boundary conditions requires adding extra computational nodes, Nface, on the model faces. The total number of grid nodes N can then be estimated using Equation (6).
N = \left(1 + \frac{X}{dx} + N_{\mathrm{left}} + N_{\mathrm{right}}\right)\left(1 + \frac{Y}{dy} + N_{\mathrm{front}} + N_{\mathrm{back}}\right)\left(1 + \frac{Z}{dz} + N_{\mathrm{up}} + N_{\mathrm{down}}\right)    (6)
The number of nodes N multiplied by the size of a node (e.g., four bytes for single-precision floating-point type) gives the allocation size for a single component of the wave equation or model parameter. The overall number of allocations depends on the wave equation type (acoustic, elastic or anisotropic) and the dimensionality (2D vs 3D). For the anisotropic equation, it also heavily depends on the type of anisotropy and the handling of the dependent stiffness tensor components (precalculated or calculated on the fly).
As an example, the numerical modeling from Figure 8 was performed with Nλ = 5, fmax = 10 Hz and vmin = 315.6 m/s, the latter obtained from the Christoffel equation. Substituting into Equation (4) gives dh = 6.3 m. The size of the model is 9200 m by 2930 m, and the numbers of boundary-condition nodes are set to Nleft = Nright = Ndown = 100 for the damping condition and Nup = 4 for the free surface. Since the model is two-dimensional, Equation (6) is applied with only the x and z factors and gives N = 945,630 nodes, which require 3,782,520 B of memory per allocation.
There are six unique stiffness tensor components for the VTI modeling, c11, c12, c13, c33, c44, c66, and one additional allocation for density ρ. The wave field components include the stresses τxx, τxy, τxz, τyy, τyz, τzz and the velocities vx, vy, vz. In the case of 2D modeling, some components disappear, leaving nine allocations that require 34.0 MB of memory in total. In the elastic and acoustic modeling, the reduced memory usage is caused by further simplification of the stiffness tensor and stress components. If we were to increase, for example, fmax or Nλ two, four or eight times, the memory used would increase approximately 3.4, 12.5 and 47.9 times, respectively (showing a nonlinear dependency).
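The worked example above can be reproduced with the short sketch below, which applies Equation (4) and the two-dimensional form of Equation (6) with the values quoted in the text; the resulting node count may differ slightly from 945,630 depending on how dh is rounded.

# Memory estimate for the 2D anisotropic (VTI) Marmousi example.
def spatial_step(v_min, f_max, n_lambda):
    return v_min / (f_max * n_lambda)            # Equation (4)

def nodes_2d(x_size, z_size, dh, n_left, n_right, n_up, n_down):
    # 2D form of Equation (6): only the x and z factors are used
    nx = 1 + x_size / dh + n_left + n_right
    nz = 1 + z_size / dh + n_up + n_down
    return int(nx) * int(nz)

dh = spatial_step(v_min=315.6, f_max=10.0, n_lambda=5)      # ~6.3 m
n = nodes_2d(9200.0, 2930.0, dh, 100, 100, 4, 100)
bytes_per_alloc = n * 4                                      # single precision
allocations_2d_vti = 9                                       # as counted in the text above
total_mb = allocations_2d_vti * bytes_per_alloc / 1e6

print(f"dh = {dh:.2f} m, nodes = {n}, "
      f"one allocation = {bytes_per_alloc} B, total = {total_mb:.1f} MB")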

3.4. Execution Time Calculation

The size of the allocated memory can be used to estimate the lower bound of the modeling time. The fundamental property of stencil computations is their memory-bandwidth-bound nature: numerical calculations finish much faster than memory operations, and the memory latency cannot be hidden. This is also the case for modern GPGPU accelerators with large memory bandwidth. For example, a GTX 1080 Ti achieves sustained transfers of around 360 GB/s between multiprocessors and global memory, but this is still the performance-limiting factor.
Furthermore, stencil (nearest-neighbor) computations are characterized by a high degree of redundant memory access [28]. For example, in the eighth-order scheme in 3D space, 25 points are required to produce a single output value, whereas in the idealized case only one location would be accessed for each output value. In some instances, when the model is small, the redundancy can be hidden by the cache. However, the cache is usually too small to optimize memory access for large 3D models.
The go-to technique for minimizing the redundancy is spatial blocking. The approach employed in the custom application used to produce Figure 6, Figure 7 and Figure 8 is based on the use of shared memory and registers [28]. By keeping the data close to the chip, the global memory access redundancy can be significantly reduced.
When spatial blocking is implemented, we can assume that the amount of transferred data in each computational step is close to the optimum (only a small percentage of data transfers in the halo regions is redundant). Returning to the example of wave propagation in the anisotropic 2D Marmousi velocity model, the stress kernel performs nine reads (c11, c13, c33, c55, vx, vz, τxx, τzz, τxz) and three writes (τxx, τzz, τxz), while the velocity kernel performs six reads (ρ, τxx, τzz, τxz, vx, vz) and two writes (vx, vz). The stress kernel therefore processes a total of 45.4 MB of data and the velocity kernel 30.3 MB. Assuming the 360 GB/s bandwidth, this gives a lower bound of 126.1 μs for the stress kernel and 84.1 μs for the velocity kernel.
To obtain the total execution time of the simulation, we have to determine the time step duration and the total number of steps. In this case, the time step dt = 349.7 μs and the total number of steps NT = 2860 are calculated based on the dependencies described in [29]. The final equation for calculating the lower bound of the execution time T can be expressed as:
T \ge N_T \,\frac{S\sum_{k}\left(R_k + W_k\right)}{B}
where S denotes the single allocation size in bytes, Rk the number of reads in a given kernel k, Wk the number of writes per kernel and B the total available bandwidth in bytes per second.
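As a sketch, this lower bound for the 2D anisotropic Marmousi example can be evaluated directly from the values quoted above (allocation size, kernel read and write counts, bandwidth and number of time steps).

# Execution-time lower bound for the 2D anisotropic Marmousi example.
S = 3_782_520            # single allocation size [B]
B = 360e9                # sustained bandwidth [B/s]
N_T = 2860               # total number of time steps
kernels = {
    "stress":   {"reads": 9, "writes": 3},
    "velocity": {"reads": 6, "writes": 2},
}

per_step = sum(S * (k["reads"] + k["writes"]) / B for k in kernels.values())
T = N_T * per_step
print(f"per-step bound = {per_step * 1e6:.1f} us, total bound T = {T:.2f} s")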
Similar to the memory workload calculations, execution time calculations for acoustic, elastic and anisotropic wave propagation using the Marmousi velocity model were performed. The results obtained for low and high frequencies of the source wavelet are presented in Figure 14, Figure 15 and Figure 16.
The differences in computational time between the elastic and anisotropic models rise slowly with an increase in both the number of nodes per wavelength and the source wavelet frequency. The computational time of acoustic wave modeling grows much more slowly with increasing number of nodes and source frequency than that of elastic and anisotropic modeling (Figure 14 and Figure 15). Figure 16 presents the execution time calculated when switching from a 2D to a 3D model. The computation time shown in Figure 16b is in hours, while in the remaining figures it is in seconds.

4. Discussion

In this article, we present the memory requirements which must be satisfied when performing numerical wave field modeling based on the staggered grid finite difference method. The amount of allocated memory directly affects the computational time, which is especially important in modern GPGPU implementations. The results presented in this article were obtained for three different media: acoustic, elastic and anisotropic. The differences between the media are evident in the amount of memory needed to perform the numerical modeling (Figure 8, Figure 9 and Figure 10) and in the execution time, especially for 3D media (Figure 14, Figure 15 and Figure 16). These differences are caused by the different numbers of parameters (Figure 1, Figure 2 and Figure 3) that have to be taken into account during numerical modeling of wave propagation through each medium, and by the corresponding data transfers between the graphics card multiprocessors and global memory.
The numerical modeling of acoustic or fully elastic waves is an important tool used in various fields of science and engineering. It is used for monitoring artificial reservoirs, to understand their internal structures and the dynamic behavior of the rock mass. Solving such problems often requires multiple calculations of seismic wave propagation, especially when Monte Carlo methods are utilized. In this type of problem, the time taken by the calculations and the memory required for numerical modeling are also important issues. The issue of time-consuming calculations is widely discussed in the literature; there are solutions, such as the use of a parallel computing environment or graphics card processors, that reduce computational time significantly. The issue of memory usage, especially for GPU solutions, is a key aspect that must be considered when planning seismic modeling. The relationships presented in this work may help in planning seismic wave modeling. They also allow the complexity (or simplification) of the modeled phenomena to be taken into account so that the results of numerical simulations are obtained in a satisfactory timeframe and form. The results presented in this article include only the memory workload and execution time associated with the initial and boundary conditions. The memory requirements are further increased by intermediate modeling results such as seismograms or snapshots. Understanding numerical modeling performance is especially important when intensive calculations of three-dimensional seismic wave field modeling are performed using graphics processing units.
The execution time calculations presented in this article can be useful as an optimization target when tuning kernels in GPU calculations. The execution time of wave field modeling is also dependent on the bandwidth requirements. The use of temporal blocking, which merges several consecutive time steps into one operation in order to reduce the execution time, will be investigated in future work.

Author Contributions

Conceptualization, A.L. and A.F.; methodology, A.F.; software, D.G.; validation, A.F. and D.G.; formal analysis, A.F. and D.G.; investigation, A.F.; resources, D.G.; data curation, D.G.; writing—original draft preparation, A.F.; writing—review and editing, D.G. and A.L.; visualization, A.F. and D.G.; supervision, A.F.; project administration, A.F.; funding acquisition, A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by the AGH—University of Science and Technology, Faculty of Geology, Geophysics and Environmental Protection, as part of a statutory project.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Postma, G. Wave propagation in stratified medium. Geophysics 1955, 20, 780–806.
2. Warner, M.; Morgan, L.; Umpleby, A.; Stekl, I.; Guasch, L. Which physics for full-wavefield seismic inversion? In Proceedings of the 74th EAGE Conference & Exhibition Incorporating SPE EUROPEC, Copenhagen, Denmark, 4–7 June 2012.
3. Carcione, J.M.; Herman, G.C.; Kroode, A.P.E. Seismic modeling. Geophysics 2002, 67, 1304–1325.
4. Virieux, J. Velocity-stress finite-difference method. Geophysics 1986, 51, 889–901.
5. Levander, A.R. Fourth-order finite-difference P-SV seismograms. Geophysics 1988, 53, 1425–1436.
6. Graves, R.W. Simulating seismic wave propagation in 3D elastic media using staggered-grid finite differences. Bull. Seismol. Soc. Am. 1996, 86, 1091–1106.
7. Moczo, P.; Kristek, J.; Halada, L. 3D 4th-order staggered-grid finite-difference schemes: Stability and grid dispersion. Bull. Seismol. Soc. Am. 2000, 90, 587–603.
8. Saenger, E.H.; Bohlen, T. Finite-difference modeling of viscoelastic and anisotropic wave propagation using the rotated staggered grid. Geophysics 2004, 69, 583–591.
9. Moczo, P.; Kristek, J.; Vavryčuk, V.; Archuleta, R.J.; Halada, L. 3D heterogeneous staggered-grid finite-difference modeling of seismic motion with volume harmonic and arithmetic averaging of elastic moduli and densities. Bull. Seismol. Soc. Am. 2002, 92, 3042–3066.
10. Igel, H.; Mora, P.; Riollet, B. Anisotropic wave propagation through finite-difference grids. Geophysics 1995, 60, 1203–1216.
11. Ramos-Martinez, J.; Ortega, A.A.; McMechan, G.A. 3-D seismic modeling for cracked media: Shear wave splitting at zero-offset. Geophysics 2000, 65, 211–221.
12. Operto, S.; Virieux, J.; Amestoy, P.; L'Excellent, J.-Y.; Giraud, L.; Ben Hadj Ali, H. 3D finite-difference frequency-domain modeling of viscoacoustic wave propagation using a massively parallel direct solver: A feasibility study. Geophysics 2007, 72, SM195–SM211.
13. Bartolo, L.D.; Dors, C.; Mansur, W.J. Theory of equivalent staggered-grid schemes: Application to rotated and standard grids in anisotropic media. Geophys. Prospect. 2015, 63, 1097–1125.
14. Harlow, F.; Welch, J. Numerical calculation of time-dependent viscous incompressible flow of fluid with free surface. Phys. Fluids 1965, 8, 2182–2189.
15. Madariaga, R. Dynamics of an expanding circular fault. Bull. Seismol. Soc. Am. 1976, 66, 639–666.
16. Yee, K. Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media. IEEE Trans. Antennas Propag. 1966, 14, 302–307.
17. Faria, E.L.; Stoffa, P.L. Finite-difference modeling in transversely isotropic media. Geophysics 1994, 59, 282–289.
18. Saenger, E.H.; Gold, N.; Shapiro, S.A. Modeling the propagation of elastic waves using a modified finite-difference grid. Wave Motion 2000, 31, 77–92.
19. Andrews, D. A numerical study of tectonic stress release by underground explosions. Bull. Seismol. Soc. Am. 1973, 63, 1375–1391.
20. Magnier, S.; Mora, P.; Tarantola, A. Finite differences on minimal grids. Geophysics 1994, 59, 1435–1443.
21. Liu, Y. Fourier analysis of numerical algorithms for the Maxwell's equations. J. Comput. Phys. 1996, 124, 396–416.
22. Brougois, A.; Bourget, M.; Lailly, P.; Poulet, M.; Ricarte, P.; Versteeg, R. Marmousi, model and data. In Proceedings of the EAEG Workshop-Practical Aspects of Seismic Data Inversion, Copenhagen, Denmark, 28 May–1 June 1990; European Association of Geoscientists & Engineers: Houten, The Netherlands, 1990.
23. Versteeg, R. The Marmousi experience: Velocity model determination on a synthetic complex data set. Lead. Edge 1994, 13, 927–936.
24. Martin, G.S.; Wiley, R.; Marfurt, K.J. Marmousi2: An elastic upgrade for Marmousi. Lead. Edge 2006, 25, 156–166.
25. Yan, J.; Sava, P. Elastic wave-mode separation for VTI media. Geophysics 2009, 74, WB19–WB32.
26. Sun, W.; Zhou, B.; Hatherly, P.; Fu, L.Y. Seismic wave propagation through surface basalts—Implications for coal seismic surveys. Explor. Geophys. 2010, 41, 1–8.
27. Jaeken, J.W.; Cottenier, S. Solving the Christoffel equation: Phase and group velocities. Comput. Phys. Commun. 2016, 207, 445–451.
28. Micikevicius, P. 3D finite difference computation on GPUs using CUDA. In Proceedings of the 2nd Workshop on General-Purpose Processing on Graphics Processing Units, Washington, DC, USA, 8 March 2009; pp. 79–84.
29. Pei, Z.; Fu, L.-Y.; Sun, W.; Jiang, T.; Zhou, B. Anisotropic finite-difference algorithm for modelling elastic wave propagation in fractured coalbeds. Geophysics 2012, 77, C13–C26.
Figure 1. The layout of a finite difference grid cell for the staggered grid formulation in anisotropic media. The node numbers define the locations of stress components, elastic parameters and density.
Figure 2. The layout of a finite difference grid cell for the staggered grid formulation in elastic media. The node numbers define the locations of stress components, elastic parameters and density.
Figure 3. The layout of a finite difference grid cell for the staggered grid formulation in acoustic media. The node numbers define the locations of stress components, elastic parameters and density.
Figure 4. Two-dimensional Marmousi model. The yellow cross indicates the location of the source of the seismic wave used in the numerical simulation; the vertical scale indicates layer densities.
Figure 5. Snapshots computed after a 1 s simulation of the acoustic wave equation in the two-dimensional Marmousi model. The visualization was made using the normalized and enhanced value of component vx.
Figure 6. Snapshots computed after a 1 s simulation of the elastic wave equation in the two-dimensional Marmousi model. The visualization was made using the normalized and enhanced value of component vx.
Figure 7. Snapshots computed after a 1 s simulation of the wave equation in the anisotropic two-dimensional Marmousi model. The visualization was made using the normalized and enhanced value of component vx.
Figure 8. The relationships between the distance to the nearest calculation nodes dh (the largest allowable spatial modeling step) and the number of nodes per wavelength for: low (a) and high (b) frequencies of the source wavelet.
Figure 9. The relationships between the distance to the nearest calculation nodes dh (the largest allowable spatial modeling step) and source frequency for: low (a) and high (b) frequencies of the source wavelet.
Figure 10. The relationships between the distance to the nearest calculation nodes dh (the largest allowable spatial modeling step) and velocity of the slowest wave for: low (a) and high (b) frequencies of the source wavelet.
Figure 11. The relationships between the memory workload and the number of nodes per wavelength for: low (a) and high (b) frequencies of the source wavelet.
Figure 12. The relationships between the memory workload and source frequency for: low (a) and high (b) frequencies of the source wavelet.
Figure 13. The relationships between the memory workload and velocity of the slowest wave for: low (a) and high (b) frequencies of the source wavelet.
Figure 14. The execution time computed for different numbers of nodes per wavelength for: low (a) and high (b) frequencies of the source wavelet.
Figure 15. The execution time computed for different source frequencies for: low (a) and high (b) frequencies of the source wavelet.
Figure 16. The execution time computed for different model sizes for: low (a) and high (b) frequencies of the source wavelet.
Table 1. The values of the parameters used in the memory requirements calculations.

Parameter                       Initial Value
Nodes per wavelength            5 nodes
Source frequency                5 Hz and 100 Hz
Minimal seismic wave velocity   3900 m/s
