1. Introduction
The numerical simulation of solid mechanics problems involving history-dependent materials and large deformations has historically represented one of the most important topics in computational mechanics. In the framework of discrete mechanics, the Discrete Element Method (DEM) [
1] is a popular technique in geotechnical engineering [
2] for its advantages in handling large deformation and failure problems and for its relatively simple algorithm and implementation. However, it is well known that time-consuming procedures are needed to calibrate the material parameters, and real-scale problems are still extremely hard to handle because of the huge computational effort required. Within the continuum mechanics framework, a larger number of numerical techniques has been tested, and their effectiveness in the context of cohesive-frictional materials undergoing large deformations has been assessed. In this regard, an important contribution is represented by the works of Cervera and coworkers [
3,
4,
5,
6], where the Finite Element Method (FEM) is employed to study strain localization in cohesive-frictional materials. However, their work is limited to the case of infinitesimal strains and does not consider the large deformation regime [
7]. The Lagrangian viewpoint is, in this context, practically an obligatory choice, since such a framework greatly simplifies the constitutive modeling and the tracking of the entire deformation process. In the case of mesh-based methods, the natural limitation of the Lagrangian approach is related to the deformation of the underlying discretisation, which tends to get tangled as the deformation increases. Aggressive remeshing techniques have proven capable of extending the realm of applicability of Lagrangian approaches well beyond their original limits. For instance, the Arbitrary Lagrangian-Eulerian (ALE) method [
8] has been developed in an attempt to overcome the limitations of the Total Lagrangian (TL) and Updated Lagrangian (UL) techniques when severe mesh distortion occurs. Many studies on the application of the ALE method can be found in the literature, such as, for example, the numerical modeling of friction stir welding [
9] or the simulation of geotechnical problems [
10]. Another Lagrangian technique, which exploits such remeshing procedures, is represented by the particle finite element method, first developed for the simulation of free surface flows and breaking waves [
11,
12] and then successfully adapted for structural mechanics problems involving large deformations [
13,
14,
15] or for the simulation of viscoplastic materials [
16,
17,
18,
19,
20,
21] and geomaterials [
22,
23]. Although the method has broad capabilities, some disadvantages come from the use of remeshing procedures, such as issues in the parallelization of the code and in the conservation of mass, and difficulties related to the storage of history variables [
24,
25,
26]. Among the meshfree techniques, it is worth mentioning the Smooth Particle Hydrodynamics method (SPH) [
27,
28], a Lagrangian meshfree method, initially designed for hydrodynamics problems [
29,
30]. Recently, this method has been successfully applied to large deformation problems of geomaterials [
31,
32,
33,
34]. However, some disadvantages come from its meshfree nature, such as the impossibility of directly including boundary conditions in the SPH formalism, penetration problems between continua at high speeds or under impact, and the need to introduce artificial viscosity to avoid unstable solutions.
Alternative Lagrangian techniques proposed in the literature, which overcome the drawbacks listed above, are represented by the Material Point Method (MPM) [
35,
36] and the Galerkin Meshless Method (GMM) [
37], which are the objects of study of the current work. The MPM is a particle-based method, extensively used for geotechnical problems [
38,
39,
40] for its capability of tracking extremely large deformations while preserving the material properties of the material points. Most MPM codes make use of explicit time integration, due to the ease of formulation and implementation. However, some implicit versions of MPM can be found in the literature. For instance, Guilkey [
41] exploits the similarities between MPM and FEM in an implicit solution strategy. Beuth [
42] proposes an implicit MPM formulation for quasi-static problems using high order elements and a special integration procedure for partially-filled boundary elements. Finally, Sanchez [
43] presents an implicit MPM for quasi-static problems using a Jacobian-free algorithm. In the current work, the displacement-based formulation and the time integration scheme of the MPM algorithm are equivalent to those proposed by [
41]. The GMM is a truly meshless method, based on a Galerkin formulation. Unlike other methods, such as the element-free Galerkin method [
44] or the reproducing kernel particle method [
45], the technique employed in this work does not need element connectivity for integration or interpolation purposes. The algorithm presented in [
37] to simulate fluid-structure interaction problems is taken as a starting point and adapted to the simulation of deformable solids.
Both MPM and GMM combine a Lagrangian description of the body under analysis, which is represented by a set of particles, the so-called material points, with the use of a computational mesh: a background grid in the case of MPM and a cloud of nodes in the case of GMM. At each time step, the governing equations are solved on the computational nodes, while history-dependent variables and material information are saved on the particles during the entire deformation process. As MPM and GMM do not employ any kind of remeshing procedure, the calculation is always performed at a local level, allowing an easy adaptation of the code to parallel computation. Moreover, the conservation of mass is guaranteed during the whole simulation time, as the total mass is distributed between the material points, which represent the volume of the entire continuum under study. Last but not least, remapping of the state variables is avoided, so complex time-dependent constitutive laws can be employed without committing any mapping error. Given the fulfillment of the aforementioned features, MPM and GMM could represent suitable choices for the solution of real-scale large deformation problems, and a comparison within a unified framework would be beneficial for an objective evaluation of the capabilities of each method.
In the present paper, the reader can find a detailed presentation of the implicit MPM and GMM algorithms and, as the main novelty, an assessment of their performance in terms of accuracy, computational cost and robustness. Both techniques employ a variational Galerkin formulation for solid mechanics problems, defined within an updated Lagrangian kinematic framework, under the assumption of finite strains. The first one uses a fixed background grid, which is deformed and then reset at every solution step. The second one uses a purely Lagrangian meshless approach. The two methods differ conceptually only in the way the shape functions and their gradients are computed. Nevertheless, as we shall see, the use of one technique leads to advantages and disadvantages over the alternative one. Both techniques described are implemented by the authors within Kratos Multiphysics [
46,
47], a framework for building multi-disciplinary finite element programs in C++.
The paper is structured as follows: the governing equations are presented in their strong form in
Section 2, and the algebraic linearized system is obtained in
Section 3. After that, MPM and GMM are briefly revised, and the two algorithms are detailed in
Section 4 to emphasize the similarities with an FEM algorithm. Some benchmark tests are simulated to assess the capabilities of the developed formulations (
Section 5), and finally,
Section 6 provides some concluding remarks and suggestions for future research.
2. Governing Equations
Let us consider the body
, which occupies a region
of the three-dimensional Euclidean space
with a regular boundary
in its reference configuration. A deformation of
is defined by a one-to-one mapping:
that maps each point
p of the body
into a spatial point
:
which represents the location of
p in the deformed configuration of
. The region of
occupied by
in its deformed configuration is denoted as
.
The problem is governed by mass and linear momentum balance equations:
where
is the mass density,
is the acceleration,
is the velocity,
is the symmetric Cauchy stress tensor and
is the body force. Acceleration and velocity are, by definition, the material derivatives of the velocity,
, and the displacement,
, respectively. For a compressible material, the conservation of mass is satisfied by:
where
is the density in the undeformed configuration and
is the determinant of the total deformation gradient
=
/
with
and
representing the current and initial position, respectively. Equation (
4) holds at any point and, in particular, at the sampling points where the equation is written, i.e., the material points. Thermal effects are not considered in the present work, so the energy balance is implicitly fulfilled.
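In a material point code, the mass conservation statement of Equation (4) translates into a per-point density update through the determinant of the deformation gradient. A minimal numpy sketch (the function and variable names are illustrative assumptions, not the implementation discussed later):

```python
import numpy as np

def update_density(rho0, F):
    """Density update from mass conservation, rho = rho0 / J, with
    J = det(F) the determinant of the total deformation gradient."""
    J = np.linalg.det(F)
    assert J > 0.0, "non-positive Jacobian: inverted configuration"
    return rho0 / J

F = np.array([[1.1, 0.0],
              [0.0, 1.1]])            # uniform 10% biaxial stretch, J = 1.21
rho = update_density(2000.0, F)       # density decreases as the volume grows
```

Since each material point carries a constant mass, this update preserves the total mass exactly, with no remapping involved.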
The balance equations are solved numerically in a three-dimensional region
, in the time range
, given the following boundary conditions on the Dirichlet (
) and Neumann boundaries (
), respectively:
where
is the unit outward normal.
A constitutive equation for the evaluation of the stress-strain relation is also needed to fully define the boundary value problem.
3. Weak Form
In
Section 2, the strong form of the problem has been defined. In this section, the weak form is derived, following the formulation explained in [
48], a displacement-based finite element procedure.
Let the displacement space
be the space of vector functions whose components and their first derivatives are square-integrable; the integral form of the problem is:
where
is an arbitrary test function, such that
,
is the differential volume and
the differential boundary surface. By integrating by parts, applying the divergence theorem and considering the symmetry of the stress tensor, the following expression is obtained:
Under the assumption that the stress tensor is a function of the current strain only:
the problem is reduced to find a kinematically-admissible field
that satisfies:
where
G is the virtual work functional defined as:
3.1. Linearization of the Spatial Weak Formulation
In this work, we address general Boundary Value Problems (BVPs), characterized by both geometrical and material non-linearity. When a non-linear BVP is considered, the discretisation of the weak form results in a system of non-linear equations, whose solution therefore requires a linearization. The best known and most widely used technique is the Newton-Raphson iterative procedure, which makes use of directional derivatives to linearize the non-linear equations.
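The structure of the Newton-Raphson procedure used here can be sketched generically (a toy system stands in for the discretised weak form; names and the convergence test are illustrative assumptions):

```python
import numpy as np

def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson loop: at each iteration solve J du = -R
    and update the unknown until the residual norm drops below tol."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        R = residual(u)
        if np.linalg.norm(R) < tol:
            return u
        u = u + np.linalg.solve(jacobian(u), -R)
    raise RuntimeError("Newton-Raphson did not converge")

# toy nonlinear system standing in for the discretised weak form
R = lambda u: np.array([u[0]**2 - 2.0, u[0] * u[1] - 2.0])
J = lambda u: np.array([[2.0 * u[0], 0.0], [u[1], u[0]]])
u = newton_raphson(R, J, [1.0, 1.0])   # converges to (sqrt(2), sqrt(2))
```

In the solid mechanics setting, the Jacobian is the tangent stiffness matrix obtained from the linearization derived below, and the residual collects internal, inertial and external force contributions.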
The virtual work functional of Equation (
10) is linearized with respect to the unknown
, using an arbitrary argument
, which is chosen to be the last known equilibrium configuration. The linearized problem is to find
such that:
where
L is the linearized virtual work function and:
is the directional derivative of
G at
in the direction of
, given by:
Under the assumption of conservative external loads, only the terms related to the internal and inertial forces are dependent on the deformation. Using the following definitions:
where
is the strain field at
and
, the directional derivative
reduces to:
which can be split into a static and dynamic contribution.
Under the assumption of finite strains and adopting an updated Lagrangian kinematic framework, the expression of the directional derivative (Equation (
15)) should be derived in spatial form. A common way to do so consists of linearizing the material weak form and then performing a push-forward operation to recover the spatial form [
48]. Therefore, the linearization of the weak form derived with respect to the initial configuration reads:
where
and
are the material and spatial gradient operator, respectively,
is the second Piola–Kirchhoff stress tensor,
is the fourth order incremental constitutive tensor and
is the differential volume element in the undeformed configuration. The linearization of the weak form with respect to the current configuration can be derived by pushing forward the linearization of Equation (
16). The first term can be directly written in terms of the Kirchhoff stress
as:
and using the standard identity
, Equation (
17) can be written as:
The second integral of Equation (
16) can be re-written as:
adopting the transformation of the fourth order incremental constitutive tensor
in Voigt notation [
48]:
where lowercase indexes refer to the incremental constitutive tensor relative to the Kirchhoff stress, while uppercase indexes refer to the incremental constitutive tensor relative to the second Piola–Kirchhoff stress.
With these transformations, the linearization of the static contribution at the current configuration is:
Considering the definition of the determinant of the deformation gradient:
the following relations hold
where
and
are the Cauchy and Kirchhoff stress tensor, respectively, and
is the incremental constitutive tensor relative to the Cauchy stress. Equation (
16) can now be re-written in the current configuration as:
Equation (
25) represents the linearization of the spatial weak formulation, also known as the updated Lagrangian formulation, since the deformation state
is continuously updated during the non-linear incremental solution procedure, e.g., Newton–Raphson’s method.
3.2. Spatial Discretisation
For the sake of clarity, hereinafter, the
p subscript is used to refer to variables attributed to the material points, while the
I subscript is used to refer to variables attributed to the computational nodes. Let us discretize the continuum body
by a set of
material points, assigning a finite volume of the body
to each of them. Thus, the geometrical representation (
) of
reads:
and the integrals of the weak form can be written as:
Let
be a finite element space to approximate
. The problem is now finding
such that:
or using Equation (
25):
The detailed procedure to obtain the linearization of Equation (
29) can be found in [
48].
The final discretized form can be written as:
where
I and
K are the indexes of the finite element’s nodes,
is the spatial gradient of the shape function evaluated at node
I,
is the matrix form of the incremental constitutive tensor
,
is the volume relative to a single material point,
is the surface and
is the deformation matrix relative to node
I, expressed here for a 2D problem as:
The left-hand side of Equation (
30) is given by three addends multiplied by the increment of the unknowns. The first one is commonly known as the geometric stiffness matrix:
while the second term is known as the material stiffness matrix:
and their sum represents the static contribution to the tangent stiffness matrix:
The dynamic component is given by:
and its definition depends on the adopted time scheme as explained in
Section 4.
Finally, the tangent stiffness matrix is given by:
and represents the submatrix relative to one node of the discretisation with dimension
, where
is the number of degrees of freedom of a single node. This matrix can be considered as the Jacobian matrix of the right-hand side of Equation (
30), i.e., the residual
. Equation (
30) can be rewritten in compact form as:
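The static part of the assembly just defined can be sketched for a single 2D material point as follows (a minimal numpy illustration; the names, the Voigt ordering (xx, yy, xy) and the data layout are our own assumptions, not the Kratos Multiphysics implementation):

```python
import numpy as np

def point_static_stiffness(dN, sigma, D, Vp):
    """Static tangent stiffness of one 2D material point (illustrative):
    material part   B_I^T D B_K * Vp, with B the deformation matrix,
    geometric part  (grad N_I . sigma . grad N_K) * Vp on each 2x2 block.
    dN: (n, 2) shape-function gradients, sigma: (2, 2) Cauchy stress,
    D: (3, 3) constitutive matrix in Voigt order (xx, yy, xy)."""
    n = dN.shape[0]
    B = np.zeros((3, 2 * n))
    for I in range(n):
        B[0, 2 * I] = dN[I, 0]
        B[1, 2 * I + 1] = dN[I, 1]
        B[2, 2 * I] = dN[I, 1]
        B[2, 2 * I + 1] = dN[I, 0]
    K = B.T @ D @ B * Vp                        # material stiffness
    for I in range(n):                          # geometric stiffness
        for J in range(n):
            g = dN[I] @ sigma @ dN[J] * Vp
            K[2 * I:2 * I + 2, 2 * J:2 * J + 2] += g * np.eye(2)
    return K
```

With a symmetric stress tensor and constitutive matrix, the resulting tangent block is symmetric, as expected for the static contribution.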
4. MPM and GMM Techniques
In
Section 3, the formulation common to both techniques, based on an updated Lagrangian description of the continuum, was presented. In this section, the characteristic features of MPM and GMM are highlighted.
As previously stated, MPM and GMM are two Lagrangian techniques and share the fact that the material points are to be understood as the integration points of the calculation, each carrying information about the material and kinematic response. Each material point represents a computational element with one single integration point (the material point itself), whose connectivity is defined either by the nodes of the element in which it falls (in MPM) or by the nodes in the cloud around the material point (in GMM). In both algorithms, the initial position of the material points is chosen to coincide with the Gauss points of an FE grid, and the mass, which remains constant during the simulation, is equally distributed between the material points initially falling in the same element.
In MPM, such material points may migrate from one element of the grid to another. In the evaluation of the FEM integrals, the shape functions are evaluated at the material point location on the basis of the grid element into which the material point falls (
Figure 1a). To prevent mesh distortion, the nodal solution is deleted so that, at each time step, the undeformed mesh is recovered. In GMM, instead, the material points move together with the computational nodes, and the shape functions are evaluated once the surrounding cloud of nodes is defined (
Figure 1b). In this case, the nodes preserve their history through the whole simulation, as in FEM.
4.1. MPM
MPM is a particle method, proposed for the first time by Harlow [
49] for the solution of fluid flow problems under a large deformation regime and originally known by the name of the Particle-In-Cell (PIC) method. Some decades later, Sulsky and coworkers presented its extension to solid mechanics problems [
35,
36]. As previously stated, most MPM codes make use of explicit time schemes, which are generally preferable when simulating high-velocity impacts or fast transient problems. In other cases, for example when the driving force is gravity or when the rate of deformation is small, the adoption of an implicit time scheme is the better choice, since the stability of the method (for properly chosen dissipative schemes) does not depend on the wave propagation speed within the medium, which sets the typical time step limitation of explicit approaches [
50].
In the current work, the displacement-based formulation and the time integration scheme of the MPM algorithm are equivalent to those proposed by [
41].
4.1.1. MPM Algorithm
Traditionally, the MPM algorithm is composed of three different phases [
36], as graphically represented in
Figure 2:
- (a)
Initialization phase (
Figure 2a): At the beginning of the time step, the connectivity is defined for each material point, and the initial conditions on the FE grid nodes are created by means of a projection of the material points' information obtained at the previous time step
;
- (b)
UL-FEM calculation phase (
Figure 2b): The local matrix, represented by the left-hand-side (
) of Equation (
30), and the local vector, constituted by the right-hand-side (
) of Equation (
30), are evaluated in the current configuration according to the formulation presented in the previous section. The global left-hand-side matrix (
) and the global right-hand-side vector (
) are obtained by assembling the local contributions of each material point, and finally, the system is iteratively solved. During the iterative procedure, the nodes are allowed to move, according to the nodal solution, and the material points do not change their local position within the geometrical element until the solution has reached convergence;
- (c)
Convective phase (
Figure 2c): During the third and last phase, the nodal information at time
is interpolated back to the material points. The position of the material points is updated and, in order to prevent mesh distortion, the undeformed FE grid is recovered.
Many features of the MPM are connected to the finite element method [
35]. Indeed, Phase b coincides with the calculation step of a standard non-linear FE code, while Phases a and c define the distinctive MPM features.
At the beginning of each time step (
), during Phase a, the degrees of freedom and the variables on the nodes of the fixed mesh are defined gathering the information from the material points (
Figure 2a).
The momentum
and inertia
on the material points, which are defined as functions of mass
, velocity
and acceleration
:
are projected on the background grid by evaluating in a first step the global values of nodal mass
, momentum
and inertia
as described in Algorithm 1.
Once
,
and
are obtained, it is possible to compute the values of nodal velocity and nodal acceleration of the previous time step as:
It is worth mentioning that the initial nodal conditions are evaluated at each time step using material point information in order to have initial values even on grid elements that were empty at the previous time step.
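The projection just described can be sketched as follows (an illustrative numpy fragment, not the actual Algorithm 1 code; the names and array layouts are assumptions):

```python
import numpy as np

def project_to_grid(N, mass_p, vel_p):
    """Initialization phase (illustrative): lump particle mass and momentum
    to the grid nodes through the shape functions, then recover the nodal
    velocity of the previous step. N: (n_particles, n_nodes) shape-function
    values of each particle at its connected nodes."""
    m_I = N.T @ mass_p                          # nodal mass
    mv_I = N.T @ (mass_p[:, None] * vel_p)      # nodal momentum
    v_I = np.zeros_like(mv_I)
    active = m_I > 1e-12                        # skip empty grid nodes
    v_I[active] = mv_I[active] / m_I[active, None]
    return m_I, v_I
```

Dividing the projected momentum by the projected mass recovers a mass-weighted nodal velocity, which is what the initialization phase requires.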
Both Lagrangian techniques presented in this paper make use of a predictor/corrector procedure, based on the Newmark integration scheme.
In MPM, the prediction of the nodal displacement, velocity and acceleration reads:
where the upper-left side index
indicates the iteration counter, while the upper-right index
n indicates the time step.
and
are Newmark’s coefficients equal to 0.5 and 0.25, respectively.
Once the nodal velocity and acceleration are predicted (Equations (
42)–(
44)), the system of linearized governing equations is formulated, according to
Section 3, and the local matrix
and the residual
are evaluated and assembled according to Equations (
30) and (
37), respectively (Phase b,
Figure 2b).
The solution in terms of increment of nodal displacement is found iteratively solving the residual-based system of Equation (
37). Once the solution
is obtained, a correction of the nodal increment of displacement is performed:
Velocity and acceleration are corrected according to Equations (
43) and (
44), respectively. This procedure has to be repeated until convergence is reached.
Unlike in an FEM code, the nodal information is available only during the calculation of a time step: at the beginning of each time step, all the nodal information is reset, and the accumulated displacement information is deleted. The computational mesh is allowed to deform only during the iterative procedure of a time step, avoiding the typical element tangling of a standard FEM. When convergence is achieved, the position of the nodes is restored to the original one (Phase c,
Figure 2c). Before restoring the undeformed configuration of the FE grid, the solution in terms of nodal displacement, velocity and acceleration is interpolated on the material points, as:
where
is the total number of nodes per geometrical element,
are the local coordinates of material point
p and
is the shape function evaluated at the position of the material point
p, relative to node
I.
Finally, the current position of the material points is updated as:
The details of the MPM algorithm are presented in Algorithm 1.
Algorithm 1 MPM algorithm.
(we will use )
Material DATA: E, ,
Initial data on material points: , , ,
Initial data on nodes: NONE - everything is discarded in the initialization phase
OUTPUT of calculations:
INITIALIZATION PHASE - UL-FEM PHASE - CONVECTIVE PHASE
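The convective phase, which interpolates the converged nodal solution back to the material points before the grid is reset, could be sketched as (an illustrative fragment; the names and PIC-style full replacement of the particle velocity are assumptions):

```python
import numpy as np

def convective_phase(N, x_p, u_I, v_I, a_I):
    """Convective phase (illustrative): interpolate the converged nodal
    displacement, velocity and acceleration back to the particles and
    update their positions; the background grid is then reset."""
    du_p = N @ u_I          # displacement increment at each particle
    v_p = N @ v_I           # particle velocity from the nodal field
    a_p = N @ a_I           # particle acceleration from the nodal field
    return x_p + du_p, v_p, a_p
```

Because the shape functions form a partition of unity, a rigid nodal translation moves every particle by exactly the same amount.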
4.2. GMM
The GMM is a truly meshless method that can be seen as the application of the MPM idea extended to the case in which both the nodes and the material points behave as purely Lagrangian throughout the whole analysis. Thus, it is relatively easy to enforce conservation properties at the integration points, while also maintaining the history of nodal results throughout the simulation, provided that a reliable technique is chosen for the computation of the meshless shape functions. The difficulty is, hence, moved to the construction of such an effective meshless basis, which is addressed in
Section 4.2.2.
4.2.1. GMM Algorithm
The GMM algorithm is based on three principal steps (see
Figure 3). The initialization phase (
Figure 3a) is the step that most distinguishes GMM from MPM. During this phase, the connectivity of each integration point (i.e., each material point) is computed as the “cloud of nodes” centered on the material point, obtained by a search-in-radius. Such a cloud is then employed for the calculation of the shape functions. Unlike in MPM, the Newmark prediction is performed using the nodal information of the previous time step, as in FEM. Once
N and
are suitably defined, MPM and GMM essentially coincide in the following steps. This is reflected in Steps 2 (
Figure 3b) and 3 (
Figure 3c) of the algorithm being coincident with the MPM case. The details of the GMM algorithm are presented in Algorithm 2.
Algorithm 2 GMM algorithm.
Material DATA: E, ,
Initial data on material points: , ,
Initial data on nodes:
OUTPUT of calculations:
4.2.2. Calculation of GMM Shape Functions
While the computation of the shape functions is trivial for the standard MPM, thanks to the presence of a background grid (
Figure 1a), the evaluation of the shape functions in GMM is more complex. From a technical point of view, GMM is based on a conceptually simple operation: given an arbitrary position
in space (which will, in practice, coincide with the position of the material point) and a search radius
R, one may find all of the nodes
I such that
. Given such a cloud of nodes, one may then compute, at the position
, the shape functions
(together with their gradients), such that a given function
u, whose nodal value is
, can be interpolated at the position
as
(
Figure 1b).
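The search-in-radius operation just described can be sketched as follows (brute force for clarity; a production code would use a spatial search structure such as a kd-tree or bins):

```python
import numpy as np

def build_clouds(nodes, points, radius):
    """For each material point, collect the indices of the nodes falling
    within the search radius (brute-force distance check for clarity)."""
    clouds = []
    for x in points:
        d = np.linalg.norm(nodes - x, axis=1)
        clouds.append(np.flatnonzero(d <= radius))
    return clouds

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [3.0, 3.0]])
clouds = build_clouds(nodes, np.array([[0.5, 0.5]]), 1.0)
# the far-away node at (3, 3) is excluded from the cloud
```

Each resulting cloud then plays the role that the element connectivity plays in MPM: it defines which nodal values enter the interpolation at that material point.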
However, in order to construct a convergent solution, some guarantees must be provided by the shape functions. In particular, they shall comply with the Partition of Unity (PU) property, as a bare minimum, at all of the positions
at which the shape functions are evaluated. A number of shape functions exist complying with such a property [
51,
52,
53]. Among the available options, two appealing classes of meshless functions are considered in our work: the first is the so-called Moving Least Squares (MLS) method, and the second is the Local Maximum Entropy (LME) technique.
The first technique is based on the MLS approach, first introduced by Lancaster [
54] and Belytschko [
44,
52]. The MLS-approximation fulfills the reproducing conditions by construction, so no corrections are needed.
The fundamental principle of MLS approximants is a weighted least squares fitting of a target solution, sampled at a given, possibly randomly-distributed, set of points, via a function of the type:
where the coordinates
are to be understood as relative to the sampling position.
The reconstruction of a continuous function
can be obtained considering the data
to be located at points
and an arbitrary, smooth and compactly supported, weight function
, such that the
fall within the support of
W. Assuming now that the reconstructed function (
) is computed as:
the fitting to
is done by minimizing the error function
J, defined as:
where
.
This allows defining a set of approximating shape functions
N such that:
where:
with
defined as:
It can be readily verified that the shape functions reproduce exactly any polynomial up to the order used in their construction. This fact can also be used to prove compliance with the partition of unity property. Namely, if one assumes
and substitutes into Equation (
53), then:
Hence, considering the special case of a constant polynomial
or of a linear variation in
,
we obtain, respectively:
A similar reasoning also gives:
thus proving the compliance with the PU property.
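A compact implementation of linear MLS shape functions in shifted coordinates, which can be used to verify the PU and reproduction properties above numerically, could read as follows (the Wendland-type weight function and all names are our own choices, not necessarily those of the paper):

```python
import numpy as np

def mls_shape_functions(x, nodes, radius):
    """Linear MLS shape functions at position x (sketch). Shifted basis
    p = [1, dx, dy]; N_i = w_i * p_i^T M^{-1} p(0), with the moment
    matrix M = sum_i w_i p_i p_i^T. PU and linear reproduction hold
    by construction."""
    dx = nodes - x                                   # shifted coordinates
    r = np.linalg.norm(dx, axis=1) / radius
    w = np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)  # C2 weight
    P = np.hstack([np.ones((len(nodes), 1)), dx])    # linear basis rows p_i
    M = (P * w[:, None]).T @ P                       # moment matrix
    p0 = np.zeros(P.shape[1]); p0[0] = 1.0           # basis at x itself
    return w * (P @ np.linalg.solve(M, p0))
```

Summing the resulting shape functions gives one, and weighting the nodal positions with them returns the evaluation point exactly, which is the numerical counterpart of the two identities derived above.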
However, MLS shape functions are not able to guarantee the Kronecker-delta property at the nodes. This implies that two nodal shape functions may be simultaneously non-zero at a given nodal position. This has practical implications at the moment of imposing Dirichlet boundary conditions, namely in order to impose
at a given point on the Dirichlet boundary
, one must impose that
, which constitutes a classical multipoint constraint [
55].
Interestingly, the choice of different shape functions could ease this particular problem. An appealing choice is the use of LME approximants, which guarantee a weak Kronecker-delta property as long as the cloud of nodes forms a convex hull.
The LME technique is based on the evaluation of the local max-ent approximants [
56], which represents the solution that exhibits a (Pareto) compromise between competing objectives: the principle of max-ent [
57] subject to the constraints:
and the objective function interpreted as a measure of locality of the shape functions of the Delaunay triangulation:
The solution to the problem can be found minimizing
subject to the usual constraints. The optimization problem takes the form:
with
representing a non-negative locality coefficient, where
is a dimensionless parameter and
h is a measure of nodal spacing. The value of
is always chosen in a range between
, corresponding to spread-out meshfree shape functions, and four, corresponding to linear finite element basis functions. Unlike MLS approximants, the LME basis functions possess the weak Kronecker-delta property at the boundary of the convex hull of the nodes, and they are a
function of
in
. However, the computation of the LME approximation scheme is more onerous than that of MLS basis functions, as the convex optimization problem described by Equation (
61) must be solved at each evaluation point. In
Section 5, a comparison between these two procedures is performed through some benchmark tests, and an assessment in terms of computational cost, accuracy and robustness is provided.
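A minimal sketch of the LME evaluation, following the standard local max-ent scheme (a Newton iteration on the dual convex problem; all implementation details here are assumptions, not the paper's code):

```python
import numpy as np

def lme_shape_functions(x, nodes, beta, max_iter=100, tol=1e-12):
    """Local max-ent approximants (sketch): minimize log Z(lam) by Newton
    iteration, then evaluate N_a = exp(-beta*|dx_a|^2 + lam . dx_a) / Z."""
    dx = nodes - x
    lam = np.zeros(x.shape[0])
    for _ in range(max_iter):
        f = np.exp(-beta * np.sum(dx**2, axis=1) + dx @ lam)
        N = f / f.sum()                                 # PU by construction
        r = N @ dx                                      # gradient of log Z
        if np.linalg.norm(r) < tol:
            break
        J = (dx * N[:, None]).T @ dx - np.outer(r, r)   # Hessian of log Z
        lam = lam - np.linalg.solve(J, r)
    return N
```

At convergence, the vanishing gradient enforces first-order consistency, so the LME functions reproduce the evaluation point exactly, and the softmax normalization guarantees the partition of unity.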
5. Numerical Examples
In this section, three benchmark tests are considered for the comparison of the MPM and GMM formulations. Firstly, the static analysis of a 2D cantilever beam subjected to its self-weight is analyzed, and a mesh convergence study is performed. Secondly, the rolling of a rigid disk on an inclined plane is studied. Finally, a cohesive-soil column collapse is analyzed. All the numerical experiments have been performed on a PC with one Intel(R) Core(TM) i7-4790 CPU at 3.60 GHz.
5.1. 2D Cantilever Beam: Static Analysis
The static analysis of a 2D cantilever beam subjected to its self-weight under the assumption of plane strain is presented. The cantilever beam has a length
m and a square cross-section of unit side (
m) (
Figure 4). The beam is modeled with a hyperelastic material: the density is
kg/m
3; the Young’s modulus is
MPa; and the Poisson’s ratio is
. The results obtained with the MPM and GMM algorithms are compared with a standard FEM code using the same UL formulation.
A mesh convergence study is carried out adopting five different mesh sizes,
h = 0.5, 0.25, 0.125, 0.0625 and 0.01 m, respectively. Quadrilateral elements are used in FEM, MPM and GMM with four integration points per cell (in the case of MPM and GMM, the integration points coincide with the material points). In GMM, the mesh is only initially used for the creation of the material points and then deleted. Regarding the spatial search and the evaluation of the shape functions in GMM, a search radius
, dilation parameters
and
are adopted in GMM-MLS and GMM-LME, respectively. Under the assumption of a linear regime, the vertical deflection at Point A of the free edge can be evaluated analytically according to Timoshenko [
58] as:
where
g is the gravity acceleration,
the inertia of the beam section and
the reduced cross-section area due to the shear effect. However, the numerical solution is computed in the non-linear regime; therefore, the deflection evaluated with the finest mesh is taken as the benchmark solution. This value is
m and is equally reached by all the methods.
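The analytical reference just mentioned can be sketched as follows (the classical Timoshenko result for a cantilever under a uniformly distributed self-weight load; the inputs and the shear correction factor k = 5/6 are illustrative assumptions, not the paper's data):

```python
def cantilever_tip_deflection(L, E, nu, rho, g=9.81, b=1.0, h=1.0, k=5.0 / 6.0):
    """Classical Timoshenko cantilever under self-weight: bending term
    q L^4 / (8 E I) plus shear term q L^2 / (2 G k A), with q = rho g A
    the weight per unit length. All numbers here are illustrative."""
    A = b * h
    I = b * h**3 / 12.0
    G = E / (2.0 * (1.0 + nu))       # shear modulus
    q = rho * g * A                  # self-weight per unit length
    return q * L**4 / (8.0 * E * I) + q * L**2 / (2.0 * G * k * A)
```

Both terms scale linearly with 1/E, so doubling the Young's modulus exactly halves the predicted linear deflection, a quick sanity check for an implementation.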
Figure 5 and
Figure 6 compare the solutions obtained with an updated Lagrangian FEM, MPM, GMM-MLS and GMM-LME code, respectively, in terms of vertical displacement and Cauchy stress along the horizontal direction. One can observe that the results are in good agreement for all the methods.
A convergence study is performed to analyse the accuracy of MPM and GMM in comparison with the UL-FEM. The error is evaluated as:
where is the numerical solution measured at Point A (see Figure 4).
Figure 7 depicts the error evolution as a function of the inverse of the mesh size h. All the methods exhibit a quadratic rate of convergence. In particular, the UL-FEM, MPM and GMM-MLS error curves coincide. The error curve of GMM-LME maintains the quadratic rate, but is shifted slightly upwards, which makes this technique less accurate than GMM-MLS in the benchmark case studied.
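A quadratic rate corresponds to a slope of 2 in a log-log plot of error versus mesh size; a minimal sketch of how such a rate can be estimated from a mesh refinement study (with illustrative, not measured, error values):

```python
import numpy as np

# Mesh sizes from the study and hypothetical errors decaying as O(h^2)
# (illustrative values, not the ones measured in the paper).
h = np.array([0.5, 0.25, 0.125, 0.0625])
err = 0.03 * h ** 2

# Estimated convergence rate: slope of the error curve in log-log scale.
rate, _ = np.polyfit(np.log(h), np.log(err), 1)
print(f"estimated rate: {rate:.2f}")   # slope of 2 indicates quadratic convergence
```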
5.2. Rolling of a Rigid Disk on an Inclined Plane
The second benchmark test is a rigid disk rolling without slipping on an inclined plane. The geometry of the problem is depicted in Figure 8. The disk is made of a hyperelastic material: the density is kg/m³; the Young's modulus is MPa; and the Poisson's ratio is .
This test is chosen for an objective assessment of the robustness of the MPM and GMM algorithms. The rolling on the plane implies contact between the nodes belonging to the inclined plane and the nodes belonging to the disk. In a UL-FEM code, a contact algorithm would be necessary to enforce this boundary condition. On the contrary, by using either MPM or GMM, the contact is implicitly captured. The analytical acceleration a can be computed by imposing the equilibrium of momentum at the contact point P:
where g is the gravity acceleration and the angle of the inclined plane. Integrating over time, the acceleration, velocity and displacement projected on the x-axis can be obtained as functions of time:
For the study of this test case, the analytical solution of Equation (66) is used for the assessment of the absolute error obtained with MPM, GMM-MLS and GMM-LME, evaluated as:
where is the time at which the numerical result is evaluated. As mesh-based and meshless techniques are compared in a dynamic test, for a more objective comparison, the error is analyzed together with the total computational time needed to complete the simulation.
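The analytical reference kinematics can be sketched as follows, under the standard assumption of a homogeneous disk (moment of inertia I = mr²/2), which yields a = g sin θ / (1 + 1/2); the slope angle used below is illustrative:

```python
import numpy as np

g = 9.81                    # gravity acceleration [m/s^2]
theta = np.radians(30.0)    # assumed slope angle, for illustration only

# Rolling without slipping of a homogeneous disk: constant acceleration
# along the slope, a = g sin(theta) / (1 + I/(m r^2)) with I = m r^2 / 2.
a = g * np.sin(theta) / 1.5

t = np.linspace(0.0, 1.0, 11)
v = a * t                   # velocity along the slope
x = 0.5 * a * t ** 2        # travelled distance along the slope

# The absolute error against a numerical displacement x_num(t_i) would then
# be |x_num(t_i) - x(t_i)|, as in the comparison reported in Table 1.
```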
A triangular mesh with mesh size m is used for the MPM and GMM simulations. In both techniques, the same initial distribution of material points is used, which amounts to three initial particles per cell. Regarding GMM-MLS, the approximants are constructed by adopting a search radius and a dilation parameter . In GMM-LME, the basis functions are evaluated using a search radius and three values of the dilation parameter . All the numerical tests are repeated for three different time steps with .
Table 1 shows the results of the analysis, in terms of errors and computational times, performed through MPM, GMM-MLS and GMM-LME.
Regarding the absolute errors, it can be observed that, for a given computational cost, GMM is generally more accurate than MPM, thanks to its smooth basis functions, which provide a better approximation of the unknown variables. In particular, GMM-LME presents smaller errors than GMM-MLS. In all three GMM-LME cases considered (with ), the errors converge to a unique value at the same computational time, while in the case of GMM-MLS, the advantage of using higher-order approximants is lost for the smallest time step. Regarding MPM, to achieve the same order of accuracy as GMM, a higher computational time must be expected, due to a finer discretisation either in space or in time. However, it is worth highlighting that GMM is much more time consuming than MPM, showing an increase in computational time of in the case of GMM-MLS and from up to in the case of GMM-LME.
From this example, some essential conclusions can be drawn. In Section 5.1, it was observed that, in a static case, the rate of convergence is the same for all the methods under analysis, but the accuracy of GMM-MLS and MPM is better than that of GMM-LME. On the contrary, in a dynamic case involving contact, the trend is reversed: a better behavior is observed when the LME approximants are employed.
5.3. Cohesive Soil Column Collapse
The third example is the simulation of a soil column collapse. The column is modeled with a cohesive-frictional material, defined by a cohesion kPa, a friction angle , an elastic bulk modulus MPa and a density kg/m³. In the current work, a Mohr–Coulomb plastic law in finite strains is considered, and the implicit integration scheme in principal stress space presented in [59,60] is followed for its implementation.
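As a minimal illustration of the plasticity model, the Mohr–Coulomb yield function can be evaluated in principal stresses as follows; this sketch covers only the yield check, not the implicit return mapping of the cited references, and the material values are illustrative:

```python
import numpy as np

def mc_yield(sig1, sig3, c, phi):
    """Mohr-Coulomb yield function in principal stresses (tension positive,
    sig1 >= sig3): f <= 0 means elastic, f > 0 requires a plastic return.
    """
    return (sig1 - sig3) + (sig1 + sig3) * np.sin(phi) - 2.0 * c * np.cos(phi)

c = 5.0e3                  # cohesion [Pa] (illustrative value)
phi = np.radians(25.0)     # friction angle (illustrative value)

f_elastic = mc_yield(-1.0e3, -2.0e3, c, phi)    # mildly sheared state
f_plastic = mc_yield(-1.0e3, -50.0e3, c, phi)   # strongly sheared state
```

For φ = 0 the criterion reduces to the Tresca condition with shear strength c, which is a quick sanity check of the expression.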
This test has been chosen for the assessment of the robustness of MPM and GMM when the body undergoes very large deformations. The results are compared with those of [31], where a Smoothed Particle Hydrodynamics (SPH) method is applied to geotechnical problems.
The initial geometry and the boundary conditions are described in Figure 9.
Quadrilateral elements with an initial distribution of four material points per cell are used in the simulations. Two different mesh sizes are considered: m (Mesh 1) and m (Mesh 2). In GMM, the basis functions are evaluated using an initial search radius and dilation parameters in GMM-MLS and GMM-LME, respectively. In this particular case, the procedure for the evaluation of the basis functions in the MLS and LME techniques has been modified to avoid the creation of a non-convex hull of nodes, which might lead to an incorrect set of approximants. This is required because the column is subjected to extremely large deformations. While in the previous examples a constant radius was used for the definition of the cloud of nodes surrounding a material point, in the current example a variable radius is adopted to guarantee a minimum number of nodes in each connectivity. In the case of LME, as a Newton iterative procedure is used for the evaluation of the shape functions, a measure of the quality of the solution is given by the condition number of the Hessian matrix A, defined in [56]. If it exceeds a user-defined tolerance, the LME algorithm is repeated considering the old connectivity plus an additional node, chosen as the next node closest to the material point. In the case of MLS, it has been sufficient to impose a minimum of six nodes in each cloud.
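The variable-radius strategy described above can be sketched as follows; the growth factor, iteration cap and function name are assumed values chosen for illustration:

```python
import numpy as np

def variable_radius_connectivity(x, nodes, r0, n_min, growth=1.5, max_iter=20):
    """Collect the nodes supporting a material point at x.

    Starts from the initial search radius r0 and enlarges it until at least
    n_min nodes fall inside, mimicking the variable-radius strategy of the
    text (growth factor and iteration cap are assumed values).
    """
    d = np.linalg.norm(nodes - x, axis=1)   # distances point-to-nodes
    r = r0
    for _ in range(max_iter):
        idx = np.flatnonzero(d <= r)        # nodes inside current radius
        if idx.size >= n_min:
            return idx, r
        r *= growth                         # enlarge and retry
    raise RuntimeError("could not gather enough neighbours")
```

On a distorted point cloud, this guarantees the minimum connectivity size (six nodes for MLS in the text) at the cost of a locally wider, hence more expensive, support.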
In Figure 10a, a comparison of the column deformation at different representative time instants is shown. The SPH model taken from [31] predicts a higher final run-out of the column collapse, while the final configurations at time s obtained with MPM, GMM-LME and GMM-MLS are almost coincident using Mesh 1 and Mesh 2. It is worth highlighting that the GMM-MLS and MPM results of Figure 10 are always in good agreement. However, this is not the case if the evolution of the equivalent plastic strains is observed (see Figure 11 and Figure 12). In the case of GMM-LME, an improvement of the results is noted by using the finer mesh (Mesh 2), both in terms of displacements (Figure 10b) and of the equivalent plastic strain distribution (Figure 13b). Regarding MPM, it is shown that a good approximation can be obtained using both meshes.
In this example, the capability of handling history-dependent materials, such as cohesive-frictional materials, is verified for both methods. It is noted that, in MPM, large deformations can be naturally tracked without modifying the algorithm, and accurate results are obtained also with the coarser mesh. In GMM, despite the remarkable features highlighted in the previous benchmark tests, when the continuum undergoes extremely large deformations, special care should be taken in the definition of the MLS and LME approximants. In this regard, a lack of robustness of the GMM algorithm is observed, due to the impossibility of guaranteeing a correct evaluation of the shape functions during the whole deformation process without an ad hoc modification of the procedure for the definition of the connectivity. Thus, for the solution of this example, a correction of the algorithm has been implemented and verified to work properly, although an increase in the computational time is registered. The establishment of a more general procedure is left for future work.
6. Conclusions
In the present paper, two particle methods—a material point method and a Galerkin meshless method—are tested and compared to assess their capabilities in solving large displacement and large deformation problems. A variational displacement-based formulation, based on an updated Lagrangian description, is presented, and the algorithms are described in detail.
A comparison of MPM and GMM is performed through three benchmark tests, and the methods are assessed in terms of accuracy, computational time and robustness. The first example is a static cantilever beam. A convergence analysis is performed, and all the techniques show a quadratic convergence rate, in agreement with a reference FEM code. Secondly, the dynamic test of a disk rolling on an inclined plane is considered. The robustness of MPM and GMM in dealing with contact between two bodies is tested, and an analysis in terms of computational time and error is performed. We found that GMM, in dynamic cases, has a higher accuracy than MPM, despite a higher computational cost. This is because MPM employs linear basis functions, while GMM computes smooth basis functions, allowing a superior approximation of the unknown variables. As a last example, a cohesive soil column collapse is analyzed. In this case, we assess the robustness of both methods when the continuum undergoes extremely large deformations. Firstly, it is demonstrated that MPM and GMM can be easily coupled with local plastic laws. Furthermore, it is noted that MPM leads to more accurate results, and the algorithm does not need to be modified in a large deformation case. On the contrary, in GMM, a modification of the algorithm has to be introduced to avoid the formation of a non-convex hull of nodes when the connectivity is defined. Nonetheless, in spite of this modification, a discrepancy in the results is noted, whether the MLS or the LME technique is used.
In conclusion, the standard version of MPM represents a good choice to handle problems involving history-dependent materials and large deformations. Regarding GMM, the accuracy of the solution strictly depends on the basis functions chosen. If large deformations of the continuum are not involved, this method could be preferred to MPM due to its remarkable feature of obtaining accurate results in a limited computational time. However, in the finite strain regime, independent of the material to be modeled, the construction of the connectivity in the meshless method becomes more complex, and, at least to the authors' knowledge, a general methodology is still missing to properly define the correct connectivity under any deformation condition. Thus, despite the promising features of this approach, an improvement in the robustness of the GMM algorithm is needed to obtain more accurate and reliable solutions in large deformation and failure problems.