Article

A Family of Conditionally Explicit Methods for Second-Order ODEs and DAEs: Application in Multibody Dynamics

1 Department of Mechanical Engineering, Faculty of Engineering of Bilbao, University of the Basque Country, Alameda Urquijo s/n, 48013 Bilbao, Spain
2 Department of Graphic Design and Engineering Projects, Faculty of Engineering of Bilbao, University of the Basque Country, Alameda Urquijo s/n, 48013 Bilbao, Spain
3 Department of Applied Mathematics, Faculty of Engineering of Bilbao, University of the Basque Country, Alameda Urquijo s/n, 48013 Bilbao, Spain
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(18), 2862; https://doi.org/10.3390/math12182862
Submission received: 29 July 2024 / Revised: 5 September 2024 / Accepted: 12 September 2024 / Published: 14 September 2024
(This article belongs to the Section Dynamical Systems)

Abstract:
There are several common procedures used to numerically integrate second-order ordinary differential equations. The most common one is to reduce the equation’s order by duplicating the number of variables. This allows one to take advantage of the family of Runge–Kutta methods or the Adams family of multi-step methods. Another approach is the use of methods that have been developed to directly integrate an ordinary differential equation without increasing the number of variables. An important drawback when using Runge–Kutta methods is that when one tries to apply them to differential algebraic equations, they require a reduction in the index, leading to a need for stabilization methods to remove the drift. In this paper, a new family of methods for the direct integration of second-order ordinary differential equations is presented. These methods can be considered as a generalization of the central differences method. The methods are classified according to the number of derivatives they take into account (degree). They include some parameters that can be chosen to configure the equation’s behavior. Some sets of parameters were studied, and some examples belonging to structural dynamics and multibody dynamics are presented. An example of the application of the method to a differential algebraic equation is also included.

1. Introduction

The topic of this paper is the numerical integration of second-order ordinary differential equations (ODEs) in the following form:
$\ddot{x}(t) = g(\dot{x}(t), x(t), t),$
with the following set of initial conditions:
$x(t_0) = x_0, \quad \dot{x}(t_0) = \dot{x}_0.$
These equations arise in several fields of engineering [1]. Among them, in the field of mechanical engineering, there are two important problems that are described by these kinds of equations: structural dynamics and multibody dynamics [2,3,4]. Oddly enough, the usual way to deal with these equations in structural dynamics is completely different to that used in multibody dynamics. In structural dynamics, the equilibrium equation is usually solved by means of methods that directly integrate the second-order differential equation [5]. These methods are often classified into explicit and implicit methods, but this classification does not exactly comply with the rigorous explicit and implicit classification used in methods for first-order ODEs. In structural dynamics, explicit methods are those that formulate the equilibrium at t to obtain the solution at t + Δt [2,5]. This is due to the fact that these methods were initially developed for structural linear dynamics and, thus, methods that lead to implicit equations in the general case become explicit due to the particular conditions of the problem. Implicit methods usually employed in structural mechanics include the Newmark method [6], the HHT method [7] and the Wilson-θ method [8]. The most typical explicit method employed in structural dynamics is the central differences method [5], which is the core of commercial software such as ANSYS® LS-DYNA®. In the case of multibody dynamics, the usual approach has, for a long time, been the reduction of the order of the differential equation so that any method suitable for first-order differential equations can be used [3,4,9]. Usually, these come in the form of Runge–Kutta integrators. More recently, the use of structural methods has gained considerable relevance in the field, mainly using implicit methods [10,11,12,13], although there are some explicit implementations [14,15]. The main advantage of structural methods over Runge–Kutta (RK) methods is that structural methods only need one evaluation of the function per time step, while RK methods usually require more than one [16]. This advantage is also present in the Adams family of methods, but in these methods, an increase in convergence is obtained by increasing the number of previous steps involved in the resolution of the current step. This hampers the stability behavior of the method. Additionally, the use of RK methods in DAEs is usually performed by reducing the DAE index and, thus, the introduction of a stabilization technique to avoid the drift is required [3,17,18]. This stabilization method might also limit the time step because it can introduce additional stability issues [19]. This has probably been the key reason for the recent interest, because methods such as central differences do not require a reduction in the DAE index (see ref. [20]).
It is also important to mention that there are other alternatives applicable to conservative systems. Among them, one can find the Störmer–Verlet methods, including Velocity–Verlet (ref. [21,22,23]). Also, in the same conditions, one can find the Leapfrog integration method (ref. [24]), the Yoshida method (ref. [25]) and the Beeman method (ref. [26]).
The separate development of structural integrators and Runge–Kutta integrators has led to different philosophies. Since structural methods, as mentioned, have typically been developed for a particular problem, they have been developed with an application in mind. In contrast, Runge–Kutta methods have been developed with a generic approach in mind. As a consequence, for example, different approaches to analyzing stability have been used. In the case of structural dynamics, the stability analysis is focused on obtaining the maximum time step that can be used in the process [6]. In order to achieve this, the analysis takes advantage of the particularities of the problem. In the case of Runge–Kutta methods, there are several types of stability that are considered, allowing one to study the method regardless of the particular physical phenomenon that is described by the equation that needs to be integrated (see ref. [27]). This is of a broader scope, but it may be less practical.
Among the structural methods, the use of the second-order central differences method is of broad use in structural dynamics [28,29]. This is due to the high nonlinearity of the problem and the small time steps required to properly handle events such as contacts. In these methods, higher-order convergence is not usually a priority, and the second-order central differences method presents second-order convergence. In any case, a method with similar characteristics along with higher-order convergence would be of interest.
In this paper, the central differences method is extended in order to reach a configurable method (in a similar way to the Newmark or HHT methods). Furthermore, more complex (here named higher-degree) versions of the method are proposed and discussed. In order to show the behavior of the newly devised methods, some examples are also presented. The initial steps in this work were developed in the framework of the Ph.D. thesis of Dr. Haritz Uriarte [30]. The preliminary ideas behind the method presented here were introduced in [31].
The organization of this paper is as follows. First, a brief summary of the classical central differences method is presented, including some important considerations regarding its classification. Afterwards, some hints for reducing the cancellation error are presented. These hints allow one to devise a generalization of the central differences method, which will be discussed. The behavior of this generalization is explained afterwards. A further generalization that can be used to improve the convergence is then introduced along with its performance. Some examples of the resolution of ODEs and DAEs are presented. Finally, some conclusions are drawn, and future work is outlined.

2. The Central Differences Method for Structural Dynamics

Although this paper deals with equations in the more general form defined by the set of Equations (1) and (2), the central differences method was tailored to solve second-order ODEs in the following particular form:
$M\ddot{x}(t) + C\dot{x}(t) + Kx(t) = f(t),$
with the set of initial conditions given in Equation (2).
In this equation, M, C and K are the mass, damping and stiffness matrices of a structural or multibody dynamic problem, f(t) is the applied external force, and x(t) contains the positions of the Degrees of Freedom (DoF) of the problem.
Initially, the central differences method was developed with linear systems in mind, but currently, one can find implementations where M, C and K can be functions of ẋ(t), x(t) and t (see [5,14], for examples). In these conditions, the most general situation can be considered, where M = M(x, ẋ, t), C = C(x, ẋ, t) and K = K(x, ẋ, t). A consideration like this will be made here. In cases where M presents degeneracy, two possibilities can arise. If this degeneracy comes about because of redundancy in the equations and the number of equations is equal to the number of variables, the system lacks a single solution. This is out of the scope of this document. In the other case, the problem should be reformulated as a DAE. In cases of near degeneracy, again, two cases can arise. The first one is related to a considerable difference in the natural frequencies of the system, and this is a stability-related issue. The second one might be due to a problem of scale, which will not be addressed in this document.
The original formulation of the method uses the following equations:
$\dot{x}(t) = \frac{x(t+\Delta t) - x(t-\Delta t)}{2\Delta t}$
and
$\ddot{x}(t) = \frac{x(t+\Delta t) - 2x(t) + x(t-\Delta t)}{\Delta t^2},$
with Δt being the step size used in the integration. By substituting this into Equation (3), it is easy to reach the following:
$\left(\frac{1}{\Delta t^2}M + \frac{1}{2\Delta t}C\right)x(t+\Delta t) = f(t) - \left(\frac{1}{\Delta t^2}M - \frac{1}{2\Delta t}C\right)x(t-\Delta t) - \left(K - \frac{2}{\Delta t^2}M\right)x(t).$
This expression is applied step by step in order to obtain, at each step t, the value of x for the next one, x(t + Δt). Obviously, in the first step, one does not know the value of x(t₀ − Δt); instead, the value ẋ(t₀) is known. Taking into account Equations (3)–(5) for this situation, one can reach the following:
$\frac{2}{\Delta t^2}M\,x(t_0+\Delta t) = f(t_0) + \left(\frac{2}{\Delta t}M - C\right)\dot{x}(t_0) - \left(K - \frac{2}{\Delta t^2}M\right)x(t_0).$
There are also alternatives to this method for variable time steps. For simplicity, in this document, only fixed-step approaches are considered.
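For illustration, the following minimal C sketch (not taken from the authors' implementation; the function and variable names are hypothetical) advances a scalar, constant-coefficient problem m x'' + c x' + k x = f(t) with Equations (6) and (7):

/* Minimal sketch of the classical central differences scheme of Equations (6)
   and (7) for a scalar problem m*x'' + c*x' + k*x = f(t) with constant
   coefficients. Only x(t - dt) and x(t) are stored. */
double force(double t) { (void)t; return 0.0; }   /* external load (example) */

double central_differences(double m, double c, double k,
                           double x0, double v0, double dt, int nsteps)
{
    /* first step, Equation (7) */
    double xprev = x0;
    double x = (force(0.0) + (2.0 * m / dt - c) * v0
                - (k - 2.0 * m / (dt * dt)) * x0) / (2.0 * m / (dt * dt));

    for (int i = 1; i < nsteps; i++) {
        double t = i * dt;
        /* generic step, Equation (6) */
        double xnext = (force(t)
                        - (m / (dt * dt) - c / (2.0 * dt)) * xprev
                        - (k - 2.0 * m / (dt * dt)) * x)
                       / (m / (dt * dt) + c / (2.0 * dt));
        xprev = x;
        x = xnext;
    }
    return x;   /* displacement after nsteps steps */
}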
This method is quite useful for structural dynamics. It features extremely low costs and predictable stability. It is useful not only for linear problems but also for non-linear ones; this, however, might lead to an iterative scheme if any of the matrices or the force vector depend on the velocity. This is due to the fact that Equation (6) is explicit only if no dependence on ẋ appears in its right-hand side. Therefore, the method can behave as explicit or implicit. This is the reason why the term “conditionally explicit” will be used for methods showing this behavior in this document. The method is usually classified as explicit in the structural analysis field. This can be misleading. The reason behind this classification is probably the fact that, for structural linear and nonlinear dynamics where no rigid-body gyroscopic effects are considered, the matrices M, C and K and the vector f(t) do not depend on ẋ(t) and, thus, the resultant problem behaves like an explicit one. However, this is an inconvenience, because the method itself can also be applied in cases where a dependence on ẋ(t) exists [14]. In those cases, the method behaves as an implicit one. Even in that case, the fact that the implicit dependence involves ẋ(t) but not x(t) has some advantages, because a dependence on ẋ(t) is not as common as a dependence on x(t). For example, M does not depend on ẋ(t). Thus, strictly speaking, the method is implicit in the general case, but due to the usual field of application, it does not seem quite incorrect to classify it as explicit.
In structural linear dynamics, M, C and K do not usually depend on ẋ, and the method behaves as an explicit one. In structural nonlinear dynamics, they depend on x(t), and in 3D multibody dynamics, C also depends on ẋ(t). In this latter case, the method behaves as an implicit one.
As formulated in Equations (6) and (7), the method delivers the vector of the variables without the need to store derivatives. This is quite convenient for structural dynamics, where the vector of the variables is composed of nodal displacements, which, in turn, are used to obtain strains and stresses. However, this might be a problem if one wants to generalize the method to other problems, such as multibody dynamics. In this field, velocities and accelerations are of concern, and, therefore, the use of Equations (4) and (5) is necessary after the integration process. This is a major inconvenience, because these equations lead to heavy cancellation problems, which might limit the maximum achievable precision. For example, let us consider Equation (4). If the time step is reduced, the integration method will increase its precision, but the numerical difference between x(t + Δt) and x(t − Δt) will diminish. Thus, the number of significant digits in this difference will be reduced and, therefore, the number of valid digits obtained in Equation (4) will be reduced. Thus, although the integration quality will improve, the obtained velocity values will degrade. In any case, if a high degree of precision is not needed, one can successfully use the scheme defined in the above equations [14,15].
Another issue is that this formulation, as demonstrated, is only suitable for second-order equations in the form of Equation (3). Although one could write any second-order ODE in this form by using the Taylor expansion, this requires additional work.

3. A Reformulation of the Central Differences Method to Avoid Cancellation

As demonstrated before, the problem of cancellation appears because of the use of Equations (4) or (5). This is an inconvenience, for example, in multibody dynamics, because C depends on the velocity, and the use of Equation (4) might lead to a considerable loss of precision due to cancellation. On the other hand, this is not usually a concern in structural dynamics.
In order to avoid cancellation problems, one can reformulate the problem so that the result in each iteration is obtained in accelerations. The reformulation of the method can be obtained by a simple manipulation of Equations (4) and (5) and by storing in each iteration, at least, both x and ẋ (ẍ is not needed).
By adding Equation (4) multiplied by 2 Δ t and Equation (5) multiplied by Δ t 2 , one reaches the following:
$2\Delta t\,\dot{x}(t) + \Delta t^2\,\ddot{x}(t) = 2x(t+\Delta t) - 2x(t).$
Which is obviously an expression of the truncated Taylor series for the displacement in t + Δ t :
$x(t+\Delta t) = x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\ddot{x}(t).$
If the same is carried out while eliminating x(t + Δt), one reaches the expression of the truncated Taylor series at t − Δt:
$x(t-\Delta t) = x(t) - \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\ddot{x}(t).$
This means that, in fact, the use of the central differences approach is equivalent to using Equations (9) and (10).
From Equation (9), but formulated for the previous step, the following can be obtained:
$x(t) = x(t-\Delta t) + \Delta t\,\dot{x}(t-\Delta t) + \frac{\Delta t^2}{2}\ddot{x}(t-\Delta t).$
And, combining Equation (11) with (10) gives:
$\dot{x}(t) = \dot{x}(t-\Delta t) + \frac{\Delta t}{2}\ddot{x}(t-\Delta t) + \frac{\Delta t}{2}\ddot{x}(t).$
Finally, introducing Equation (11) in (9) gives:
$x(t+\Delta t) = x(t) + \Delta t\,\dot{x}(t-\Delta t) + \frac{\Delta t^2}{2}\ddot{x}(t-\Delta t) + \Delta t^2\,\ddot{x}(t).$
The use of Equations (9) and (12) leads to a central differences implementation but without cancellation problems. This is similar to using a single-step method, but in common single-step methods, one obtains x(t + Δt), ẋ(t + Δt) and ẍ(t + Δt) from x(t), ẋ(t) and ẍ(t). In this approach, instead, in each step, one obtains x(t + Δt), ẋ(t) and ẍ(t) from x(t), ẋ(t − Δt) and ẍ(t − Δt).
Rigorously speaking, it could be stated that this is a two-step method, because it uses values taken in two time instants (t and t − Δt) to obtain results in t and t + Δt. It is also interesting to mention that the central differences method also has some resemblance to an implicit Adams scheme, but including derivatives along with the variable itself.
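As a sketch of how Equations (9) and (12) are applied in practice (an illustrative C fragment, not the authors' code), consider the velocity-independent case ẍ = g(x, t), for which the scheme behaves as an explicit method:

#include <stdio.h>

/* Illustrative sketch of the reformulated central differences scheme,
   Equations (9) and (12), for x'' = g(x, t). In each step, x(t + dt), xd(t)
   and xdd(t) are obtained from x(t), xd(t - dt) and xdd(t - dt). */
static double g(double x, double t) { (void)t; return -x; }   /* example ODE */

int main(void)
{
    double dt = 1e-3;
    int nsteps = 10000;

    /* first step: at t0 the exact velocity is known, so Equation (9) is used
       directly with xd(t0) and xdd(t0) = g(x(t0), t0) */
    double x = 1.0, v0 = 0.0;
    double a_prev = g(x, 0.0);
    double v_prev = v0;
    x = x + dt * v0 + 0.5 * dt * dt * a_prev;

    for (int i = 1; i < nsteps; i++) {
        double t = i * dt;
        double a = g(x, t);                              /* xdd(t)            */
        double v = v_prev + 0.5 * dt * (a_prev + a);     /* Equation (12)     */
        x = x + dt * v + 0.5 * dt * dt * a;              /* Equation (9)      */
        v_prev = v;
        a_prev = a;
    }
    printf("x(T) = %f\n", x);   /* compare with cos(T) for the example ODE    */
    return 0;
}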

Influence of the Cancellation Problem

It is interesting at this point to show the influence of using Equations (9) and (12) (reformulated central differences) instead of Equations (4) and (5). In order to achieve this, the example of the simple planar pendulum described in the IFTOMM Multibody Benchmark will be used. In this example, one is required to solve the free oscillations of a simple pendulum composed of a massless rod of a length of 1 m and with a point mass of 1 kg at its end.
The system moves under the effects of gravity. In this example, the problem is solved by using a single-parameter modelization, as shown in Figure 1. The resolution was implemented in C by the authors, as were all the problems presented in this document. This leads to a single-DOF ODE. The differential equation to be solved is as follows:
$\ddot{\theta} = -\frac{g}{L}\cos\theta.$
The beauty of the problem lies in the fact that the forces are conservative and, thus, energy conservation is a good measurement of the error. The proposal in the IFTOMM Multibody Benchmark is to simulate 10 s while keeping the energy drift below 5 × 10⁻⁵ J. In this case, instead, the aim is to see the maximum precision that can be obtained from both approaches. In Table 1, the maximum energy drift obtained using the classical approach and the reformulated one can be seen. As shown, for large time steps, the errors coincide, but as the time step is reduced, the error in the classical central differences formulation increases when compared to the reformulated version. In this problem, this error only manifests itself in the computation of the velocities and accelerations, due to the heavy cancellation in Equations (4) and (5). But, in the general case, it can also introduce errors in the displacements if the differential equation depends on the velocity. It is interesting to see that below a particular time step, the reformulated version also reduces its precision, but this reduction does not seem to grow as fast as in the case of the classical approach. Obviously, in both cases, the computational cost is about the same.
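For reference, the error measure used in Table 1 can be sketched as follows (an illustrative C fragment; it assumes θ is measured from the horizontal x axis, consistent with Equation (14), so that the height of the point mass is L sin θ):

#include <math.h>

#define GRAV 9.81   /* m/s^2 (assumed value) */
#define MASS 1.0    /* kg                    */
#define LEN  1.0    /* m                     */

/* right-hand side of Equation (14) */
double pendulum_acc(double theta)
{
    return -(GRAV / LEN) * cos(theta);
}

/* total mechanical energy; the drift reported in Table 1 is max over t of
   |E(t) - E(0)| along the simulation */
double pendulum_energy(double theta, double theta_dot)
{
    double kinetic   = 0.5 * MASS * LEN * LEN * theta_dot * theta_dot;
    double potential = MASS * GRAV * LEN * sin(theta);
    return kinetic + potential;
}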

4. A Generalization of the Central Differences Method

Although the resolution of the cancellation problem is a justification in itself for the reformulation of the problem, it also leads to an interesting idea. Taking a look at Equations (9) and (12), one can modify them so that an alternative method can be devised. Let us consider the following:
$x(t+\Delta t) = x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\left[\alpha\,\ddot{x}(t) + \left(1-\alpha\right)\ddot{x}(t-\Delta t)\right]$
and
$\dot{x}(t) = \dot{x}(t-\Delta t) + \Delta t\left[\beta\,\ddot{x}(t) + \left(1-\beta\right)\ddot{x}(t-\Delta t)\right].$
The use of the configurable parameters α and β in Equations (15) and (16) is similar to that of the parameters employed in the Newmark method. These parameters allow one to modify the convergence and stability characteristics of the algorithm. One could argue that Equation (15) lacks coherence, because it should weight ẍ(t + Δt) and ẍ(t), but that would lead to a couple of drawbacks. First, one would need ẍ(t + Δt), which, in the approach considered here, is obtained in the following time step. Second, it would not be possible to take advantage of the parameters to modify the order of the error in each time step. On the other hand, one can reach the following:
$\ddot{x}(t+\Delta t) = \ddot{x}(t) + \Delta t\,\dddot{x}(t) + O(\Delta t^2)$
and
$\ddot{x}(t) = \ddot{x}(t-\Delta t) + \Delta t\,\dddot{x}(t-\Delta t) + O(\Delta t^2).$
Thus, the error committed in Equation (15) is, at most, O(Δt³).
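A minimal sketch of one step of the resulting third-degree method (illustrative C code, not the authors' implementation) for the velocity-independent case ẍ = g(x, t), in which the step is explicit, is given below; with α = 1 and β = 1/2 it reduces to the reformulated central differences scheme of Section 3:

/* One step of Equations (15) and (16) for x'' = g(x, t). The state stores x(t),
   xd(t - dt) and xdd(t - dt), as in Section 3. */
typedef struct { double x, v_prev, a_prev; } state3;

void step3(state3 *s, double t, double dt, double alpha, double beta,
           double (*g)(double, double))
{
    double a = g(s->x, t);                                          /* xdd(t) */
    double v = s->v_prev
             + dt * (beta * a + (1.0 - beta) * s->a_prev);          /* (16)   */
    double x = s->x + dt * v
             + 0.5 * dt * dt * (alpha * a + (1.0 - alpha) * s->a_prev); /* (15) */
    s->x = x;
    s->v_prev = v;
    s->a_prev = a;
}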

4.1. Convergence Analysis

The third-degree method has a first- or second-order convergence.
To demonstrate this, the convergence analysis of the method is performed here. We will resort to a single variable. The order of the convergence is obtained as the lowest error order in the variable and its first derivative minus one. If one considers the known values of one time step as exact and denotes the approximated values in the form of x̄, one arrives at the following equations for each time step:
$\ddot{\bar{x}}(t) = g(\dot{\bar{x}}(t), x(t), t)$
$\bar{x}(t+\Delta t) = x(t) + \Delta t\,\dot{\bar{x}}(t) + \frac{\Delta t^2}{2}\left[\alpha\,\ddot{\bar{x}}(t) + \left(1-\alpha\right)\ddot{x}(t-\Delta t)\right]$
$\dot{\bar{x}}(t) = \dot{x}(t-\Delta t) + \Delta t\left[\beta\,\ddot{\bar{x}}(t) + \left(1-\beta\right)\ddot{x}(t-\Delta t)\right]$
One can write:
$\ddot{\bar{x}}(t) = g(\dot{\bar{x}}, x, t) = g(\dot{x}, x, t) + g_{\dot{x}}(\dot{x}, x, t)\left(\dot{\bar{x}} - \dot{x}\right) + O\!\left(\left(\dot{\bar{x}} - \dot{x}\right)^2\right).$
So:
$\ddot{\bar{x}}(t) - \ddot{x}(t) = g_{\dot{x}}(\dot{x}, x, t)\left(\dot{\bar{x}} - \dot{x}\right) + O\!\left(\left(\dot{\bar{x}} - \dot{x}\right)^2\right).$
On the other hand, the precise expression for the first derivative is:
$\dot{x}(t) = \dot{x}(t-\Delta t) + \Delta t\,\ddot{x}(t-\Delta t) + \frac{\Delta t^2}{2}\dddot{x}(t-\Delta t) + O(\Delta t^3).$
Subtracting from Equation (21) gives the following:
$\dot{\bar{x}}(t) - \dot{x}(t) = \Delta t\,\beta\left[\ddot{\bar{x}}(t) - \ddot{x}(t-\Delta t)\right] - \frac{\Delta t^2}{2}\dddot{x}(t) + O(\Delta t^3).$
By substituting Equation (25) into Equation (23), it is easy to find out that the error in ẍ̄(t) contributes to Equation (25) only with terms of order O(Δt³). In these terms:
$\dot{\bar{x}}(t) - \dot{x}(t) = \Delta t\,\beta\left[\ddot{x}(t) - \ddot{x}(t-\Delta t)\right] - \frac{\Delta t^2}{2}\dddot{x}(t) + O(\Delta t^3).$
Taking into account the following:
$\ddot{x}(t) = \ddot{x}(t-\Delta t) + \Delta t\,\dddot{x}(t-\Delta t) + O(\Delta t^2).$
One reaches:
$\dot{\bar{x}}(t) - \dot{x}(t) = \Delta t^2\left(\beta - \frac{1}{2}\right)\dddot{x}(t) + O(\Delta t^3).$
Thus, the error order in the derivative of the function is 3 if β = 1/2, or 2 in the other case. Resorting now to Equation (19), the precise expression is:
$x(t+\Delta t) = x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\ddot{x}(t) + \frac{\Delta t^3}{6}\dddot{x}(t) + O(\Delta t^4).$
Subtracting from Equation (20) and taking into account the fact that the error in ẍ̄(t) is of a higher order:
$\bar{x}(t+\Delta t) - x(t+\Delta t) = \Delta t\left(\dot{\bar{x}}(t) - \dot{x}(t)\right) + \frac{\Delta t^2}{2}\left(\alpha - 1\right)\left[\ddot{x}(t) - \ddot{x}(t-\Delta t)\right] - \frac{\Delta t^3}{6}\dddot{x}(t) + O(\Delta t^4).$
One can write:
$\ddot{x}(t) = \ddot{x}(t-\Delta t) + \Delta t\,\dddot{x}(t-\Delta t) + O(\Delta t^2).$
This leads to:
$\bar{x}(t+\Delta t) - x(t+\Delta t) = \Delta t^3\left[\left(\beta - \frac{1}{2}\right) + \frac{\alpha - 1}{2} - \frac{1}{6}\right]\dddot{x}(t) + O(\Delta t^4).$
The error order in the variable is 4 if:
$\left(\beta - \frac{1}{2}\right) + \frac{\alpha - 1}{2} - \frac{1}{6} = 0.$
And it is 3 in the other case. Taking this into account, the order of the convergence is determined by the derivative of the variable, because it will always be equal to or lower than that of the variable. Thus, the convergence of the third-degree algorithm is 2 if β = 1/2 and 1 in the other case.
If β = 1/2 is chosen, one can eliminate the third-order error by using α = 4/3. Is this modification useful? If one considers the usual definition of the error order for second-order ODEs, as shown in [1], the order of the method is equal to the lower order that appears in the variable and its first derivative. In these terms, increasing the error order of the variable by using α = 4/3 will not change the overall order. But α can affect the stability.

4.2. Stability Analysis

In order to study the stability, one can formulate the equations in the following form:
$\begin{pmatrix} x(t+\Delta t)\\ \dot{x}(t)\\ \ddot{x}(t) \end{pmatrix} = A \begin{pmatrix} x(t)\\ \dot{x}(t-\Delta t)\\ \ddot{x}(t-\Delta t) \end{pmatrix} + b.$
Here, the typical stability analysis performed for structural dynamics is posed, so the aim is to obtain the required Δt for the integration to be stable. In order to simplify the problem, null damping is considered. This is quite common in structural dynamics because structural damping is always positive, and situations where a negative damping appears (self-excited vibrations) are usually to be avoided, so they are not usually integrated; furthermore, they lead to an exponential growth of the vibrations. A scalar problem is also considered (the case of vectorial problems can always be reduced to a set of scalar problems and, thus, the results are easily applicable). Thus, equations in the form of:
$\ddot{x}(t) = g(\dot{x}, x, t)$
are considered.
In the case of structural dynamics, it is easy to see that:
$\frac{d g(\dot{x}, x, t)}{d x} = -\frac{k}{m} = -\omega^2,$
with k being the dynamic stiffness of the system and m its mass. This means that ω is the natural frequency of the system. Note that this approach is general, and one could just substitute −ω² by dg(ẋ, x, t)/dx to apply the analysis to any second-order differential equation. Note that, in the general case, ω = ω(t) and, thus, the stability limit might vary along the integration process. Under these conditions, one can reach the following:
$A = \begin{pmatrix} 1 - \frac{\alpha + 2\beta}{2}\Delta t^2\omega^2 & \Delta t & \left(\frac{3}{2} - \frac{\alpha + 2\beta}{2}\right)\Delta t^2\\ -\beta\,\Delta t\,\omega^2 & 1 & \left(1-\beta\right)\Delta t\\ -\omega^2 & 0 & 0 \end{pmatrix}.$
In order for the method to be stable, the spectral radius of matrix A must be lower than one. The characteristic polynomial of A is as follows:
$\lambda^3 + \left[\left(\beta + \frac{\alpha}{2}\right)\Delta t^2\omega^2 - 2\right]\lambda^2 + \left[\left(\frac{3}{2} - \alpha - \beta\right)\Delta t^2\omega^2 + 1\right]\lambda + \frac{\alpha - 1}{2}\Delta t^2\omega^2.$
By introducing λ = −1, one obtains the stability limits. For α = 1 and β = 1/2, it is easy to find that Δt ≤ 2/ω. This is predictable, because these parameters lead to the reformulated central differences approach. The set of parameters α = 4/3 and β = 1/2 requires Δt ≤ √(12/5)·(1/ω) = 1.549/ω for the method to be stable. As will be shown later, an interesting phenomenon appears with α = 2 and β = 1/2. In this case, Δt ≤ √(4/3)·(1/ω) = 1.1547/ω.
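These limits can be checked in closed form. Assuming, as in the three cases above, that the critical root leaves the unit circle through λ = −1, introducing λ = −1 into Equation (38) gives
$-1 + \left(\beta + \frac{\alpha}{2}\right)\Delta t^2\omega^2 - 2 - \left(\frac{3}{2} - \alpha - \beta\right)\Delta t^2\omega^2 - 1 + \frac{\alpha - 1}{2}\Delta t^2\omega^2 = 2\left(\alpha + \beta - 1\right)\Delta t^2\omega^2 - 4 = 0,$
that is, $\Delta t \le \sqrt{2/(\alpha + \beta - 1)}\,(1/\omega)$, which reproduces the three limits quoted above for β = 1/2 and α = 1, 4/3 and 2.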
In order to generalize the results to systems of differential equations, one can obtain the natural frequencies of the system and use the largest one, ω_max, to obtain the limit. One can also use techniques to obtain a bound on ω_max to approximate this limit, as is usually performed in finite-element analysis when applying the central differences method or the conditionally stable Newmark configurations.

4.3. Behavior of the Method

In order to check the method's performance, a couple of simple examples will be shown. The first example is the simple pendulum presented before, while the second is devised to verify the stability behavior. Three parameter sets will be considered. All of them share β = 1/2, while α was chosen to take values of 1 (reformulated central differences method), 4/3 (theoretical best) and 2. Obviously, α = 1 behaves like the reformulated central differences method shown before (Table 2).
The results come with a surprise. The use of α = 4/3 delivers a moderate improvement, as expected, but the use of α = 2 leads to an unexpected and considerable improvement in the results. However, the order of the convergence does not seem to change. The use of values of β different from 1/2 does not lead to improvements in precision or stability, not even at the cost of the other.
Another interesting phenomenon is that the saturation of the floating-point precision behaves the same in all methods. This means that for quite small values of Δ t , the floating-point precision is unable to lead to better results. This is not quite unexpected, because there is no reason for this phenomenon to depend on the configuration parameters.
The next example is a simple one-degree-of-freedom (DOF) vibration model. The reason behind the use of this problem is the fact that the system can easily be tuned so that the natural frequency and, thus, the stiffness of the problem are different. Furthermore, it is a classical problem in linear structural dynamics, and as such it covers a different scope from that of the pendulum example (more related to multibody dynamics) while keeping its simplicity (Figure 2).
The idea is that if f(t) is harmonic with a very low frequency when compared to ω, one could numerically solve for the particular solution of the response with a noticeably large Δt, but stability will compromise this. In the example, the following is used:
$f(t) = \sin\left(\bar{\omega}\,t\right).$
The considered frequencies are ω̄ = 1 and ω̄ = 0.01. In order for the transitory to be as low as possible, the initial values are ẋ(0) = ω̄/(ω² − ω̄²) and x(0) = 0. Taking into account the force frequency, a good value of Δt to solve the problem could be computed in the form of Δt = 2π/(10 ω̄) in order to obtain 10 result points for each load cycle. Nevertheless, limitations come in the form of stability. In Table 3, one can find the theoretical limits for the stability. In Figure 3, one can see the spectral radius of the stability matrix for the indicated parameter sets. The algorithm is stable provided that the spectral radius is equal to or lower than one. In terms of stability, the central differences method is better than the other presented configurations. In Figure 4, the spectral radius for different typical methods is shown, including the central differences method. As one can see, being unconditionally stable, HHT is obviously the best in terms of stability. RK4 behaves quite well, but at the cost of more evaluations of the error function.
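The initial values above follow from the steady-state (particular) solution. Assuming unit mass, the equation of motion is ẍ + ω²x = sin(ω̄t), whose particular solution is
$x_p(t) = \frac{\sin\left(\bar{\omega} t\right)}{\omega^2 - \bar{\omega}^2}, \qquad x_p(0) = 0, \qquad \dot{x}_p(0) = \frac{\bar{\omega}}{\omega^2 - \bar{\omega}^2},$
so starting from these values removes, in the continuous problem, the transitory part of the response.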
Thus, the problem was tested with all of these parameters for values of the time step of 2.1, 2, 1.54 and 1.15. This allows one to check if the theoretical stability values match the results.
As expected, the use of Δt = 2.1 led to instability in all of the cases. The use of Δt = 2.0 led to stable results only in the case of α = 1 and β = 0.5 (see Figure 5 and Figure 6). The use of higher values of α led to a behavior similar to that shown in Figure 6. A step size of Δt = 1.54 allowed for the use of α = 4/3 but was still unstable for α = 2. As predicted, all the cases were stable with Δt = 1.15, with similar results. This is a clear case of a stability-limited problem, where an implicit and unconditionally stable method would have been a better choice.

5. Higher-Degree Methods

An explanation of the term higher-degree method is due here. In this paper, the term degree is related to the number of derivatives (including the variable itself) taken into account in the integration process. Thus, the methods derived from Equations (19)–(21) would be named third-degree methods. A similar approach was presented in reference [32]. In that case, it was applied to an explicit method.
One can guess that it is possible to generalize the ideas stated before to devise an even-higher-order method. In order to achieve this, one can formulate the following equations:
$x(t+\Delta t) = x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\ddot{x}(t) + \frac{\Delta t^3}{6}\left[\alpha\,\dddot{x}(t) + \left(1-\alpha\right)\dddot{x}(t-\Delta t)\right],$
$\dot{x}(t) = \dot{x}(t-\Delta t) + \Delta t\,\ddot{x}(t-\Delta t) + \frac{\Delta t^2}{2}\left[\left(1-\beta\right)\dddot{x}(t-\Delta t) + \beta\,\dddot{x}(t)\right],$
$\ddot{x}(t) = \ddot{x}(t-\Delta t) + \Delta t\left[\left(1-\gamma\right)\dddot{x}(t-\Delta t) + \gamma\,\dddot{x}(t)\right],$
and
$\ddot{x}(t) = g(\dot{x}(t), x(t), t).$
This would constitute a fourth-degree method. One could also use an equation in the following form:
$\dddot{x}(t) = \dot{g}(\ddot{x}(t), \dot{x}(t), x(t), t).$
This would remove the need for Equation (42), but this has some drawbacks. First, one has either to analytically or numerically derive Equation (43). If this is carried out numerically, this is equivalent to Equation (42) with γ = 0 . Second, one cannot take advantage of γ to reduce the error or improve the stability.
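A minimal sketch of one step of the fourth-degree method (illustrative C code, not the authors' implementation) for the velocity-independent case ẍ = g(x, t) is given below. The jerk at t is recovered by solving Equation (42), and both the previous acceleration and the previous jerk are stored between steps (with, e.g., the zero initial jerk used later in Section 5.2 as the start-up choice):

/* One step of Equations (40)-(43) for x'' = g(x, t). */
typedef struct { double x, v_prev, a_prev, j_prev; } state4;

void step4(state4 *s, double t, double dt,
           double alpha, double beta, double gamma,
           double (*g)(double, double))
{
    double a = g(s->x, t);                                         /* Eq. (43) */
    double j = (a - s->a_prev) / (gamma * dt)
             - (1.0 - gamma) / gamma * s->j_prev;                  /* from (42) */
    double v = s->v_prev + dt * s->a_prev
             + 0.5 * dt * dt * ((1.0 - beta) * s->j_prev + beta * j);   /* (41) */
    double x = s->x + dt * v + 0.5 * dt * dt * a
             + dt * dt * dt / 6.0
               * (alpha * j + (1.0 - alpha) * s->j_prev);               /* (40) */
    s->x = x;  s->v_prev = v;  s->a_prev = a;  s->j_prev = j;
}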

5.1. Convergence of the Fourth-Degree Method

The fourth-degree method presents a second- or third-order convergence.
In order to analyze the convergence, one can assume that the previous results are obtained without error:
$\bar{x}(t+\Delta t) = x(t) + \Delta t\,\dot{\bar{x}}(t) + \frac{\Delta t^2}{2}\ddot{\bar{x}}(t) + \frac{\Delta t^3}{6}\left[\left(1-\alpha\right)\dddot{x}(t-\Delta t) + \alpha\,\dddot{\bar{x}}(t)\right],$
$\dot{\bar{x}}(t) = \dot{x}(t-\Delta t) + \Delta t\,\ddot{x}(t-\Delta t) + \frac{\Delta t^2}{2}\left[\left(1-\beta\right)\dddot{x}(t-\Delta t) + \beta\,\dddot{\bar{x}}(t)\right],$
$\ddot{\bar{x}}(t) = \ddot{x}(t-\Delta t) + \Delta t\left[\left(1-\gamma\right)\dddot{x}(t-\Delta t) + \gamma\,\dddot{\bar{x}}(t)\right],$
$\ddot{\bar{x}}(t) = g(\dot{\bar{x}}(t), x(t), t).$
The precise equation for accelerations is as follows:
$\ddot{x}(t) = \ddot{x}(t-\Delta t) + \Delta t\,\dddot{x}(t-\Delta t) + \frac{\Delta t^2}{2}x^{(4)}(t-\Delta t) + O(\Delta t^3).$
By subtracting Equations (47) and (49) and assuming that the error in ẍ̄(t) is negligible (this can be demonstrated in a similar manner to that of the third-degree method), the following can be obtained:
$0 = \Delta t\left[-\gamma\,\dddot{x}(t-\Delta t) + \gamma\,\dddot{\bar{x}}(t)\right] - \frac{\Delta t^2}{2}x^{(4)}(t-\Delta t) + O(\Delta t^3).$
One can state:
$\dddot{x}(t) = \dddot{x}(t-\Delta t) + \Delta t\,x^{(4)}(t-\Delta t) + O(\Delta t^2).$
Thus:
$0 = \Delta t\left[-\gamma\,\dddot{x}(t) + \gamma\,\Delta t\,x^{(4)}(t-\Delta t) + \gamma\,\dddot{\bar{x}}(t)\right] - \frac{\Delta t^2}{2}x^{(4)}(t-\Delta t) + O(\Delta t^3).$
And:
$\dddot{\bar{x}}(t) - \dddot{x}(t) = \Delta t\left(\frac{1}{2\gamma} - 1\right)x^{(4)}(t-\Delta t) + O(\Delta t^2).$
If one is to eliminate the first-order error in this equation, a value of γ = 1/2 should be used, but this is of little interest. As for the velocity equations, the precise expression is:
$\dot{x}(t) = \dot{x}(t-\Delta t) + \Delta t\,\ddot{x}(t-\Delta t) + \frac{\Delta t^2}{2}\dddot{x}(t-\Delta t) + \frac{\Delta t^3}{6}x^{(4)}(t-\Delta t) + O(\Delta t^4).$
By subtracting Equation (54) from Equation (46):
$\dot{\bar{x}}(t) - \dot{x}(t) = -\beta\frac{\Delta t^2}{2}\dddot{x}(t-\Delta t) + \beta\frac{\Delta t^2}{2}\dddot{\bar{x}}(t) - \frac{\Delta t^3}{6}x^{(4)}(t-\Delta t) + O(\Delta t^4).$
One can write:
$\dddot{x}(t) = \dddot{x}(t-\Delta t) + \Delta t\,x^{(4)}(t-\Delta t) + O(\Delta t^2).$
Leading to:
$\dot{\bar{x}}(t) - \dot{x}(t) = \beta\frac{\Delta t^2}{2}\left(\dddot{\bar{x}}(t) - \dddot{x}(t)\right) + \Delta t^3\left(\frac{\beta}{2} - \frac{1}{6}\right)x^{(4)}(t-\Delta t) + O(\Delta t^4).$
By incorporating Equation (53), the following can be obtained:
$\dot{\bar{x}}(t) - \dot{x}(t) = \Delta t^3\left(\frac{\beta}{4\gamma} - \frac{1}{6}\right)x^{(4)}(t-\Delta t) + O(\Delta t^4).$
This demonstrates that the local error in ẋ(t) is, at most, of the order Δt³. If one is to cancel the error of order Δt³, β = 2γ/3 is needed.
The final issue is in the values of the function. One can write:
$x(t+\Delta t) = x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\ddot{x}(t) + \frac{\Delta t^3}{6}\dddot{x}(t) + \frac{\Delta t^4}{24}x^{(4)}(t-\Delta t) + O(\Delta t^5).$
And thus:
$x(t+\Delta t) - \bar{x}(t+\Delta t) = \Delta t\left(\dot{\bar{x}}(t) - \dot{x}(t)\right) + \frac{\Delta t^3}{6}\left[\dddot{x}(t) - \alpha\,\dddot{\bar{x}}(t) - \left(1-\alpha\right)\dddot{x}(t-\Delta t)\right] + \frac{\Delta t^4}{24}x^{(4)}(t-\Delta t) + O(\Delta t^5).$
By introducing:
$\dddot{x}(t) = \dddot{x}(t-\Delta t) + \Delta t\,x^{(4)}(t-\Delta t) + O(\Delta t^2),$
One reaches:
$x(t+\Delta t) - \bar{x}(t+\Delta t) = \Delta t\left(\dot{\bar{x}}(t) - \dot{x}(t)\right) + \frac{\Delta t^3}{6}\left[\dddot{x}(t) - \alpha\,\dddot{\bar{x}}(t) - \left(1-\alpha\right)\left(\dddot{x}(t) - \Delta t\,x^{(4)}(t-\Delta t)\right)\right] + \frac{\Delta t^4}{24}x^{(4)}(t-\Delta t) + O(\Delta t^5).$
And:
$x(t+\Delta t) - \bar{x}(t+\Delta t) = \Delta t\left(\dot{\bar{x}}(t) - \dot{x}(t)\right) - \frac{\Delta t^3}{6}\alpha\left(\dddot{\bar{x}}(t) - \dddot{x}(t)\right) + \Delta t^4\left(\frac{1}{24} + \frac{1-\alpha}{6}\right)x^{(4)}(t-\Delta t) + O(\Delta t^5).$
Using Equation (53) gives:
$x(t+\Delta t) - \bar{x}(t+\Delta t) = \Delta t\left(\dot{\bar{x}}(t) - \dot{x}(t)\right) + \Delta t^4\left[\frac{\alpha}{6}\frac{\gamma - \frac{1}{2}}{\gamma} + \frac{1}{24} + \frac{1-\alpha}{6}\right]x^{(4)}(t-\Delta t) + O(\Delta t^5).$
And introducing Equation (58), the final expression is:
$x(t+\Delta t) - \bar{x}(t+\Delta t) = \Delta t^4\left(\frac{\beta}{4\gamma} - \frac{\alpha}{12\gamma} + \frac{1}{24}\right)x^{(4)}(t-\Delta t) + O(\Delta t^5).$
Thus, the fourth-degree method is either third-order (if β = 2γ/3) or second-order in the other case.
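To make the parameter choices of the next subsection explicit: cancelling the Δt³ term in Equation (58) requires β = 2γ/3, and, once that holds, cancelling the Δt⁴ term in Equation (65) requires
$\frac{\beta}{4\gamma} - \frac{\alpha}{12\gamma} + \frac{1}{24} = 0.$
With γ = 1/2 this gives β = 1/3 and α = 5/4, which is the convergence-optimal set studied below; the stable sets of Section 5.2 keep β = 1/3 and γ = 1/2 but give up the exact cancellation of the Δt⁴ term by lowering α.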

5.2. Stability of the Fourth-Degree Method

Let us now focus on the stability of the method. An equation in the following form is needed:
$\begin{pmatrix} x(t+\Delta t)\\ \dot{x}(t)\\ \ddot{x}(t)\\ \dddot{x}(t) \end{pmatrix} = A \begin{pmatrix} x(t)\\ \dot{x}(t-\Delta t)\\ \ddot{x}(t-\Delta t)\\ \dddot{x}(t-\Delta t) \end{pmatrix} + b.$
If one rearranges Equations (40)–(43), the following can be obtained:
$A = \begin{pmatrix} 1 - \frac{\alpha + 3\beta + 3\gamma}{6\gamma}\Delta t^2\omega^2 & \Delta t & \left(1 - \frac{\alpha + 3\beta}{6\gamma}\right)\Delta t^2 & \left(\frac{2}{3} - \frac{\alpha + 3\beta}{6\gamma}\right)\Delta t^3\\ -\frac{\beta}{2\gamma}\Delta t\,\omega^2 & 1 & \left(1 - \frac{\beta}{2\gamma}\right)\Delta t & \left(\frac{1}{2} - \frac{\beta}{2\gamma}\right)\Delta t^2\\ -\omega^2 & 0 & 0 & 0\\ -\frac{\omega^2}{\gamma\,\Delta t} & 0 & -\frac{1}{\gamma\,\Delta t} & 1 - \frac{1}{\gamma} \end{pmatrix}.$
The problem when using α = 5/4, β = 1/3 and γ = 1/2 is that the system is always unstable, no matter how small Δt is. Oddly enough, with these parameters, the spectral radius tends asymptotically to 1 as Δt decreases. This means that for small values of Δt and if the total time of the integration is not large, these parameters will lead to acceptable results. Unfortunately, extreme care should be taken in this case. A good choice for the parameters seems to be α = 1/4, β = 1/3 and γ = 1/2. In these conditions, one can obtain stable behavior provided that Δt < 1.264911/ω, with quite good results. Surprisingly enough, as one increases α, the results and stability improve, reaching a maximum at α = 3/4. In this case, Δt < 1.7310020041/ω. This is nearly as good as the central differences method in terms of stability but with an improved order of convergence. Unfortunately, higher values of α lead to instability, provided that β = 1/3 and γ = 1/2. In Table 4, one can see the results provided by these parameter sets in the simple pendulum problem. Since the jerk was required in the initial step, it was introduced as x⃛(0) = 0.
The results obtained using α = 5/4 are impressive. This configuration more than doubled the number of significant digits when compared to the classical central differences approach. As stated before, these results come with a considerable handicap: the method is unstable, as one can see in Figure 7. On the other hand, the results obtained with α = 3/4 are of considerable interest. While it has similar stability limits, it outperformed the classical central differences approach by a considerable margin, and it also outperformed the best parameter set obtained in the third-degree formulation. In Figure 7, one can see that the stability region is similar to (slightly better than) that of the third-order Runge–Kutta method, which exhibits the same convergence but requires three evaluations of the error function per step.
An interesting phenomenon can be seen for very small time steps. In these cases, the error starts to grow in the same way as happened in the original central differences formulation. This happens because when using Equations (41)–(43) to obtain x⃛(t), one reaches:
$\dddot{x}(t) = \frac{\ddot{x}(t) - \ddot{x}(t-\Delta t)}{\gamma\,\Delta t} - \frac{1-\gamma}{\gamma}\dddot{x}(t-\Delta t).$
This leads to cancellation for small time steps. It is also interesting to show the behavior of the method with α = 5/4. As stated before, the method is unstable, because the spectral radius is larger than one no matter the value of Δt used, but it also decreases as Δt decreases. In order to check how it translates to real-life problems, a one-DOF model similar to that used to check the stability of the third-degree methods was devised. In this case, the parameters ω = 1 and ω̄ = 10 were used. The total integration time was 1000 s (Figure 8).
Surprisingly enough, the integrator was capable of performing 1000 s of integration with a time step of 0.01 with no signs of instability. Instability only became noticeable at about 5000 s of integration. A time step of 0.001 showed instability at about 50,000 s of integration. In Table 5, one can see the values of the spectral radii and the integration times where the instability is clearly visible. Obviously, the latter are approximate and should not be used as a reference for the total integration time without further analysis, because they might considerably change from problem to problem.
The question arises quickly as to whether it is possible to introduce a small perturbation to the parameters so that one can stabilize the method. The answer is yes. One can find that when using γ = 1/2 + 0.0005, the spectral radii are kept (at least numerically) equal to or below 1 for values of Δt·ω < 0.001. Furthermore, the use of β = 1/3 + 0.01 along with γ = 1/2 + 0.0005 leads to the same situation for values of Δt·ω < 0.07. The problem is that these modifications, although they might look quite small, come at the cost of precision, and they render the method with a precision similar to that of α = 1/4 or even worse.

5.3. Fifth-Degree Method

One may be tempted to extend these ideas to arrive at fifth-degree methods, in the form of the following equations:
$x(t+\Delta t) = x(t) + \Delta t\,\dot{x}(t) + \frac{\Delta t^2}{2}\ddot{x}(t) + \frac{\Delta t^3}{6}\dddot{x}(t) + \frac{\Delta t^4}{24}\left[\left(1-\alpha\right)x^{(4)}(t-\Delta t) + \alpha\,x^{(4)}(t)\right],$
$\dot{x}(t) = \dot{x}(t-\Delta t) + \Delta t\,\ddot{x}(t-\Delta t) + \frac{\Delta t^2}{2}\dddot{x}(t-\Delta t) + \frac{\Delta t^3}{6}\left[\left(1-\beta\right)x^{(4)}(t-\Delta t) + \beta\,x^{(4)}(t)\right],$
$\ddot{x}(t) = \ddot{x}(t-\Delta t) + \Delta t\,\dddot{x}(t-\Delta t) + \frac{\Delta t^2}{2}\left[\left(1-\gamma\right)x^{(4)}(t-\Delta t) + \gamma\,x^{(4)}(t)\right],$
$\dddot{x}(t) = \dddot{x}(t-\Delta t) + \Delta t\left[\left(1-\zeta\right)x^{(4)}(t-\Delta t) + \zeta\,x^{(4)}(t)\right]$
and
$\ddot{x}(t) = g(\dot{x}(t), x(t), t).$
The matrix that determines the stability in this case, for a non-damped system, is:
$A = \begin{pmatrix} 1 - \frac{\alpha + 4\beta + 6\gamma + 4\zeta}{12\gamma}\Delta t^2\omega^2 & \Delta t & \left(1 - \frac{\alpha + 4\beta + 4\zeta}{12\gamma}\right)\Delta t^2 & \left(\frac{2}{3} - \frac{\alpha + 4\beta + 4\zeta}{12\gamma}\right)\Delta t^3 & \left(\frac{3}{8} - \frac{\alpha + 4\beta + 4\zeta}{24\gamma}\right)\Delta t^4\\ -\frac{\beta}{3\gamma}\Delta t\,\omega^2 & 1 & \left(1 - \frac{\beta}{3\gamma}\right)\Delta t & \left(\frac{1}{2} - \frac{\beta}{3\gamma}\right)\Delta t^2 & \left(\frac{1}{6} - \frac{\beta}{6\gamma}\right)\Delta t^3\\ -\omega^2 & 0 & 0 & 0 & 0\\ -\frac{2\zeta}{\gamma}\frac{\omega^2}{\Delta t} & 0 & -\frac{2\zeta}{\gamma}\frac{1}{\Delta t} & 1 - \frac{2\zeta}{\gamma} & \left(1 - \frac{\zeta}{\gamma}\right)\Delta t\\ -\frac{2}{\gamma}\frac{\omega^2}{\Delta t^2} & 0 & -\frac{2}{\gamma}\frac{1}{\Delta t^2} & -\frac{2}{\gamma}\frac{1}{\Delta t} & 1 - \frac{1}{\gamma} \end{pmatrix}.$
In this case, it is more difficult to obtain parameters that allow for stability. It is easy to guess that the optimal case to reduce the error as much as possible is that where ζ = 1/2, γ = 1/3, β = 1/4 and α = 6/5. Unfortunately, this configuration is unstable. The use of the parameters ζ = γ = β = 1 and α = 4/5 leads to a conditionally stable method.
In Figure 9, one can see that the stability area of the method is quite reduced when compared to RK4, but one has to take into account the number of evaluations of the equation to integrate when using RK4. The stability condition for this set is, approximately, Δt ≤ 0.6/ω. This is a quite competitive situation for this method, as it allows for third-order convergence along with a very small penalty in stability when compared to the central differences method. When this method is applied to the simple pendulum problem, it leads to the results presented in Table 6.
One can see how the method delivers similar results to that of the unstable configuration of the fourth-degree method with best third-order convergence, but this time, the method is conditionally stable. It is also noticeable how cancellation problems lead to the decrease in precision for time steps lower than 2 × 10−4, where the error reaches 4.32751 × 10−13, in a similar way to what happened in the other methods.
It is interesting to compare these results to those in the IFTOMM multibody benchmark. In this benchmark, the best result posted (posted by Francisco González) was programmed in C++ using the trapezoidal rule and run on an Intel Core i7-4790 CPU @ 3.6 GHz, which should exhibit similar performance to the one used in all of the tests presented here (AMD Ryzen 2600 @ 3.4 GHz). In this case, the total execution time was 0.0254 s, with an accuracy of 0.917 × 10⁻⁵. In the algorithm presented here, an accuracy of 6.71 × 10⁻¹¹ was reached in 4 × 10⁻⁴ s. This is reasonable due to the fact that the trapezoidal rule is of a second-order accuracy. There is also another result of interest in the benchmark. Using Simbody, Luca Tagliapietra posted an accuracy of 4 × 10⁻¹¹ with a total time of 0.637 s. Here, an error of 1.97 × 10⁻¹¹ was obtained in 0.004 s.

6. Spinning Top Example

As stated before, the second-order central differences method and the method presented here are conditionally explicit. This means that they are explicit provided that the equation to be integrated can be represented in the form of Equation (77). In any case, it is interesting to obtain an insight into the performance of the method on problems in the more general form of Equation (1). The considered problem is a spinning top (Figure 10). The problem is modeled in Euler angles (ZXZ), and it is built on minimal coordinates, so the problem is reduced to an ODE.
In the example, the spinning top is rotating about its axis with a speed of 4π rad/s, and the angle with respect to the z axis is π/6 rad. The moments of inertia about the principal axes, measured at the origin of the global coordinate system, are I_xx = I_yy = 5 × 10⁻⁵ kg·m² and I_zz = 2 × 10⁻⁴ kg·m². The total mass of the spinning top is m = 2 × 10⁻² kg. The distance from the base of the spinning top to its center of gravity is 0.05 m, and gravity is set to 9.81 m/s².
The total simulation time is 10 s. The ODE to be solved is, thus, the Euler equation:
$\frac{d\omega}{d\dot{o}}\,\ddot{o} = R(o)\,\bar{I}_g^{-1}R(o)^T\left[\tau - \omega \times \left(R(o)\,\bar{I}_g\,R(o)^T\omega\right)\right] - \dot{\omega}_0(o, \dot{o}, 0),$
where ω is the angular velocity vector, o is the vector of Euler angles, R(o) is the rotation matrix, Ī_g is the diagonal tensor of inertia, ω̇₀ is the angular acceleration of the spinning top when ö = 0 and τ is the moment vector introduced by the gravitational force. It holds that:
$\tau = l \times m g = \left(R(o)\,l_l\right) \times m g,$
where l_l is the vector of the position of the center of gravity of the spinning top in the local frame of reference, which is $l_l = \left(0 \;\; 0 \;\; 0.05\right)^T$.
In order to solve the problem with the method presented here, one needs to iteratively approach Equation (75). This is the main disadvantage of these methods when dealing with problems in the form of Equation (1) instead of Equation (79). The linearized form of Equation (75) can be written in the following form:
$\frac{d\omega}{d\dot{o}}\,\ddot{o} + C\,\dot{o} = R(o)\,\bar{I}_g^{-1}R(o)^T\left[\tau - \omega_i \times \left(R(o)\,\bar{I}_g\,R(o)^T\omega_i\right)\right] - \dot{\omega}_0(o, \dot{o}, 0) + C\,\dot{o}_i,$
with:
$C = R(o)\,\bar{I}_g^{-1}R(o)^T\left[\omega_i \times \left(R(o)\,\bar{I}_g\,R(o)^T\frac{d\omega}{d\dot{o}}\right) - \left(R(o)\,\bar{I}_g\,R(o)^T\omega_i\right) \times \frac{d\omega}{d\dot{o}}\right] + \frac{d\dot{\omega}_0}{d\dot{o}}.$
The problem was solved using the fifth-degree method presented here. The computer’s CPU was an AMD Ryzen 2600. The clock speed was fixed at 3.4 GHz. For all the considered time steps, the times were averaged on 10 runs. Error was measured as the maximum fluctuation of the total mechanical energy in the simulation. The employed parameters were α = 1.62 , β = 1.03522 , γ = 1.0291293 and ζ = 1.016 . The results are as shown in Table 7.
As presented before, this is an example that is ill-suited for the method presented here. But, in any case, it handled the problem in a quite reasonable manner. As can be seen in the results (Table 7), the maximum precision was limited to about 1 × 10⁻¹³. If one plots the trajectory of the center of gravity of the spinning top as obtained by the algorithm, the results are as one would expect (see Figure 11). The gyroscopic effect is what keeps the spinning top from falling down.

7. Comparison with Runge–Kutta Methods

When speaking about ODEs, Runge–Kutta methods are among the most simple and versatile of the methods that one can find. The main disadvantage of Runge–Kutta methods is the fact that when they are applied to differential algebraic equations (DAEs), they require a reduction in the index of the DAE to be applied [17]. This leads to the drift phenomenon, which has to be dealt with using stabilization methods such as the Baumgarte [18] or projection methods [33], which are not exempt from issues [19]. One can also resort to the use of coordinate partitioning methods [3,34,35] or, whenever possible, the reformulation of the problem so that it can lead to an ODE (the use of minimal coordinate formulations [36]). Unfortunately, the last two alternatives are not exempt from problems.
On the other hand, the use of conditionally explicit methods such as the central differences method has shown that their application to DAEs does not incur additional stability issues apart from those that also appear in ODEs [10,14,15]. The methods proposed here are based on similar premises to those present in the central differences method and, thus, should present similar behavior. This is one advantage that they might present.
In order to carry out a comparison, the spinning top problem was programmed using a third-order Runge–Kutta method. The third-order method was selected to keep the same theoretical order of the integrator. One must take into account that the Runge–Kutta approach is explicit, while the conditionally explicit method is, in this case, implicit. The obtained results, using the same computer and under the same conditions used for the conditionally explicit method, are presented in Table 8.
The first thing that comes as a surprise is that the time results for the method presented here were quite competitive for the smaller time steps. This is due to the fact that for the smaller time steps, the conditionally explicit method has to perform a small number of iterations. As for the precision, the conditionally explicit method showed a slightly better convergence, apparently at the cost of an earlier precision limitation. Taking into account that the conditionally explicit method is implicit for this case, the results were quite competitive, but one should make no mistake: the conditionally explicit method requires derivatives, which were analytically obtained for this case. This requires some additional work that the explicit RK3 method does not need. This is a disadvantage that does not appear when dealing with ODEs, where the conditionally explicit method behaves as an explicit one. In any case, one can see that the conditionally explicit method did not lag far behind the Runge–Kutta approach. One also has to take into account the fact that Runge–Kutta methods require several evaluations of the error function per step, which severely increases the cost. The method presented here also might require more than one resolution of the system of equations, but only when the equation to integrate depends on ẋ.

8. Application to DAEs

Probably the most interesting potential use of the algorithm lies in its application to DAEs. This is because, as happens when using the central differences method, the method needs no index reduction, thus avoiding the drift without any correction. In order to apply the method to DAEs, Equation (13) can be used, so one can introduce restrictions in terms of the variable but expressed in accelerations. This will lead to a vector x t + Δ t verifying the coordinate restrictions. For example, let us consider a restriction in the following form:
$q(x) = 0,$
which is a restriction in terms of displacements. By introducing Equation (13) into Equation (79), one expresses this equation in terms of ẍ(t) while enforcing Equation (79).
In order to compare both methods, the same pendulum problem stated before is solved in this section, but, in this case, the equilibrium equations are formulated in the center of gravity of the pendulum, which means that the R joint will require two restrictions. The coordinate system is presented in Figure 12. The pendulum starts from a horizontal position, i.e., θ = 90°, and it is released. Thus, this is a problem with energy conservation.
The system of equations to be solved is, thus, composed of:
$\begin{pmatrix} m & 0 & 0\\ 0 & m & 0\\ 0 & 0 & I_{zz} \end{pmatrix}\begin{pmatrix} \ddot{x}\\ \ddot{y}\\ \ddot{\theta} \end{pmatrix} = \begin{pmatrix} 0\\ -mg\\ 0 \end{pmatrix} - \begin{pmatrix} 1 & 0\\ 0 & 1\\ -L\cos\theta & -L\sin\theta \end{pmatrix}\begin{pmatrix} \lambda_1\\ \lambda_2 \end{pmatrix},$
$x - L\sin\theta = 0,$
$y + L\cos\theta = 0.$
The reduction in the index of this DAE system (required for the RK3 method) leads to:
$\begin{pmatrix} m & 0 & 0 & 1 & 0\\ 0 & m & 0 & 0 & 1\\ 0 & 0 & I_{zz} & -L\cos\theta & -L\sin\theta\\ 1 & 0 & -L\cos\theta & 0 & 0\\ 0 & 1 & -L\sin\theta & 0 & 0 \end{pmatrix}\begin{pmatrix} \ddot{x}\\ \ddot{y}\\ \ddot{\theta}\\ \lambda_1\\ \lambda_2 \end{pmatrix} = \begin{pmatrix} 0\\ -mg\\ 0\\ -\dot{\theta}^2 L\sin\theta\\ \dot{\theta}^2 L\cos\theta \end{pmatrix}.$
If one solves the system without stabilization, one obtains the results presented in Table 9.
Here, the drift was measured as the assembly error; this is the squared length of the rod as defined by the pendulum coordinates minus the squared original length, L² − L₀².
In order to remove the drift, a simple coordinate projection method was implemented. The results are indicated in Table 10.
In order to solve the problem using one of the methods formulated here, a linearization of the restrictions is needed. One can write the following for the equilibrium equation:
$\begin{pmatrix} m & 0 & 0 & 1 & 0\\ 0 & m & 0 & 0 & 1\\ 0 & 0 & I_{zz} & -L\cos\theta(t) & -L\sin\theta(t) \end{pmatrix}\begin{pmatrix} \ddot{x}\\ \ddot{y}\\ \ddot{\theta}\\ \lambda_1\\ \lambda_2 \end{pmatrix} = \begin{pmatrix} 0\\ -mg\\ 0 \end{pmatrix} \quad\Longleftrightarrow\quad \begin{pmatrix} M & G^T \end{pmatrix}\begin{pmatrix} \ddot{x}(t)\\ \lambda \end{pmatrix} = f.$
And the linearized restrictions are:
$\begin{pmatrix} 1 & 0 & -L\cos\theta_i\\ 0 & 1 & -L\sin\theta_i \end{pmatrix}\begin{pmatrix} x(t+\Delta t)\\ y(t+\Delta t)\\ \theta(t+\Delta t) \end{pmatrix} = \begin{pmatrix} L\sin\theta_i - L\,\theta_i\cos\theta_i\\ -L\cos\theta_i - L\,\theta_i\sin\theta_i \end{pmatrix} \quad\Longleftrightarrow\quad H\,x(t+\Delta t) = d,$
with θᵢ being the current approximation of θ(t + Δt). Now, one can use Equations (69)–(73) to write x(t + Δt) and ẍ(t) in the following form:
$x(t+\Delta t) = b_0 + \beta_0\,x^{(4)}(t),$
and
$\ddot{x}(t) = b_2 + \beta_2\,x^{(4)}(t).$
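For reference, chaining Equations (68)–(71) with x⁗(t) as the unknown of the step gives (the b terms collect all the quantities that are already known when the step is solved):
$\beta_2 = \frac{\gamma\,\Delta t^2}{2}, \qquad \beta_0 = \frac{\alpha + 4\beta + 6\gamma + 4\zeta}{24}\Delta t^4.$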
This allows one to write the following:
$\begin{pmatrix} M & G^T \end{pmatrix}\begin{pmatrix} b_2 + \beta_2\,x^{(4)}(t)\\ \lambda \end{pmatrix} = f \quad\Longrightarrow\quad \begin{pmatrix} M & G^T \end{pmatrix}\begin{pmatrix} x^{(4)}(t)\\ \frac{1}{\beta_2}\lambda \end{pmatrix} = \frac{1}{\beta_2}\left(f - M b_2\right),$
and
$H\left(b_0 + \beta_0\,x^{(4)}(t)\right) = d \quad\Longrightarrow\quad H\,x^{(4)}(t) = \frac{1}{\beta_0}\left(d - H b_0\right).$
Grouping the equations gives:
$\begin{pmatrix} M & G^T\\ H & 0 \end{pmatrix}\begin{pmatrix} x^{(4)}(t)\\ \frac{1}{\beta_2}\lambda(t) \end{pmatrix} = \begin{pmatrix} \frac{1}{\beta_2}\left(f - M b_2\right)\\ \frac{1}{\beta_0}\left(d - H b_0\right) \end{pmatrix}.$
One could also take advantage of the fact that Equation (88) does not change along the iterative process of solving the step. This would allow for the use of a fundamental basis of the null space of the system to considerably reduce the computational cost. But, for the sake of simplicity, the whole system was solved in each iteration.
The iterative resolution of Equation (90) allows one to solve for x⁗(t), which, in turn, allows one to obtain the position and its derivatives. It is important not to base the convergence criterion on x⁗(t), because its contribution can be so small that very small error tolerances would have to be used. The use of the variation in x(t + Δt) to measure the error is recommended. With all of this in mind, one can reach the results shown in Table 11. All the parameters were set to 1.
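The structure of the resulting iteration can be sketched as follows (an illustrative C fragment, not the authors' implementation; names such as solve5 and pendulum_dae_step are hypothetical, and the step constants b0, β0, b2, β2 are assumed to have been assembled from the method's update equations as indicated above):

#include <math.h>

#define N 5   /* 3 coordinates (x, y, theta) + 2 constraint equations */

/* Gaussian elimination with partial pivoting for the small dense system. */
static void solve5(double A[N][N], double b[N], double sol[N])
{
    for (int k = 0; k < N; k++) {
        int p = k;
        for (int i = k + 1; i < N; i++)
            if (fabs(A[i][k]) > fabs(A[p][k])) p = i;
        for (int j = 0; j < N; j++) { double t = A[k][j]; A[k][j] = A[p][j]; A[p][j] = t; }
        double t = b[k]; b[k] = b[p]; b[p] = t;
        for (int i = k + 1; i < N; i++) {
            double f = A[i][k] / A[k][k];
            for (int j = k; j < N; j++) A[i][j] -= f * A[k][j];
            b[i] -= f * b[k];
        }
    }
    for (int i = N - 1; i >= 0; i--) {
        double s = b[i];
        for (int j = i + 1; j < N; j++) s -= A[i][j] * sol[j];
        sol[i] = s / A[i][i];
    }
}

/* One step of Equation (90) for the planar pendulum DAE. theta_t is the known
   angle at time t (used in G), b0/beta0 and b2/beta2 are the step constants of
   Equations (87) and (88). The converged x(t + dt) is returned in xn[3]. */
void pendulum_dae_step(double m, double Izz, double L, double grav, double theta_t,
                       const double b0[3], double beta0,
                       const double b2[3], double beta2, double xn[3])
{
    double f[3] = { 0.0, -m * grav, 0.0 };
    for (int j = 0; j < 3; j++) xn[j] = b0[j];       /* initial guess: x4 = 0 */

    for (int it = 0; it < 50; it++) {
        double thi = xn[2];              /* current approximation of theta(t + dt) */
        double A[N][N] = { { 0.0 } }, rhs[N], sol[N];

        /* equilibrium block [ M  G^T ] of Equation (89), G evaluated at theta(t) */
        A[0][0] = m;  A[1][1] = m;  A[2][2] = Izz;
        A[0][3] = 1.0;                A[1][4] = 1.0;
        A[2][3] = -L * cos(theta_t);  A[2][4] = -L * sin(theta_t);
        rhs[0] = (f[0] - m   * b2[0]) / beta2;
        rhs[1] = (f[1] - m   * b2[1]) / beta2;
        rhs[2] = (f[2] - Izz * b2[2]) / beta2;

        /* linearized restriction block [ H  0 ] and d of Equation (86) */
        A[3][0] = 1.0;  A[3][2] = -L * cos(thi);
        A[4][1] = 1.0;  A[4][2] = -L * sin(thi);
        double d0 =  L * sin(thi) - L * thi * cos(thi);
        double d1 = -L * cos(thi) - L * thi * sin(thi);
        rhs[3] = (d0 - (b0[0] - L * cos(thi) * b0[2])) / beta0;
        rhs[4] = (d1 - (b0[1] - L * sin(thi) * b0[2])) / beta0;

        solve5(A, rhs, sol);   /* sol[0..2] = x4(t), sol[3..4] = lambda / beta2 */

        /* convergence measured on the change of x(t + dt), as recommended above */
        double change = 0.0;
        for (int j = 0; j < 3; j++) {
            double xnew = b0[j] + beta0 * sol[j];
            change += fabs(xnew - xn[j]);
            xn[j] = xnew;
        }
        if (change < 1e-12) break;
    }
}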
The results for the method presented here were quite good. It obtained similar results to the stabilized RK3 method in terms of the coordinate drift, but with much better error results and by using less than half of the time. The RK3 results, in time, could have been improved by using a symmetric matrix approach, but this would have, at most, halved the computational times. On the other hand, as stated before, one could also reduce the computational cost of the method presented here by a considerable amount. Finally, the problem size should benefit the conditionally explicit method because of the reduction in the number of error function evaluations.

9. Conclusions and Future Work

A new family of conditionally explicit integrators for second-order ODEs was introduced in this paper. This has led to configurable methods, which might allow existing code to obtain additional convergence in some cases with little effort.
A fourth-degree configurable method was proposed. A couple of parameter sets for this method were considered. One of them led to great convergence but with compromised stability. The other set increased the convergence when compared to the third-degree conditionally explicit methods with a quite moderate reduction in the range of stability.
Finally, a fifth-degree method was presented, along with a set of parameters that allow for a conditionally stable integration of third-order convergence, leading to only about half the time step limit required for the central differences method.
The method presented here is of special interest for the case of differential algebraic equation systems. It requires no drift stabilization, and it shows good performance. This is an interesting practical advantage.
Taking into account that the method can be considered a generalization of the central differences method, it might allow one to reuse code developed with the central differences approach to improve on convergence with a moderate cost.
In order to check the behavior of the methods, some simple examples were presented. These belong both to the structural dynamics and the multibody dynamics fields, thus allowing one to obtain an idea of the performance of the methods in different circumstances.
In future work, a deeper analysis of different parameter sets should be performed. The study of even-higher-degree methods is also of interest. Obviously, the application of these methods to different areas should be studied, along with more complex problems in multibody dynamics and structural dynamics. Finally, a global convergence analysis for the higher-degree methods is of interest.

Author Contributions

All authors have contributed to all of the tasks required to develop this work in similar proportions. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Basque Government for its funding of the research group, recognized under IT1542-22. The authors also gratefully acknowledge grant PID2021-124677NB-I00, funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”.

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Acknowledgments

We want to thank the anonymous reviewers who helped us a lot in improving this work.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Hairer, E.; Nørsett, S.P.; Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems; Springer Series in Computational Mathematics, Vol. 8; Springer: Berlin/Heidelberg, Germany, 1987. [Google Scholar]
  2. Avilés, R. Métodos de Análisis Para Diseño Mecánico: Diseño Mecánico, Análisis Estático, Elementos Finitos En Estática, Elementos Finitos En Dinámica, Análisis de Fatiga; Escuela Superior de Ingenieros: Bilbao, Spain, 2003; ISBN 84-95809-09-5. [Google Scholar]
  3. García de Jalon, J.; Bayo, E. Kinematic and Dynamic Simulation of Multibody Systems; Springer: New York, NY, USA, 1994; ISBN 978-1-4612-7601-2. [Google Scholar]
  4. Bauchau, O.A. Flexible Multibody Dynamics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010; Volume 176, ISBN 94-007-0335-X. [Google Scholar]
  5. Zienkiewicz, O.C.; Taylor, R.L.; Fox, D. The Finite Element Method for Solid and Structural Mechanics, 7th ed.; Elsevier: Amsterdam, The Netherlands, 2013; pp. 1–624. [Google Scholar] [CrossRef]
  6. Newmark, N.M. A Method of Computation for Structural Dynamics. J. Eng. Mech. Div. 1959, 85, 67–94. [Google Scholar] [CrossRef]
  7. Hilber, H.M.; Hughes, T.J.R.; Taylor, R.L. Improved Numerical Dissipation for Time Integration Algorithms in Structural Dynamics. Earthq. Eng. Struct. Dyn. 1977, 5, 283–292. [Google Scholar] [CrossRef]
  8. Wilson, E.L. A Computer Program for the Dynamic Stress Analysis of Underground Structures. Struct. Eng. Mech. Mater. 1968. [Google Scholar]
  9. Shabana, A.A. Dynamics of Multibody Systems; Cambridge University Press: Cambridge, UK, 2013; ISBN 978-1-107-33721-3. [Google Scholar]
  10. Fernández de Bustos, I.; Uriarte, H.; Urkullu, G.; García-Marina, V. A Non-Damped Stabilization Algorithm for Multibody Dynamics. Meccanica 2022, 57, 371–399. [Google Scholar] [CrossRef]
  11. Gavrea, B.; Negrut, D.; Potra, F.A. The Newmark Integration Method for Simulation of Multibody Systems: Analytical Considerations. In Proceedings of the ASME 2005 International Mechanical Engineering Congress and Exposition, Orlando, FL, USA, 5–11 November 2005. [Google Scholar]
  12. Negrut, D.; Rampalli, R.; Ottarsson, G.; Sajdak, A. On an Implementation of the Hilber-Hughes-Taylor Method in the Context of Index 3 Differential-Algebraic Equations of Multibody Dynamics (DETC2005-85096). J. Comput. Nonlinear Dyn. 2007, 2, 73–85. [Google Scholar] [CrossRef]
  13. Dopico, D.; Lugris, U.; Gonzalez, M.; Cuadrado, J. IRK vs Structural Integrators for Real-Time Applications in MBS. J. Mech. Sci. Technol. 2005, 19, 388–394. [Google Scholar] [CrossRef]
  14. Urkullu, G.; de Bustos, I.F.; García-Marina, V.; Uriarte, H. Direct Integration of the Equations of Multibody Dynamics Using Central Differences and Linearization. Mech. Mach. Theory 2019, 133, 432–458. [Google Scholar] [CrossRef]
  15. Urkullu, G.; Fernández-de-Bustos, I.; Olabarrieta, A.; Ansola, R. Estudio de La Eficiencia Del Método de Integración Directa Mediante Diferencias Centrales (DIMCD). DYNA-Ing. Ind. 2021, 96, 512. [Google Scholar]
  16. Kovalnogov, V.N.; Fedorov, R.V.; Karpukhina, T.V.; Simos, T.E.; Tsitouras, C. Runge–Kutta Embedded Methods of Orders 8(7) for Use in Quadruple Precision Computations. Mathematics 2022, 10, 3247. [Google Scholar] [CrossRef]
  17. Petzold, L. Differential/Algebraic Equations Are Not ODE’s. SIAM J. Sci. Stat. Comput. 1982, 3, 367–384. [Google Scholar] [CrossRef]
  18. Baumgarte, J. Stabilization of Constraints and Integrals of Motion in Dynamical Systems. Comput. Methods Appl. Mech. Eng. 1972, 1, 1–16. [Google Scholar] [CrossRef]
  19. Flores, P.; Pereira, R.; Machado, M.; Seabra, E. Investigation on the Baumgarte Stabilization Method for Dynamic Analysis of Constrained Multibody Systems. In Proceedings of the EUCOMES 2008—The 2nd European Conference on Mechanism Science, Cassino, Italy, 17–20 September 2008; pp. 305–312. [Google Scholar] [CrossRef]
  20. Fernández de Bustos, I.; Urkullu, G.; García Marina, V.; Ansola, R. Optimization of Planar Mechanisms by Using a Minimum Distance Function. Mech. Mach. Theory 2019, 138, 149–168. [Google Scholar] [CrossRef]
  21. Verlet, L. Computer “Experiments” on Classical Fluids. I. Thermodynamical Properties of Lennard−Jones Molecules. Phys. Rev. 1967, 159, 98–103. [Google Scholar] [CrossRef]
  22. Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration Illustrated by the Störmer/Verlet Method. Acta Numer. 2003, 12, 399–450. [Google Scholar] [CrossRef]
  23. Press, W.H.; Teukolsky, S.A.; Vetterling, T.W.; Flannery, B.P. Numerical Recipes: The Art of Scientific Computing; Cambridge University Press: New York, NY, USA, 2007; ISBN 978-0-521-88068-8. [Google Scholar]
  24. Birdsall, C.K.; Langdon, A.B. Plasma Physics via Computer Simulation; CRC Press: Boca Raton, FL, USA, 1991; ISBN 978-1-315-27504-8. [Google Scholar]
  25. Yoshida, H. Construction of Higher Order Symplectic Integrators. Phys. Lett. A 1990, 150, 262–268. [Google Scholar] [CrossRef]
  26. Beeman, D. Some Multistep Methods for Use in Molecular Dynamics Calculations. J. Comput. Phys. 1976, 20, 130–139. [Google Scholar] [CrossRef]
  27. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II; Springer Series in Computational Mathematics; Springer: Berlin/Heidelberg, Germany, 1991; Volume 14, ISBN 978-3-662-09949-0. [Google Scholar]
  28. Park, K.C.; Underwood, P.G. A Variable-Step Central Difference Method for Structural Dynamics Analysis—Part 1: Theoretical Aspects. Comput. Methods Appl. Mech. Eng. 1980, 22, 241–258. [Google Scholar] [CrossRef]
  29. Kim, W. Higher-Order Explicit Time Integration Methods for Numerical Analyses of Structural Dynamics. Lat. Am. J. Solids Struct. 2019, 16, e201. [Google Scholar] [CrossRef]
  30. Uriarte, H. Integration of the Equations of Multibody Dynamics Using Structural Methods. Ph.D. Thesis, University of the Basque Country, Bilbao, Spain, 2023. [Google Scholar]
  31. Fernández de Bustos, I.; Uriarte, H.; Coria, I.; Urkullu, G. A New Approach to Formulate Structural Methods for Multibody Dynamics. In Proceedings of the ECCOMAS Thematic Conference on Multibody Dynamics, Lisbon, Portugal, 24–28 July 2023. [Google Scholar]
  32. Urkullu, G.; de Bustos, I.F.; Coria, I.; Uriarte, H. Explicit Higher-Order Integrator for Multibody Dynamics. In Advances in Mechanism and Machine Science; Okada, M., Ed.; Springer Nature: Cham, Switzerland, 2024; pp. 593–604. [Google Scholar]
  33. Eich, E. Convergence Results for a Coordinate Projection Method Applied to Mechanical Systems with Algebraic Constraints. SIAM J. Numer. Anal. 1993, 30, 1467–1482. [Google Scholar] [CrossRef]
  34. Haug, E.J. Computer Aided Kinematics and Dynamics of Mechanical Systems; Allyn and Bacon Boston: Boston, MA, USA, 1989; Volume 1. [Google Scholar]
  35. Wehage, R.A.; Haug, E.J. Generalized Coordinate Partitioning for Dimension Reduction in Analysis of Constrained Dynamic Systems. J. Mech. Des. 1982, 104, 247–255. [Google Scholar] [CrossRef]
  36. Hiller, M.; Kecskeméthy, A. Dynamics of Multibody Systems with Minimal Coordinates. In Computer-Aided Analysis of Rigid and Flexible Mechanical Systems; Springer: Dordrecht, The Netherlands, 1994; pp. 61–100. [Google Scholar] [CrossRef]
Figure 1. A simple planar pendulum.
Figure 2. A simple single-DOF vibration system.
Figure 3. Spectral radius of three configurations of the third-degree method vs. Ω = ωΔt.
Figure 4. Spectral radius of various methods vs. Ω = ωΔt: fourth-order Adams–Bashforth (AB4), fifth-order Adams–Bashforth (AB5), the HHT method, third-order RK, fourth-order RK, and the third-degree method in the central differences configuration (G3 (CD)).
Figure 5. Unstable result (α = 1, β = 0.5 and Δt = 2.1).
Figure 6. Stable result (α = 1, β = 0.5 and Δt = 2.0).
Figure 7. Spectral radius of the stability matrix vs. Δtω for the fourth-degree method in two configurations. Also included are the spectral radii of the fourth-order Adams–Bashforth and third-order Runge–Kutta methods.
Figure 8. Results for the one-DOF model with α = 5/4 and time steps of 0.1 (top) and 0.01 (bottom).
Figure 9. Spectral radius of the stability matrix vs. Δtω for the fifth-degree method with parameters α = 4/5 and ζ = γ = β = 1.
Figure 10. Spinning top.
Figure 11. Spinning top trajectory as seen from above.
Figure 12. Coordinate system for the DAE resolution of the pendulum problem.
Table 1. Energy drift (error) obtained in the simple planar pendulum.

Step Size | Error, Central Differences | Error, Reformulated CD
1 × 10−3 | 2.00492 × 10−5 | 2.00492 × 10−5
1 × 10−4 | 2.00597 × 10−7 | 2.00492 × 10−7
1 × 10−5 | 2.46675 × 10−8 | 2.00537 × 10−9
1 × 10−6 | 3.66886 × 10−7 | 1.90425 × 10−11
1 × 10−7 | 0.000622766 | 1.18776 × 10−11
1 × 10−8 | 0.270486 | 3.17293 × 10−11
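For readers who want to reproduce the qualitative behavior behind Table 1, the following minimal Python sketch integrates the simple planar pendulum of Figure 1 with the classical central differences scheme and measures the energy drift at the end of the simulation. The pendulum data (unit mass and length, g = 9.81 m/s²), the initial conditions and the simulated time span are illustrative assumptions rather than the values used in the paper, and only the classical scheme (the first error column) is sketched here, so only the trend with the step size should be expected to match.

```python
import math

# Assumed pendulum data (illustrative only): unit mass and length, standard gravity.
m, L, g = 1.0, 1.0, 9.81

def acc(theta):
    # theta'' = -(g/L)*sin(theta) for the simple planar pendulum
    return -(g / L) * math.sin(theta)

def energy(theta, theta_dot):
    # Total mechanical energy, taking the pivot as the potential-energy origin
    return 0.5 * m * (L * theta_dot) ** 2 - m * g * L * math.cos(theta)

def central_differences(theta0, omega0, dt, t_end):
    n = int(round(t_end / dt))
    th_prev = theta0                                            # theta at t = 0
    th = theta0 + dt * omega0 + 0.5 * dt ** 2 * acc(theta0)     # Taylor start-up: theta at t = dt
    for _ in range(n):
        th_next = 2.0 * th - th_prev + dt ** 2 * acc(th)        # classical CD update
        th_prev2, th_prev, th = th_prev, th, th_next
    # Central-difference velocity at t = t_end, using the points on either side of it
    return th_prev, (th - th_prev2) / (2.0 * dt)

theta0, omega0, t_end = math.pi / 2, 0.0, 10.0   # assumed initial state and time span
E0 = energy(theta0, omega0)
for dt in (1e-3, 1e-4, 1e-5):
    th, th_dot = central_differences(theta0, omega0, dt, t_end)
    print(dt, abs(energy(th, th_dot) - E0))
```

Swapping the update line for another scheme allows the same driver to be reused for comparisons of the kind shown in the table.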
Table 2. Error (energy drift) for various configurations of the modified central differences approach.

Step Size | Error, α = 1 | Error, α = 4/3 | Error, α = 2
1 × 10−3 | 2.00492 × 10−5 | 1.27955 × 10−5 | 3.85689 × 10−6
1 × 10−4 | 2.00492 × 10−7 | 1.21061 × 10−7 | 3.99453 × 10−8
1 × 10−5 | 2.00537 × 10−9 | 1.20584 × 10−9 | 4.00588 × 10−10
1 × 10−6 | 1.90425 × 10−11 | 1.07274 × 10−11 | 1.00879 × 10−11
1 × 10−7 | 1.18776 × 10−11 | 1.17861 × 10−11 | 1.1793 × 10−11
1 × 10−8 | 3.17293 × 10−11 | 3.15987 × 10−11 | 3.15623 × 10−11
Table 3. Stability limit for different configuration parameters.

α | β | Ωmax = (ωΔt)max
1 | 0.5 | 2
1.3333 | 0.5 | 1.549
2 | 0.5 | 1.154
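The limit Ωmax = 2 in the first row of Table 3 coincides with the well-known stability limit of the classical central differences scheme. A quick way to check that limit numerically is to build the amplification matrix of the scheme for the undamped test equation ẍ = −ω²x and evaluate its spectral radius. The short Python sketch below illustrates this check; it is only a verification of the classical limit, not the stability analysis carried out for the proposed family of methods.

```python
import numpy as np

def spectral_radius_cd(Omega):
    # Amplification matrix of x_{n+1} = (2 - Omega^2) x_n - x_{n-1},
    # i.e. classical central differences applied to x'' = -omega^2 x, with Omega = omega*dt
    A = np.array([[2.0 - Omega ** 2, -1.0],
                  [1.0,               0.0]])
    return max(abs(np.linalg.eigvals(A)))

for Omega in (1.0, 1.9, 2.0, 2.1):
    print(Omega, spectral_radius_cd(Omega))
# The spectral radius stays at 1 up to Omega = 2 and grows beyond it,
# which is consistent with the first row of Table 3 and with Figures 5 and 6.
```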
Table 4. Results obtained with the fourth-degree method.

Step Size | Error, α = 5/4 | Error, α = 1/4 | Error, α = 3/4
1 × 10−3 | 5.54063 × 10−11 | 8.67265 × 10−7 | 4.3364 × 10−7
1 × 10−4 | 5.66297 × 10−13 | 8.6753 × 10−10 | 4.33685 × 10−10
1 × 10−5 | 5.66297 × 10−13 | 1.02673 × 10−12 | 7.28306 × 10−13
1 × 10−6 | 3.32889 × 10−12 | 3.42482 × 10−12 | 3.32889 × 10−12
1 × 10−7 | 1.1898 × 10−11 | 1.1898 × 10−11 | 1.1898 × 10−11
Table 5. Some experimental results using α = 5/4.

Δtω | Spectral Radius | Approximate Point at Which Instability Becomes Apparent
0.1 | 1.00333 | 89,500
0.01 | 1.000033333 | 895,000
0.001 | 1.0000003333333 | 8,950,000
Table 6. Results for the fifth-degree method with α = 4/5 and ζ = γ = β = 1.

Step Size | 1.00 × 10−2 | 1.00 × 10−3 | 1.00 × 10−4 | 1.00 × 10−5 | 1.00 × 10−6
Error | 9.05 × 10−7 | 6.71 × 10−11 | 1.97 × 10−11 | 1.78 × 10−11 | 1.88 × 10−11
Cost (s) | 0.0000447736 | 0.0004026053 | 0.004019663 | 0.04006417 | 0.3831238
Table 7. Spinning top results.

Step Size | 1.0000 × 10−2 | 1.0000 × 10−3 | 1.0000 × 10−4 | 1.0000 × 10−5
Error | 2.2800 × 10−5 | 3.0949 × 10−9 | 3.5092 × 10−13 | 3.1351 × 10−13
Time | 4.6468 × 10−3 | 2.3474 × 10−2 | 1.2635 × 10−1 | 1.1481 × 10+0
Table 8. Results for the spinning top problem using an RK3 method.

Step Size | 1.0000 × 10−2 | 1.0000 × 10−3 | 1.0000 × 10−4 | 1.0000 × 10−5
Error | 8.7900 × 10−6 | 2.8800 × 10−8 | 2.8900 × 10−11 | 3.17 × 10−14
Time | 1.5357 × 10−3 | 1.3392 × 10−2 | 1.3221 × 10−1 | 1.2096 × 10+0
Table 9. Results for the DAE pendulum using an RK3 method without drift stabilization.

Step Size | 1.0000 × 10−2 | 1.0000 × 10−3 | 1.0000 × 10−4 | 1.0000 × 10−5 | 1.0000 × 10−6
Error | 5.0500 × 10−5 | 3.7400 × 10−8 | 3.7300 × 10−11 | 2.31 × 10−11 | 6.99 × 10−11
Coordinate drift | 1.0474 × 10−4 | 1.0300 × 10−7 | 1.0200 × 10−10 | 5.2700 × 10−12 | 1.9300 × 10−11
Time | 0.001220661 | 1.1277 × 10−2 | 1.0478 × 10−1 | 9.9668 × 10−1 | 9.7614 × 10+0
Table 10. Results for the DAE pendulum using an RK3 method with stabilization (projection).

Step Size | 1.00 × 10−2 | 1.00 × 10−3 | 1.00 × 10−4 | 1.00 × 10−5 | 1.00 × 10−6
Error | 4.16 × 10−4 | 4.1400 × 10−7 | 4.1400 × 10−10 | 5.54 × 10−12 | 5.09 × 10−12
Coordinate drift | 4.4400 × 10−16 | 4.4400 × 10−16 | 3.3300 × 10−16 | 4.4400 × 10−16 | 4.4400 × 10−16
Time | 0.001275083 | 0.01032617 | 1.0442 × 10−1 | 1.0375 × 10+0 | 1.0324 × 10+1
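The stabilization labeled “projection” in Table 10 is of the coordinate-projection type analyzed in [33]: after each step, the positions computed by the integrator are pulled back onto the constraint manifold and the velocities are made consistent with it. The Python sketch below shows one possible minimum-norm projection for the Cartesian pendulum constraint x² + y² − L² = 0 of Figure 12; the pendulum length and the sample state are illustrative assumptions, and the code is a generic sketch rather than the implementation used to generate the table.

```python
import numpy as np

L = 1.0  # assumed pendulum length (illustrative)

def constraint(q):
    # Holonomic constraint of the planar pendulum in Cartesian coordinates: x^2 + y^2 - L^2 = 0
    return np.array([q[0] ** 2 + q[1] ** 2 - L ** 2])

def jacobian(q):
    # Constraint Jacobian dPhi/dq = [2x, 2y]
    return np.array([[2.0 * q[0], 2.0 * q[1]]])

def project(q, qdot, tol=1e-12, max_iter=10):
    """One possible coordinate-projection step (in the spirit of [33]):
    pull the position back onto the constraint manifold with Newton iterations,
    then remove the velocity component normal to the manifold."""
    for _ in range(max_iter):
        Phi, J = constraint(q), jacobian(q)
        if np.max(np.abs(Phi)) < tol:
            break
        # Minimum-norm position correction: dq = -J^T (J J^T)^{-1} Phi
        q = q - J.T @ np.linalg.solve(J @ J.T, Phi)
    J = jacobian(q)
    # Velocity projection onto J qdot = 0
    qdot = qdot - J.T @ np.linalg.solve(J @ J.T, J @ qdot)
    return q, qdot

# Example: a slightly drifted state is pulled back onto the constraint
q, qdot = project(np.array([0.9995, 0.04]), np.array([0.3, -7.0]))
print(constraint(q), jacobian(q) @ qdot)   # both should be ~0 after projection
```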
Table 11. Results for the DAE pendulum using a fifth-degree conditionally explicit method.

Step Size | 1.00 × 10−2 | 1.00 × 10−3 | 1.00 × 10−4 | 1.00 × 10−5 | 1.00 × 10−6
Error | 1.83 × 10−5 | 9.7800 × 10−10 | 7.0700 × 10−13 | 2.63 × 10−12 | 4.01 × 10−12
Coordinate drift | 8.1600 × 10−13 | 8.8800 × 10−16 | 8.8800 × 10−16 | 1.1100 × 10−15 | 1.1100 × 10−15
Time | 0.0004339879 | 0.004133946 | 4.1554 × 10−2 | 4.1675 × 10−1 | 4.1587 × 10+0
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
