Article

The Numerical Solution of Nonlinear Fractional Lienard and Duffing Equations Using Orthogonal Perceptron

by
Akanksha Verma
1,
Wojciech Sumelka
2,* and
Pramod Kumar Yadav
3
1
Department of Mathematics, Dyal Singh College, University of Delhi, New Delhi 110003, India
2
Institute of Structural Analysis, Poznan University of Technology, Piotrowo 5 Street, 60-965 Poznan, Poland
3
Department of Mathematics, Motilal Nehru National Institute of Technology Allahabad, Prayagraj 211004, India
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(9), 1753; https://doi.org/10.3390/sym15091753
Submission received: 20 July 2023 / Revised: 29 August 2023 / Accepted: 6 September 2023 / Published: 13 September 2023
(This article belongs to the Special Issue Symmetry in Nonlinear Dynamics and Chaos II)

Abstract
This paper proposes an approximation algorithm based on the Legendre and Chebyshev artificial neural network to explore the approximate solution of fractional Lienard and Duffing equations with a Caputo fractional derivative. These equations show the oscillating circuit and generalize the spring–mass device equation. The proposed approach transforms the given nonlinear fractional differential equation (FDE) into an unconstrained minimization problem. The simulated annealing (SA) algorithm minimizes the mean square error. The proposed techniques examine various non-integer order problems to verify the theoretical results. The numerical results show that the proposed approach yields better results than existing methods.

1. Introduction

Fractional calculus has been studied by mathematicians for centuries owing to its many applications in applied mathematics and mathematical physics. Recently, fractional differential equations have been used to model many real-world problems in circuit theory, fluid dynamics, physics, mathematical biology, quantum mechanics, electrochemistry, etc. It is also well known that non-integer-order derivatives describe such models efficiently. Talebi et al. [1] explored the application of fractional calculus to filtering structures for α-stable systems, where α-stable distributions are a class of probability distributions that generalize the Gaussian distribution and can describe asymmetric and heavy-tailed behavior. These distributions are encountered in real-world scenarios, including financial time series and communication channels. Fractional-order filters and processing methods might provide better tools for dealing with such systems. Therefore, studying these equations and finding their solutions is necessary. The general form of the Lienard equation [2] is given by
z''(t) + f(z) z'(t) + g(z) = h(t).
Various selections of the functions f, g, and h give distinct models. For example, if f(z) z'(t) is the damping force, g(z) is the restoring force, and h(t) is the external force, then Equation (1) forms the damped pendulum equation. Moreover, if f(z) = ε(z^2 − 1), g(z) = z, and h(t) = 0, then Equation (1) reduces to the Van der Pol equation [3], a nonlinear electronic oscillation model. However, it is well known that finding the exact solution of Equation (1) is a complex problem.
Kong [4] and Feng [5] investigated the exact solution of the Lienard equation in the form
z''(t) + L z'(t) + M z^3 + N z^5 = 0,
where L, M, and N are constant coefficients.
The general form of the Duffing oscillator equation is
z''(t) + L z'(t) + P z + M z^3 = 0,
where L, M, and P are constant coefficients.
Some recent works in the literature, such as [6,7,8,9], focused on generalized forms of the Lienard and Duffing equations using fractional calculus. Fractional-order derivatives can describe various physical processes that vary with time and space [10,11]. Also, the use of fractional calculus principles is well established in several scientific fields. Bohner and Tunç [12] conducted a qualitative analysis of Caputo fractional integro-differential equations with constant delays. This work delves into the dynamics of such equations, shedding light on their behavior and properties and contributing to the advancement of their understanding in this specialized area of mathematics. In [13], the authors delved into fractional calculus and delay integro-differential equations. Their work, which focuses on Caputo proportional fractional derivatives, presents novel solution estimation techniques. By addressing these intricate equations, the authors contributed to advancing analytical methods in the context of fractional calculus and its applications. Many real-life phenomena are represented by the fractional Lienard and Duffing equations, such as oscillating circuit theory [14,15], the mass damping effect [16], and pipelines and fluid dynamics [17].
The general form of the fractional order Lienard equation is given as follows:
D^ν z(t) + L Dz(t) + M z^3 + N z^5 = 0,  1 < ν ≤ 2,  t ∈ [0, 1],
with respect to
z(0) = α,  z'(0) = β,  where α and β are constants.
Also, the fractional Duffing equation with the damping effect is given as follows:
D^η z(t) + L Dz(t) + P z(t) + M z^3 = 0,  1 < η ≤ 2,  t ∈ [0, 1],
subject to
z(0) = γ,  z'(0) = δ,  where γ and δ are constants.
In the literature, many analytical and numerical approaches exist for solving Equations (4) and (6). In 2004, Feng [18] explicitly presented the exact solution of the Lienard equation and provided some applications. In 2008, Matinfar et al. [19] used the variation iteration technique to solve the Lienard equation and compared the numerical solutions obtained with the analytic solution. Furthermore, Xu [20] acquired the eight types of explicit analytical solutions of the Lienard equation, which included periodic wave solutions and solitary wave solutions in terms of elliptic Jacobian and trigonometric functions. Janiczek [21] demonstrated the modulating functions method for all models described by fractional differential equations. Modulating functions are used to reduce the order of derivatives in an equation, generate equations without derivatives of output signals, and eliminate the need to solve differential equations. Chebyshev’s operational matrix method for solving the multi-term fractional-order ordinary differential equation was proposed by Atabakzadeh et al. [22]. To apply this approach, they first converted the given problem into a system of fractional ODEs and then used the Chebyshev operational matrix method. Again, Kazem [23] analyzed fractional ODEs via an integral operational matrix approach based on Jacobi polynomials. Nourazar and Mirzabeigy [16] proposed a modified differential transform technique to deal with the fractional Duffing equation with a damping effect. In 2016, Ezz-Eldien [24] discovered a new numerical approach to solving fractional variational problems. Furthermore, Gómez-Aguilar et al. [17] used the Laplace homotopy analysis technique with a new fractional derivative without a singular kernel to solve the fractional Lienard equation that describes the fluid dynamics of the pipeline. The homotopy analysis method was implemented in [25,26] to solve the Duffing oscillator and the Lienard problem with a fractional derivative. 
Recently, Singh et al. [14,15,27,28] also made several effective attempts to find the solutions to Equations (4) and (6) by using a variety of techniques. Also, Kumar et al. [29] used the Rabotnov fractional exponential kernel to solve the nonlinear Lienard equation numerically. More recently, Adel [30] demonstrated an approach based on Bernoulli collocation and shifted Chebyshev collocation points to solve Equations (4) and (6).
In recent years, neural architecture-based approximation schemes have been used to solve FDEs, ODEs, PDEs, and delay differential equations (DDEs) [31,32,33,34,35,36,37,38]. In 2013, Lefik [39] illustrated that an ANN performs the numerical representation of the inverse relation. It can be used as many times as needed in the same application, replacing traditional “ad hoc” back computation for any new piece of experimental data. Malik et al. [40] proposed a hybrid heuristic approach to solve the Lienard equation based on genetic algorithms, such as memetic computation, combining genetic algorithms, the interior-point algorithm, and the active set algorithm. Furthermore, Mall and Chakraverty [33,36] used the multilayer perceptron and functional connection neural network with regression-based parameters to solve ODEs. In [34,38], the authors also used the multilayer perceptron technique with quasi-Newton and Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithms to solve the singular initial and boundary value problems. Kumar et al. [41] presented a comparative analysis of two distinct neural modeling approaches to approximate the multidimensional poverty levels within an Indian state. This study sought to provide useful information for choosing the best modeling approaches to determine poverty levels in the Indian setting by examining their performance and precision. Sahoo and Chakraverty [42] proposed a symplectic artificial neural network to handle nonlinear systems arising in dusty plasma models. They presented the dynamics of Van der Pol–Mathieu–Duffing oscillator problems for different excitation functions using the proposed method, and numerical simulations and graphical representations were carried out to establish the accuracy of the presented algorithm.
Motivated by the above, in this manuscript, we discuss the functional link neural network architecture, which is a single-layer neural network. This article aims to find the solutions to fractional Lienard and Duffing equations using functional link neural networks. This technique offers us the following attractive features:
  • The proposed technique gives us the solution in a closed analytic form.
  • The functional link neural network consists of a single layer, so the number of network parameters is smaller than in a traditional multilayer ANN, and it works with low computational complexity.
  • It is capable of fast learning and is computationally efficient.
  • This process does not need linearization to solve a nonlinear problem.
We have organized the present article as follows. Section 2 includes some important preliminaries and discusses the structure of the Chebyshev and Legendre neural networks. Section 3 discusses the methodology, including a well-explained algorithm and the implication protocol, while Section 4 discusses the numerical experiments and their results. Section 5 deals with the error analysis of the technique, while Section 6 concludes the work.

2. Preliminaries

This section provides some basic definitions and results related to the Chebyshev and Legendre artificial neural network models used in this paper. We begin with the following definitions:
Definition 1. 
Suppose that r is a positive integer and s is a real number such that r − 1 < s < r. Then, for a continuous function p(x), the Riemann–Liouville derivative and integral of fractional order s are given by
{}_a^{R}D_x^{s} p(x) = (1/Γ(r − s)) (d^r/dx^r) ∫_a^x (x − τ)^{r−s−1} p(τ) dτ,
and
{}_a^{R}D_x^{−s} p(x) = (1/Γ(s)) ∫_a^x (x − τ)^{s−1} p(τ) dτ,
respectively.
Definition 2. 
Suppose that r is a positive integer and s is a real number such that r − 1 < s < r. Then, for a continuous function p(x), the Caputo derivative and integral of fractional order s are given by
{}_a^{C}D_x^{s} p(x) = (1/Γ(r − s)) ∫_a^x p^{(r)}(τ)/(x − τ)^{s+1−r} dτ,  r − 1 < s < r,
and
{}_a^{C}D_x^{−s} p(x) = (1/Γ(s)) ∫_a^x (x − τ)^{s−1} p(τ) dτ,
respectively.
Remark 1. 
Note that the Caputo definition has the advantage that the derivative of a constant is equal to zero. It also allows initial conditions to be stated in terms of integer-order derivatives, which are easily interpreted.
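The only Caputo property used later in this paper is the power rule, D^s t^k = Γ(k + 1)/Γ(k + 1 − s) t^{k−s}, together with the fact that constants vanish. A minimal numerical sketch (our own helper, assuming the lower limit a = 0 and non-integer s):

```python
from math import ceil, gamma

def caputo_power(k, s, t):
    """Caputo fractional derivative of t**k of non-integer order s (lower limit a = 0).

    Uses D^s t^k = Gamma(k + 1) / Gamma(k + 1 - s) * t**(k - s); integer powers
    k < ceil(s) (in particular, constants) have a zero Caputo derivative.
    """
    r = ceil(s)
    if k == int(k) and k < r:
        return 0.0
    return gamma(k + 1) / gamma(k + 1 - s) * t ** (k - s)
```

For instance, the half-order derivative of t evaluated at t = 1 is Γ(2)/Γ(3/2) = 2/√π.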

2.1. Chebyshev and Legendre Artificial Neural Network Models

Chebyshev and Legendre artificial neural networks are functional link neural networks with a single layer. Both models can learn fast and have functional expansion blocks to enhance the input patterns. In 1995, Pao and Philips [43] presented an approach based on a single-layer functional link neural network. The architectures of the Chebyshev and Legendre neural network models are given as follows.
The architecture of the Chebyshev neural network model: The structure of the ChNN model is shown in Figure 1. It consists of a single input node, a Chebyshev polynomial function expansion block, and an output node. Assume that t is a single input node with k data points; that is, t = (t_1, t_2, …, t_k)^T. N(t, p) is the output of the feedforward neural network.
The following are the first two Chebyshev polynomials:
T 0 [ t ] = 1 ,
T 1 [ t ] = t .
The well-known recurrence relation can be used to obtain higher-order Chebyshev polynomials:
T_{n+1}(t) = 2t T_n(t) − T_{n−1}(t).
The architecture of the Legendre neural network model: The structure of the LeNN model is depicted in Figure 2. It has a single input node, a Legendre polynomial function expansion block, and an output node. Assume that t is a single input node with l data points; that is, t = (t_1, t_2, …, t_l)^T. N(t, p) is the output of the feedforward neural network.
Some Legendre polynomials are as follows:
L 0 [ t ] = 1 ,
L 1 [ t ] = t ,
L_2[t] = (1/2)(3t^2 − 1).
The well-known recurrence relation can be used to obtain higher-order Legendre polynomials:
L_{n+1}(t) = (1/(n + 1)) [(2n + 1) t L_n(t) − n L_{n−1}(t)].
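Both recurrences are easy to exercise numerically; a short sketch (helper names are ours):

```python
def chebyshev(n, t):
    """Evaluate T_n(t) via T_{n+1} = 2 t T_n - T_{n-1}, starting from T_0 = 1, T_1 = t."""
    a, b = 1.0, t
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * t * b - a
    return b

def legendre(n, t):
    """Evaluate L_n(t) via L_{n+1} = ((2n+1) t L_n - n L_{n-1}) / (n+1)."""
    a, b = 1.0, t
    if n == 0:
        return a
    for i in range(1, n):
        a, b = b, ((2 * i + 1) * t * b - i * a) / (i + 1)
    return b
```

Since T_n(cos θ) = cos(nθ), for example, T_5(0.5) = cos(5π/3) = 0.5, which the recurrence reproduces.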

2.2. Simulated Annealing Algorithm

Simulated annealing is a straightforward stochastic function minimizer. It is inspired by the annealing process, which involves heating a metal piece to a high temperature and letting it cool gently. This process enables the metal’s atomic structure to settle into a lower energy state, making it harder. Using optimization terminology, annealing enables the structure to leave a local minimum and to search for and arrive at a better, ideally global minimum. A new point is generated randomly near the present point with each iteration of the simulated annealing technique. The radius of the neighborhood decreases with each iteration.
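The loop below is a minimal sketch of this idea for a scalar objective (the parameter choices and helper name are ours, not the exact scheme used in the paper): always accept improvements, occasionally accept worse points with probability exp(−Δ/T), and shrink both the temperature and the neighborhood radius with each iteration.

```python
import math
import random

def simulated_annealing(f, x0, radius=1.0, temp=1.0, cooling=0.99, iters=2000, seed=1):
    """Minimize f starting from x0 with a shrinking-neighborhood annealing loop."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-radius, radius)   # random point near the present one
        fc = f(cand)
        # accept improvements; accept worse points with probability exp(-delta/T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
        radius *= cooling   # the neighborhood radius decreases with each iteration
    return best, fbest
```

Minimizing f(x) = (x − 3)^2 from x0 = 0, for example, homes in on the global minimum at x = 3.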

3. Description of the Method

3.1. General Remarks

The orthogonal perceptron expands upon the classical perceptron algorithm used for binary classification. In the context of machine learning and neural networks, the perceptron is a fundamental building element that takes input values, weights them, and generates an output. The orthogonal perceptron introduces a significant change: it requires orthogonality between the weight vectors associated with different classes. In conventional perceptrons, the weight vectors are updated to reduce misclassification errors. In the orthogonal perceptron, the weight vectors for the various classes are additionally constrained to be orthogonal to one another. In higher-dimensional spaces, this orthogonality constraint provides a discriminating component that can better capture class separation, and orthogonal weight vectors can lead to better decision boundaries and enhanced generalization.
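To make the orthogonality constraint concrete, one Gram–Schmidt projection step is enough to re-orthogonalize one class's weight vector against another after an update. A small illustrative sketch (the helper name and two-class setting are our own, not from the paper):

```python
def project_out(w, v):
    """Return a copy of w with its component along v removed, so the result is orthogonal to v."""
    dot_wv = sum(wi * vi for wi, vi in zip(w, v))
    dot_vv = sum(vi * vi for vi in v)
    return [wi - (dot_wv / dot_vv) * vi for wi, vi in zip(w, v)]
```

Applying this after each weight update keeps the two classes' weight vectors mutually orthogonal.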
A flow chart of the proposed technique is shown in Figure 3. We have also explained the methodology stepwise below.
Due to the similarity of both techniques, we discuss the combined steps for the Chebyshev and Legendre ANN techniques. For both methods, the entire procedure is the same except for the polynomial selection.
  • Prepare the network by using the Chebyshev or Legendre polynomials.
  • Provide the network adaptive coefficients (NACs) for each polynomial.
  • Calculate and save the sum of the products of the NACs and Chebyshev polynomials or Legendre polynomials as a function μ or δ .
  • Use the truncated Taylor series of the activation function tanh(·) to activate the function μ or δ.
  • Now, construct the problem’s trial solution, which satisfies the given initial or boundary conditions.
  • Substitute the trial solution and its derivatives into the given equation and obtain the error function.
  • To minimize the error function, use simulated annealing optimization techniques.
  • If the assessment of the mean square error is in an acceptable range, then collect the value of the adaptive coefficients of the network and substitute it into the trial solution to obtain the results; otherwise, repeat the same process for various values of the NACs until the agreeable MSE is acquired.

3.2. Employment on Nonlinear FDEs

Here, we apply the proposed technique to fractional-order nonlinear differential equations of the following type:
D^ν z(t) = F(t, z(t), Dz(t)),
subjected to the following conditions:
z(0) = α,  z'(0) = β.
Equation (8) can be written in the following form to apply the Chebyshev and Legendre ANN techniques:
D^ν z_tr(t, ψ) − F(t, z_tr(t, ψ), D z_tr(t, ψ)) = 0,  t ∈ [0, 1],  1 < ν ≤ 2,
where z_tr is the trial solution of Equation (8), which satisfies the given initial or boundary conditions, and ψ denotes the network adaptive coefficients (the weights and biases). Let us take the trial solution for Equation (8) expressed as
z t r ( t , ψ ) = A + t N ( t , ψ ) ,
where N is the network output for the Chebyshev ANN. We apply the truncated Taylor series of the tanh(·) activation function to activate the sum of the products of the weights and orthogonal polynomials such that
N = tanh(μ) = μ − μ^3/3 + 2μ^5/15.
Here, μ is given by
μ = ∑_{i=1}^{n} ψ_i T_{i−1},
where T i 1 represents the Chebyshev polynomials, which can be characterized with the following recursive formula:
T_{i+1}(t) = 2t T_i(t) − T_{i−1}(t),  i ≥ 1.
Here, T 0 ( t ) = 1 and T 1 ( t ) = t are the fundamental values of the Chebyshev polynomials.
For the Legendre ANN, the activation function N is defined by
N = tanh(δ) = δ − δ^3/3 + 2δ^5/15.
Here, δ is given by
δ = ∑_{i=1}^{n} ψ_i L_{i−1},
where L i 1 represents the Legendre polynomials, which are defined by the following recursive formula:
L_{i+1}(t) = (1/(i + 1)) [(2i + 1) t L_i(t) − i L_{i−1}(t)],  i ≥ 1.
Here, L 0 ( t ) = 1 and L 1 ( t ) = t are the fundamental values of the Legendre polynomial.
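Both activations rely on the same degree-5 Taylor truncation of tanh; a quick check of how tight that truncation is (the test points are our own choice):

```python
from math import tanh

def tanh5(x):
    """Degree-5 Taylor truncation of tanh(x): x - x^3/3 + 2x^5/15."""
    return x - x ** 3 / 3 + 2 * x ** 5 / 15
```

The truncation error is O(x^7) (the next Taylor term is −17x^7/315), so the approximation is excellent for small |x| and degrades toward |x| = 1.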
Now, we can write the trial solution in terms of μ and δ for the Chebyshev and Legendre ANNs, respectively, and obtain the following:
z_tr(t, ψ) = A + t (μ − μ^3/3 + 2μ^5/15)  and  z_tr(t, ψ) = A + t (δ − δ^3/3 + 2δ^5/15).
Now, we put the values of μ and δ into Equation (17) for n = 2 and obtain
z_tr(t, ψ) = A + t [(ψ_1 + t ψ_2) − (ψ_1 + t ψ_2)^3/3 + 2(ψ_1 + t ψ_2)^5/15].
By applying the Caputo fractional-order derivative, we obtain
D^ν z_tr(t, ψ) = (Γ(2)/Γ(2 − ν)) t^{1−ν} (ψ_1 − ψ_1^3/3 + 2ψ_1^5/15) + (2/3) (Γ(6)/Γ(6 − ν)) t^{5−ν} ψ_1 ψ_2^4 + (2/15) (Γ(7)/Γ(7 − ν)) t^{6−ν} ψ_2^5 + (Γ(3)/Γ(3 − ν)) t^{2−ν} (ψ_2 − ψ_1^2 ψ_2 + (2/3) ψ_1^4 ψ_2) + (Γ(4)/Γ(4 − ν)) t^{3−ν} ((4/3) ψ_1^3 ψ_2^2 − ψ_1 ψ_2^2) + (Γ(5)/Γ(5 − ν)) t^{4−ν} ((4/3) ψ_1^2 ψ_2^3 − ψ_2^3/3).
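Equation (19) can be sanity-checked numerically: for ν = 1, the Caputo derivative of the polynomial trial solution must coincide with its ordinary first derivative. A sketch of that cross-check (function names are ours):

```python
from math import gamma

def z_trial(t, p1, p2, A=0.0):
    """Trial solution (18) for n = 2: A + t*(mu - mu^3/3 + 2*mu^5/15) with mu = p1 + t*p2."""
    mu = p1 + t * p2
    return A + t * (mu - mu ** 3 / 3 + 2 * mu ** 5 / 15)

def caputo_z_trial(t, p1, p2, nu):
    """Term-by-term Caputo derivative of the trial solution, as in Equation (19)."""
    def g(k):
        # Caputo derivative of t**k: Gamma(k+1)/Gamma(k+1-nu) * t**(k-nu)
        return gamma(k + 1) / gamma(k + 1 - nu) * t ** (k - nu)
    return (g(1) * (p1 - p1 ** 3 / 3 + 2 * p1 ** 5 / 15)
            + g(2) * (p2 - p1 ** 2 * p2 + 2 / 3 * p1 ** 4 * p2)
            + g(3) * (4 / 3 * p1 ** 3 * p2 ** 2 - p1 * p2 ** 2)
            + g(4) * (4 / 3 * p1 ** 2 * p2 ** 3 - p2 ** 3 / 3)
            + g(5) * (2 / 3 * p1 * p2 ** 4)
            + g(6) * (2 / 15 * p2 ** 5))
```

Comparing `caputo_z_trial(t, p1, p2, 1.0)` against a central difference of `z_trial` confirms the expansion term by term.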
Also, the mean square error for Equation (8) can be calculated as follows:
E(ψ) = (1/m) ∑_{j=1}^{m} [D^ν z_tr(t_j, ψ) − F(t_j, z_tr(t_j, ψ), D z_tr(t_j, ψ))]^2,  t_j ∈ [0, 1].
Here, we refer to Equation (20) as the fitness function and m as the number of training points. We used a thermal minimization process known as simulated annealing to minimize the fitness function. This is a probabilistic method used to approximate the global optimum of a function. Each iteration consists of three steps: perturb the current solution, evaluate the quality of the perturbed solution, and accept it if it improves on the current one (or, with a temperature-dependent probability, even if it does not, so as to escape local minima). The NACs are learned from Equation (18) by minimizing the MSE to the lowest acceptable value.
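The fitness in Equation (20) is simply the mean squared residual over the training grid; as a sketch (names are ours):

```python
def mse_fitness(residual, train_points):
    """Mean square error of a residual function over the m training points (Equation (20))."""
    m = len(train_points)
    return sum(residual(t) ** 2 for t in train_points) / m
```

The SA routine then minimizes `mse_fitness` over the network adaptive coefficients ψ that enter the residual.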

3.3. Advantages of the Proposed Technique

  • To approximate the complex nonlinear interactions present in fractional differential equations, the orthogonal perceptron-based method uses the adaptability of artificial neural networks. It can capture complex behaviors that are exceedingly difficult to express using conventional numerical techniques.
  • Once trained, the orthogonal perceptron may generalize its learned patterns to fresh input data. This is extremely helpful when solving fractional differential equations with various initial conditions or parameters.
  • Traditional numerical techniques are frequently built with well-established convergence features. However, the neural network approach’s convergence is determined by the quality of the data and the architecture used.

4. Numerical Implementation

In this section, two fractional-order problems are solved using the ChNN and LeNN architectures. The numerical results show that the proposed technique is highly efficient and robust. All the computations were performed on a computer with an Intel Core i3 processor (Intel Corporation, Santa Clara, CA, USA) with 8 gigabytes of RAM, and the simulation was conducted with Mathematica 11.1.0 for each problem.
Problem 1. 
Consider the particular choice of the parameters L = −1, M = 4, and N = 3 in Equation (4). The fractional Lienard problem is given as follows [14,15,28]:
D^ν z(t) − Dz(t) + 4z^3 + 3z^5 = 0,  1 < ν ≤ 2,  t ∈ [0, 1],
z(0) = α = √(τ/(2 + κ))  and  z'(0) = β = 0,
where
τ = (4√3 a)/(2√(3b^2 − 16ac))  and  κ = −1 + (√3 b)/√(3b^2 − 16ac).
For ν = 2 , the exact solution is already known with the given conditions
z(t) = √( τ sech^2(√a t) / (2 + κ sech^2(√a t)) ).
Equation (22) presents the values of τ and κ .
As discussed in Section 3, we constructed the trial solution as
z t r ( t , ψ ) = α + t 2 N ( t , ψ ) .
The given Lienard problem was solved by the ChNN and LeNN techniques for various values of ν by dividing the domain into 10 equidistant training points with 6 NACs. The acquired MSEs were 1.39515 × 10^{−10} and 5.27835 × 10^{−10}, respectively. The computational times for Problem 1 were 0.05 s and 0.09 s, respectively. Table 1 shows the accurate values of the NACs after training with the SA algorithm. In Table 2, we have listed the approximate solutions given by our methods (ChNN and LeNN), the methods of Singh [14,15,28], and the exact solution. Table 2 shows the good agreement among these methods.
In Figure 4, we have compared the approximate solutions by the LeNN and ChNN methods with the solutions obtained by the methods given in [14,15,28]. Figure 5 and Figure 6 show the approximate solutions for the various values of ν .
From Figure 5 and Figure 6, we observed that the solution varied continuously from the fractional-order solution to the integer order. Therefore, we can say that the behaviors of approximate solutions for different fractional orders converge to integer-order solutions.
Problem 2. 
Consider the particular choice of the parameters L = 0.5, P = 25, and M = 25 in Equation (6). The fractional Duffing equation is given as follows [14,15,28]:
D^η z(t) + 0.5 Dz(t) + 25z + 25z^3 = 0,  1 < η ≤ 2,
z(0) = γ = 0.1  and  z'(0) = δ = 0.
The exact solution of Problem 2 at η = 2 when using the differential transform method [16] is given as follows:
z(t) = 0.1 − 1.2625 t^2 + 0.2104 t^3 + 2.6828 t^4 − 0.5392 t^5 − 2.6563 t^6 + 0.6152 t^7.
The trial solution can be written as
z t r ( t , ψ ) = 0.1 + t 2 N ( t , ψ ) .
The Duffing equation was solved with the ChNN and LeNN techniques for various values of η. We trained the network using 10 equidistant points in the domain [0, 0.1] with 6 NACs. The obtained MSEs were 1.61568 × 10^{−9} and 1.73644 × 10^{−8}, respectively. The appropriate values of the NACs obtained using the SA algorithm are given in Table 3. The computational times for Problem 2 were 0.05 s and 0.03 s, respectively. In Table 4, we have listed the solutions obtained by the proposed methods (ChNN and LeNN) and the existing methods' solutions from [15,16,28]. Table 4 shows the good agreement between the acquired results and the results given in [16].
In Figure 7, we have compared the approximate solutions with the proposed method and the solutions given in [15,16,28]. Figure 8 and Figure 9 show the solutions for the different values of η .
From Figure 8 and Figure 9, we observed that the solutions varied continuously from fractional-order solutions to integer-order solutions. Therefore, we can say that the behaviors of the approximate solutions for different fractional orders converged to an integer-order solution, and the periodic behavior of the solution can be seen.
Problem 3. 
Consider the particular choice of the parameters L = −1, M = 4, and N = −3 in Equation (4). The fractional Lienard equation is given as follows [14,44]:
D^ν z(t) − Dz(t) + 4z^3 − 3z^5 = 0,  1 < ν ≤ 2,
z(0) = c_1 = √(2L/M)  and  z'(0) = c_2 = (L√(LM))/(√2 LM).
The exact solution for the considered problem (Problem 3) for ν = 2 is given as follows:
z(t) = √( 2L(1 + tanh(√L t)) / M ).
The trial solution can be written as
z t r ( t , ψ ) = c 1 + t 2 N ( t , ψ ) .
The given Lienard equation was solved with the ChNN and LeNN techniques for various values of ν. We trained the network by taking 10 equidistant points in the domain [0, 0.1] with 6 NACs. The obtained MSEs were 5.49991 × 10^{−9} and 2.69692 × 10^{−10}, respectively. The computational times for the problem were 0.08 s and 0.09 s, respectively. The appropriate values of the NACs obtained using the SA algorithm are given in Table 5. In Table 6, we have listed the results of the proposed methods, the analytic method, and the solutions obtained by other existing numerical methods [15,44]. In Table 7, we have shown the absolute errors between the exact solution and the solutions obtained by the proposed technique and other existing techniques.
In Figure 10, we have compared the approximate solutions obtained by the proposed methods with the exact solution and the solutions given in [15]. In Figure 11 and Figure 12, we have presented the solutions to Problem 3 for various values of ν.
In Figure 11 and Figure 12, we can see that the solution varied continuously from fractional order to integer order.

5. Error Analysis

For the above problems, we have presented the error analysis of the numerical solutions of the ChNN and LeNN techniques. Initially, we trained the neural network using the SA algorithm and collected the appropriate values of the network parameters. After that, we substituted the NAC values into the trial solution and obtained the results for the ChNN or LeNN technique (according to the polynomial). To analyze the precision of the method within the domain [0, 1], we also substituted the trial solution into the residual in Equation (31):
E(t, ψ) = |D^ν z_tr(t, ψ) − F(t, z_tr(t, ψ), D z_tr(t, ψ))| ≈ 0,
where z(t) denotes the continuous approximate results found through the ChNN and LeNN techniques. E(t) approaches 0 as the MSE acquired by the ChNN and LeNN with the SA algorithm decreases. The solution's convergence depends upon the optimization algorithm, the number of network adaptive coefficients, and the architecture of the neural network we used.
For Problem 1, the mean square errors for the ChNN and LeNN techniques at ν = 2 were 1.39515 × 10^{−10} and 5.27835 × 10^{−10}, respectively, and the minimum error for the ChNN and LeNN was 1.3 × 10^{−5}. This shows that the precision of both strategies is inversely related to the mean square error value. When we exchanged the polynomials and used the SA algorithm for the network training, we observed that both techniques were strongly affected, as can be seen in Table 1.
Problem 2 is the fractional-order Duffing equation. For η = 2, the values of the MSE with 6 NACs obtained using the ChNN and LeNN methods were 1.61568 × 10^{−9} and 1.73644 × 10^{−8}, respectively. For the fractional values η = 1.96 and 1.9, the LeNN showed better results with a smaller MSE; however, for η = 1.86, 1.8, and 1.76, both techniques yielded similar MSEs. Problem 3 is also a fractional Lienard equation. We solved it approximately for various values of ν and obtained better results with less computational time than other existing numerical techniques [15,44].

6. Conclusions

This article has solved the Lienard and Duffing fractional- and integer-order equations using the ChNN and LeNN techniques with the SA algorithm. The proposed approach is easy to implement on nonlinear FDEs and simple to evaluate. The method's accuracy can be improved by enhancing the NAC learning methodology. The numerical results show that the proposed strategies give better results than other existing numerical techniques, such as DTM [16], the spectral collocation method [15], the Chebyshev operational matrix method [28], HAM [25], and the fractional homotopy analysis transform method [44]. This technique does not require any linearization process for the solution of nonlinear problems. As a result, we can conclude that the method is exceptional and applicable to a broad range of nonlinear fractional-order differential equations that arise in engineering and science. The proposed technique can also be improved for precision in the future by improving the neural architecture and the learning technique of the NACs. Once the network has been trained, the proposed technique allows continuous evaluation of the solution inside the domain. It can thus be considered a powerful tool for the computation of nonlinear problems.

Author Contributions

Methodology and Simulation, A.V.; Conceptualization, W.S.; Supervision P.K.Y.; All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Centre, Poland under Grant No. 2017/27/B/ST8/00351.

Data Availability Statement

Not Applicable.

Acknowledgments

We express our sincere thanks to Late Manoj Kumar, who provided the inspiration for this research.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FDE Fractional differential equation
SA Simulated annealing
ChNN Chebyshev neural network
LeNN Legendre neural network
NAC Network adaptive coefficient
COMM Chebyshev operational matrix method
JSCM Jacobi spectral collocation method
DTM Differential transform method
ECM Efficient computational method

References

  1. Talebi, S.P.; Godsill, S.J.; Mandic, D.P. Filtering Structures for α-Stable Systems. IEEE Control Syst. Lett. 2023, 7, 553–558. [Google Scholar] [CrossRef]
  2. Liénard, A. Etude des oscillations entretenues. Rev. Gen. l’Electr. 1928, 23, 901–912. [Google Scholar]
  3. Guckenheimer, J. Dynamics of the Van der Pol equation. IEEE Trans. Circuit Syst. 1980, 27, 938–989. [Google Scholar] [CrossRef]
  4. Kong, D.X. Explicit exact solutions for the Lienard equation and its applications. Phys. Lett. A 1995, 196, 301–306. [Google Scholar] [CrossRef]
  5. Feng, Z. On explicit exact solutions for the Lienard equation and its applications. Phys. Lett. A 2002, 293, 50–56. [Google Scholar] [CrossRef]
  6. Diethelm, K.; Walz, G. Numerical solution of fractional order differential equations by extrapolation. Numer. Algorithms 1997, 16, 231–253. [Google Scholar] [CrossRef]
  7. Esmaeili, S.; Shamsi, M.; Luchko, Y. Numerical solution of fractional differential equations with a collocation method based on Müntz polynomials. Comput. Math. Appl. 2011, 62, 918–929. [Google Scholar] [CrossRef]
  8. Demirci, E.; Ozalp, N. A method for solving differential equations of fractional order. J. Comput. Appl. Math. 2012, 236, 2754–2762. [Google Scholar] [CrossRef]
  9. Han, W.; Chen, Y.M.; Liu, D.Y.; Boutat, D. Numerical solution for a class of multi-order fractional differential equations with error correction and convergence analysis. Adv. Differ. Equ. 2018, 253, 253. [Google Scholar] [CrossRef]
  10. Sumelka, W.; Luczak, B.; Gajewski, T.; Voyiadjis, G.Z. Modelling of AAA in the framework of time-fractional damage hyperelasticity. Int. J. Solids Struct. 2020, 206, 30–42. [Google Scholar] [CrossRef]
  11. Sumelka, W. Fractional calculus for continuum mechanics—Anisotropic non-locality. Bull. Pol. Acad. Sci. Tech. Sci. 2016, 64, 361–372. [Google Scholar] [CrossRef]
  12. Bohner, M.; Tunç, O.; Tunç, C. Qualitative analysis of caputo fractional integro-differential equations with constant delays. Comput. Appl. Math. 2021, 40, 214. [Google Scholar] [CrossRef]
  13. Tunç, O.; Tunç, C. Solution estimates to Caputo proportional fractional derivative delay integro-differential equations. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2023, 117, 12. [Google Scholar] [CrossRef]
  14. Singh, H. An efficient computational method for non-linear fractional Lienard equation arising in oscillating circuits. In Methods of Mathematical Modelling: Fractional Differential Equations; CRC Press: London, UK; New York, NY, USA, 2019. [Google Scholar]
  15. Singh, H.; Srivastava, H.M. Numerical Investigation of the Fractional-Order Liénard and Duffing Equations Arising in Oscillating Circuit Theory. Front. Phys. 2020, 8, 120. [Google Scholar] [CrossRef]
  16. Nourazar, S.; Mirzabeigy, A. Approximate solution for nonlinear Duffing oscillator with damping effect using the modified differential transform method. Sci. Iran. B 2013, 20, 364–368. [Google Scholar]
  17. Gómez-Aguilar, J.F.; Torres, L.; Yépez-Martínez, H.; Baleanu, D.; Reyes, J.M.; Sosa, I.O. Fractional Liénard type model of a pipeline within the fractional derivative without singular kernel. Adv. Differ. Equ. 2016, 2016, 173. [Google Scholar] [CrossRef]
  18. Feng, Z. Exact solutions for the Lienard equation and its applications. Chaos Solitons Fractals 2004, 21, 343–348. [Google Scholar] [CrossRef]
  19. Matinfar, M.; Hosseinzadeh, H.; Ghanbari, M. A numerical implementation of the variational iteration method for the Lienard equation. World J. Model. Simul. 2008, 4, 205–210. [Google Scholar]
  20. Xu, G.Q. New explicit exact solutions for the Liénard equation and its applications. arXiv 2010, arXiv:1003.2921v1. [Google Scholar]
  21. Janiczek, T. Generalisation of the modulating functions method into the fractional differential equations. Bull. Pol. Acad. Sci. Tech. Sci. 2010, 58, 593–599. [Google Scholar] [CrossRef]
  22. Atabakzadeh, M.H.; Akrami, M.H.; Erjaee, G.H. Chebyshev Operational Matrix Method for Solving Multi-order Fractional Ordinary Differential Equations. Appl. Math. Model. 2013, 37, 8903–8911. [Google Scholar] [CrossRef]
  23. Kazem, S. An integral operational matrix based on Jacobi polynomials for solving fractional-order differential equations. Appl. Math. Model. 2013, 37, 1126–1136. [Google Scholar] [CrossRef]
  24. Ezz-Eldien, S.S. New quadrature approach based on operational matrix for solving a class of fractional variational problems. J. Comput. Phys. 2016, 317, 362–381. [Google Scholar] [CrossRef]
  25. Ejikeme, C.L.; Oyesanya, M.O.; Agbebaku, D.F.; Okofu, M.B. Solution to nonlinear Duffing Oscillator with fractional derivatives using Homotopy Analysis Method (HAM). Glob. J. Pure Appl. Math. 2018, 14, 1363–1388. [Google Scholar]
  26. Morales-Delgado, V.F.; Gómez-Aguilar, J.F.; Torres, L.; Escobar-Jiménez, R.F.; Taneco-Hernandez, M.A. Exact Solutions for the Liénard Type Model via Fractional Homotopy Methods. In Fractional Derivatives with Mittag-Leffler Kernel; Springer: Cham, Switzerland, 2019; Volume 194, pp. 269–291. [Google Scholar] [CrossRef]
  27. Singh, H.; Srivastava, H.M.; Kumar, D. A reliable numerical algorithm for the fractional vibration equation. Chaos Solitons Fractals 2017, 103, 131–138. [Google Scholar] [CrossRef]
  28. Singh, H. Solution of fractional Lienard equation using Chebyshev operational matrix method. Nonlinear Sci. Lett. A 2017, 8, 397–404. [Google Scholar]
  29. Kumar, S.; Gomez-Aguilar, J.F.; Lavín-Delgado, J.E.; Baleanu, D. Derivation of operational matrix of Rabotnov fractional-exponential kernel and its application to fractional Lienard equation. Alex. Eng. J. 2020, 59, 2991–2997. [Google Scholar] [CrossRef]
  30. Adel, W. A fast and efficient scheme for solving a class of non-linear Lienard’s equations. Math. Sci. 2020, 14, 167–175. [Google Scholar] [CrossRef]
  31. Kumar, M.; Yadav, N. Multilayer perceptrons and radial basis function neural network methods for the solution of differential equations: A survey. Comput. Math. Appl. 2011, 62, 3796–3811. [Google Scholar] [CrossRef]
  32. Khan, N.A.; Shaikh, A.; Sultan, F.; Ara, A. Numerical Simulation Using Artificial Neural Network on Fractional Differential Equations. In Numerical Simulation—From Brain Imaging to Turbulent Flows; IntechOpen: London, UK, 2017. [Google Scholar] [CrossRef]
  33. Chakraverty, S.; Mall, S. Artificial Neural Networks for Engineers and Scientists Solving Ordinary Differential Equations; CRC Press: London, UK; New York, NY, USA, 2017. [Google Scholar]
  34. Verma, A.; Kumar, M. Numerical solution of Lane–Emden type equations using multilayer perceptron neural network method. Int. J. Appl. Comput. Math. 2019, 5, 141. [Google Scholar] [CrossRef]
  35. Shaikh, A.; Jamal, M.A.; Hanif, F.; Khan, M.S.A.; Inayatullah, S. Neural minimisation methods for solving variable order fractional delay differential equations with simulated annealing. PLoS ONE 2019, 14, e0223476. [Google Scholar] [CrossRef] [PubMed]
  36. Chakraverty, S.; Mall, S. Single layer Chebyshev neural network model with regression-based weights for solving nonlinear ordinary differential equations. Evol. Intell. 2022, 13, 687–694. [Google Scholar] [CrossRef]
  37. Verma, A.; Kumar, M. Numerical solution of Bagley–Torvik equations using Legendre artificial neural network method. Evol. Intell. 2020, 14, 2027–2037. [Google Scholar] [CrossRef]
  38. Verma, A.; Kumar, M. Numerical solution of third-order Emden–Fowler type equations using artificial neural network technique. Eur. Phys. J. Plus 2020, 135, 751. [Google Scholar] [CrossRef]
  39. Lefik, M. Some aspects of application of artificial neural network for numerical modeling in civil engineering. Bull. Pol. Acad. Sci. Sci. 2013, 61, 39–50. [Google Scholar] [CrossRef]
  40. Malik, S.A.; Qureshi, I.M.; Amir, M.; Haq, I. Numerical Solution of Lienard Equation Using Hybrid Heuristic Computation. World Appl. Sci. J. 2013, 28, 636–643. [Google Scholar]
  41. Kumar, S.; Sahoo, A.K.; Chakraverty, S. A Comparative study of Chebyshev and Legendre Polynomial-based Neural Models for Approximating Multidimensional Poverty for an Indian State. In Polynomial Paradigms: Trends and Applications in Science and Engineering; IOP Publishing: Bristol, UK, 2023. [Google Scholar] [CrossRef]
  42. Sahoo, A.K.; Chakraverty, S. A neural network approach for the solution of Van der Pol-Mathieu-Duffing oscillator model. Evol. Intell. 2023. [Google Scholar] [CrossRef]
  43. Pao, Y.H.; Philips, S.M. The functional link net and learning optimal control. Neurocomputing 1995, 9, 149–164. [Google Scholar] [CrossRef]
  44. Kumar, D.; Agarwal, R.P.; Singh, J. A modified numerical scheme and convergence analysis for fractional model of Lienard’s equation. J. Comput. Appl. Math. 2018, 339, 405–413. [Google Scholar] [CrossRef]
Figure 1. Structure of Chebyshev neural network.
Figure 2. Structure of Legendre neural network.
Figure 3. Pictorial presentation of the algorithm.
Figure 4. Comparison of approximate solutions at ν = 2 by the ChNN and LeNN methods with those of [28], [14], [15] (Problem 1).
Figure 5. Nature of the approximate results with the LeNN method at ν = 2, 1.96, 1.9, 1.86, 1.8, 1.76 (Problem 1).
Figure 6. Nature of the approximate results with the ChNN method at ν = 2, 1.96, 1.9, 1.86, 1.8, 1.76 (Problem 1).
Figure 7. Comparison of approximate solutions at η = 2 by the ChNN and LeNN methods with those of [16], [28], [15] (Problem 2).
Figure 8. Nature of the approximate results with the LeNN method at η = 2, 1.96, 1.9, 1.86, 1.8, 1.76 (Problem 2).
Figure 9. Nature of the approximate results with the ChNN method at η = 2, 1.96, 1.9, 1.86, 1.8, 1.76 (Problem 2).
Figure 10. Comparison of approximate solutions at η = 2 by the ChNN and LeNN methods with the exact solution and [15] (Problem 3).
Figure 11. Nature of the approximate results with the LeNN method at ν = 2, 1.95, 1.85, 1.70 (Problem 3).
Figure 12. Nature of the approximate results with the ChNN method at ν = 2, 1.95, 1.80, 1.70 (Problem 3).
Table 1. The optimal values of the network adaptive coefficients (NAC) for Problem 1.

NAC    W1         W2         W3         W4         W5          W6
ChNN   −0.41174   0.379264   0.463621   0.386701   0.0344511   0.0811897
LeNN   −1.53305   1.02509    −2.11092   1.1266     −0.969482   0.15995
Table 2. Comparison of numerical results at ν = 2 for Problem 1.
t      Exact Solution   ChNN       LeNN       [28]       [14]       [15]
0.00   0.643594         0.643594   0.643594   0.643594   0.643594   0.643594
0.01   0.643556         0.643524   0.643524   0.643524   0.643524   0.643524
0.02   0.643443         0.643313   0.643313   0.643313   0.643313   0.643314
0.03   0.643255         0.642959   0.642959   0.642959   0.642959   0.642965
0.04   0.642991         0.642462   0.642462   0.642461   0.642461   0.642477
0.05   0.642653         0.641821   0.641821   0.641818   0.641818   0.641894
0.06   0.642239         0.641033   0.641033   0.641029   0.641028   0.641082
0.07   0.641751         0.640100   0.640100   0.640093   0.640092   0.640176
0.08   0.641189         0.639019   0.639019   0.639009   0.639009   0.639130
0.09   0.640553         0.637790   0.637790   0.637777   0.637776   0.637946
0.1    0.639844         0.636413   0.636413   0.636395   0.636395   0.636623
Table 3. The optimal values of the network adaptive coefficients (NAC) for Problem 2.

NAC    W1         W2         W3          W4          W5           W6
ChNN   −0.84541   2.43209    0.54458     1.04621     −0.0457733   0.165949
LeNN   −1.54176   0.143329   −0.674402   −0.433421   −0.617103    −0.35283
Table 4. Comparison of numerical results at η = 2 for Problem 2.
t      DTM [16]   ChNN        LeNN        [28]        [15]
0.00   0.100000   0.100000    0.100000    0.1000000   0.100000
0.01   0.099874   0.099874    0.099874    0.0998745   0.099874
0.02   0.099497   0.0994971   0.099497    0.0995025   0.099502
0.03   0.098871   0.0988715   0.0988715   0.0988894   0.098889
0.04   0.098002   0.0980002   0.0980001   0.0980414   0.098041
0.05   0.096886   0.0968865   0.0968864   0.0969644   0.096964
0.06   0.095534   0.0955346   0.0955344   0.0956646   0.095664
0.07   0.093949   0.093949    0.0939488   0.0941484   0.094148
0.08   0.092135   0.0921351   0.0921348   0.0924222   0.092422
0.09   0.090098   0.0900985   0.0900982   0.0904924   0.090492
0.1    0.087845   0.0878456   0.0878453   0.0883660   0.088366
Table 5. The optimal values of the network adaptive coefficients (NAC) for Problem 3.

NAC    W1         W2         W3         W4         W5          W6
ChNN   0.815413   1.23726    2.53108    1.03956    0.498807    0.307813
LeNN   −1.04706   −1.33436   0.254833   −1.04073   −0.113977   −0.295584
Table 6. Comparison of numerical results at ν = 2 for Problem 3.
t      Exact       ChNN        LeNN        [15]        [44]
0.00   0.7071067   0.7071067   0.7071067   0.7071067   0.7071067
0.01   0.7106334   0.710811    0.710722    0.7106155   –
0.02   0.7141419   0.71432     0.714231    0.7140699   0.7141419094
0.03   0.7176318   0.717811    0.717722    0.7174686   –
0.04   0.7211028   0.721283    0.721193    0.7208102   0.7211028634
0.05   0.7245544   0.724736    0.724645    0.7240935   –
0.06   0.7279862   0.728168    0.728077    0.7273171   0.7279862988
0.07   0.7313979   0.731581    0.731489    0.7304797   –
0.08   0.7347890   0.734973    0.734881    0.7335800   0.7347890065
0.09   0.7381591   0.738344    0.738251    0.7366167   –
0.1    0.7415079   0.741693    0.741601    0.7395886   0.7415079207
Table 7. Absolute error comparison from various numerical techniques for Problem 3 at ν = 2.

t      Abs. Error (ChNN)   Abs. Error (LeNN)   Abs. Error [15]   Abs. Error [44]
0.00   0                   0                   0                 0
0.01   1.776 × 10^−4       8.85 × 10^−5        1.79 × 10^−5      –
0.02   1.780 × 10^−4       8.90 × 10^−5        7.19 × 10^−5      1.8669 × 10^−6
0.03   1.790 × 10^−4       9.02 × 10^−5        1.63 × 10^−4      –
0.04   1.801 × 10^−4       9.01 × 10^−5        2.93 × 10^−4      6.2706 × 10^−6
0.05   1.810 × 10^−4       9.06 × 10^−5        4.61 × 10^−4      –
0.06   1.818 × 10^−4       9.08 × 10^−5        6.69 × 10^−4      4.94502 × 10^−5
0.07   1.831 × 10^−4       9.12 × 10^−5        9.18 × 10^−4      –
0.08   1.839 × 10^−4       9.20 × 10^−5        1.21 × 10^−3      1.161249 × 10^−4
0.09   1.849 × 10^−4       9.19 × 10^−5        1.54 × 10^−3      –
0.1    1.851 × 10^−4       9.31 × 10^−5        1.91 × 10^−3      2.24737 × 10^−4
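The entries of Table 7 are simply the pointwise absolute differences between the exact solution and each approximation in Table 6. As a quick sanity check (this is not the authors' code; the lists below are transcribed from the first five rows of Table 6), the ChNN column can be reproduced as follows:

```python
# Sanity check: Table 7's ChNN absolute errors are |exact - ChNN|,
# using the exact and ChNN values printed in Table 6 for t = 0.00..0.04.
exact = [0.7071067, 0.7106334, 0.7141419, 0.7176318, 0.7211028]
chnn = [0.7071067, 0.710811, 0.71432, 0.717811, 0.721283]

abs_err = [abs(e - a) for e, a in zip(exact, chnn)]
print(abs_err)  # the entry at t = 0.01 is about 1.776e-4, matching Table 7
```

Small deviations at other points (e.g. 1.781 × 10^−4 versus the printed 1.780 × 10^−4 at t = 0.02) are rounding effects of the truncated values shown in Table 6.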
