Article

Transforming Controlled Duffing Oscillator to Optimization Schemes Using New Symmetry-Shifted G(t)-Polynomials

Fatima Hussain * and Suha Shihab
Department of Applied Sciences, University of Technology, Baghdad 10066, Iraq
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(7), 915; https://doi.org/10.3390/sym16070915
Submission received: 20 June 2024 / Revised: 13 July 2024 / Accepted: 14 July 2024 / Published: 17 July 2024
(This article belongs to the Section Mathematics)

Abstract

This work introduces and studies the important properties of a special class of new symmetry-shifted $G(t)$-polynomials (NSSG). Such polynomials have a symmetry property over the interval $[-2, 0]$, with $G_n^{-2,0}(0) = (-1)^n G_n^{-2,0}(-2)$. An explicit formulation of an NSSG operational matrix was constructed, which served as a powerful tool for obtaining the desired numerical solutions. Then, a modified direct computational algorithm was suggested for solving the controlled Duffing oscillator problem. The idea behind the proposed algorithm is based on using symmetry basis functions, which are important and have real-world applications in physics and engineering. The original controlled Duffing oscillator problem was transformed into a nonlinear quadratic programming problem. Finally, numerical experiments are presented to validate our theoretical results. The numerical results emphasize that the modified proposed approach reaches the desired value of the performance index with few computations and with the minimum order of the NSSG basis functions when compared with the other existing method, which is an important factor to consider when choosing the appropriate method in other mathematical and engineering applications.

1. Introduction

Optimal control is a mathematically challenging and practically significant field. There have been many successful practical applications in a wide range of disciplines, including engineering [1,2,3,4], physics [5,6] and fluid dynamics [7]. Recently, the controlled Duffing oscillator problem, which is known to describe many important oscillating phenomena in nonlinear engineering systems, has received considerable attention. The classical Duffing equation was introduced to study electronics [8], signal processing [9], fuzzy modeling and the adaptive control of uncertain chaotic systems [10,11]. Since most controlled Duffing oscillator problems cannot be solved explicitly, it is often necessary to resort to numerical techniques, which consist of appropriate combinations of numerical approximation and optimization techniques.
The study of numerical methods has provided an attractive field for researchers in the mathematical sciences. This field has seen the appearance of different numerical computational methods and efficient algorithms for solving the controlled Duffing oscillator problem, each with its own advantages and disadvantages. For example, a direct method was presented in [12] to treat the controlled Duffing oscillator numerically. This method requires that the state and control variables, the dynamic constraints, the boundary conditions and the cost functional be expanded in Chebyshev series with unknown coefficients. The unknowns that arise from the Chebyshev series expansion of the cost functional have to be determined at each iteration step. As a result, a large, complicated nonlinear system of equations must be solved to obtain a suitable order of accuracy. A cell-averaging Chebyshev spectral method was presented in [13]; it is based on constructing an interpolation polynomial of degree n using Chebyshev nodes to approximate both the state and control vectors, and the integral and differential expressions that arise from the system dynamics and the cost function are transformed into a nonlinear programming problem. The work presented in [14] contains a pseudospectral approximate numerical solution for Duffing oscillators that uses a differentiation matrix at Chebyshev points to handle the boundary conditions over the interval [−1, 1]. The properties of hybrid functions [15], which consist of block-pulse functions plus Legendre polynomials, are studied in [16] for the numerical treatment of Duffing oscillators; the operational matrix of integration together with the hybrid functions is utilized to reduce the solution of a controlled Duffing oscillator to the solution of algebraic equations. Other numerical treatments for solving Duffing oscillators include the interpolating scaling functions method [17], state parameterization based on a linear combination of Chebyshev polynomials [18] and the Chebyshev spectral method [19] applied to the control and state variables.
The basis functions mentioned above may be polynomials or wavelets. Some examples of these wavelets and polynomials include the following: wavelet neural networks [20], shifted wavelets [21], Boubaker wavelets [22], Boubaker polynomials [23], power polynomials [24] and radial basis functions [25,26].
In this paper, we propose new symmetry-shifted $G(t)$-polynomials (NSSG) as basis functions. These novel polynomials are used to solve the controlled Duffing oscillator problem using a direct state parameterization technique. The idea of this method consists of reducing the controlled Duffing oscillator problem to an optimization problem by expanding the second derivative of the state vector in NSSG polynomials with unknown coefficients, with the aid of an operational matrix of derivatives. The operational matrix of the product is introduced, and this matrix, together with the operational matrix of derivatives, is then used to transform the original problem into an optimization one.
The paper is organized as follows: In Section 2, the definition of the new symmetry-shifted $G(t)$-polynomial over the interval $[a, b]$ is presented together with some important properties. The main properties of the NSSG polynomials are established in Section 3. The controlled Duffing oscillator problem and the proposed algorithm for solving it, including its conversion into a quadratic programming problem, are presented in Section 4. In Section 5, numerical results and a comparison with an existing method from the literature are given to demonstrate the efficiency and the accuracy of the proposed numerical scheme. Finally, conclusions are drawn in Section 6.

2. New Symmetry-Shifted $G(t)$-Polynomial over $[a, b]$

The definition of the new symmetry-shifted $G(t)$-polynomial over $[a, b]$, denoted by $G_n^{a,b}(t)$, is stated below.
Definition 1. 
Let $G(t)$ be a linear polynomial $G(t) = pt + q$. The symmetry-shifted $G(t)$-polynomial over the interval $[a, b]$ is defined by the recurrence relation
$$G_n^{a,b}(t) = G(t)\,G_{n-1}^{a,b}(t) - G_{n-2}^{a,b}(t), \qquad n \ge 2,$$
with the initial conditions
$$G_0^{a,b}(t) = 2, \qquad G_1^{a,b}(t) = G(t).$$
It is mentioned here that, from $G_n^{a,b}(t)$, for special values of $a$, $b$, $p$ and $q$, one can obtain some of the well-known polynomials. Certain cases of these values are reported in Table 1.
Note that the NSSG polynomials with the values $a = -1$, $b = 1$, $p = 2$, $q = 0$ and $G(t) = 2t$ lead to the well-known Pell–Lucas polynomials [27,28]:
$$G_n^{-1,1}(t) = 2t\,G_{n-1}^{-1,1}(t) - G_{n-2}^{-1,1}(t), \qquad n \ge 2,$$
with the initial conditions
$$G_0^{-1,1}(t) = 2, \qquad G_1^{-1,1}(t) = 2t.$$
The first several $G_n^{a,b}(t)$ are listed below:
$$\begin{aligned}
G_0^{a,b}(t) &= 2, \qquad G_1^{a,b}(t) = \frac{2(2t-a-b)}{b-a}, \qquad G_2^{a,b}(t) = \left(\frac{2(2t-a-b)}{b-a}\right)^2 - 2,\\
G_3^{a,b}(t) &= \left(\frac{2(2t-a-b)}{b-a}\right)^3 - \frac{6(2t-a-b)}{b-a}, \qquad G_4^{a,b}(t) = \left(\frac{2(2t-a-b)}{b-a}\right)^4 - 4\left(\frac{2(2t-a-b)}{b-a}\right)^2 + 2,\\
G_5^{a,b}(t) &= \left(\frac{2(2t-a-b)}{b-a}\right)^5 - 5\left(\frac{2(2t-a-b)}{b-a}\right)^3 + \frac{10(2t-a-b)}{b-a}.
\end{aligned}$$
The leading coefficient of $G_n^{a,b}(t)$ is equal to $\frac{2^{n+1}}{b-a}$ for $n = 1, 2, 3, 4$, as can be seen from the recursive formula shown above.
Moreover, the explicit analytical formula for $G_n^{a,b}(t)$ can be obtained through the following expression:
$$G_n^{a,b}(t) = n\sum_{i=0}^{\lfloor n/2 \rfloor} \frac{(-1)^i}{n-i}\binom{n-i}{i}\,\bigl(G(t)\bigr)^{n-2i}, \qquad n = 1, 2, 3, \ldots$$
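The recurrence and the explicit formula above can be checked symbolically. The following minimal sketch (not part of the paper; the helper names are ours) builds $G_n^{a,b}(t)$ with sympy and compares the two definitions on the interval $[-2, 0]$:

```python
# Minimal sketch (ours): build G_n^{a,b}(t) from the three-term recurrence and
# compare it with the explicit formula stated above.
import sympy as sp

t = sp.symbols('t')

def nssg(n, a, b):
    """G_n^{a,b}(t) via G_n = G(t) G_{n-1} - G_{n-2}, G_0 = 2, G_1 = G(t)."""
    G = 2*(2*t - a - b)/sp.Integer(b - a)        # G(t) = 2(2t - a - b)/(b - a)
    prev, curr = sp.Integer(2), sp.expand(G)
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, curr = curr, sp.expand(G*curr - prev)
    return curr

def nssg_explicit(n, a, b):
    """Explicit formula: n * sum_i (-1)^i/(n-i) * C(n-i, i) * G(t)^(n-2i)."""
    G = 2*(2*t - a - b)/sp.Integer(b - a)
    return sp.expand(n*sum(sp.Rational((-1)**i, n - i)*sp.binomial(n - i, i)*G**(n - 2*i)
                           for i in range(n//2 + 1)))

for n in range(1, 7):
    assert sp.expand(nssg(n, -2, 0) - nssg_explicit(n, -2, 0)) == 0

print(nssg(3, -2, 0))   # expected: 8*t**3 + 24*t**2 + 18*t + 2
```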
The polynomials $G_n^{a,b}(t)$ have orthogonality properties with respect to the following inner product:
$$\left\langle G_n^{a,b}(t),\, G_m^{a,b}(t)\right\rangle = \int_a^b G_n^{a,b}(t)\,G_m^{a,b}(t)\,\omega(t)\,dt,$$
where $\omega(t)$ is the weight function.
Note that the general matrix form of $G_n^{a,b}(t)$ can be written as below:
$$\mathbf{G}^{a,b}(t) = H\,T(t)^T,$$
where $\mathbf{G}^{a,b}(t) = \left[G_0^{a,b}(t)\ \ G_1^{a,b}(t)\ \ G_2^{a,b}(t)\ \cdots\ G_n^{a,b}(t)\right]^T$, $T(t) = \left[1\ \ t\ \ t^2\ \cdots\ t^n\right]$ and $H$ is the triangular matrix whose entries $h_{ij}$ can be evaluated as below:
$$h_{ij} = \begin{cases} \dfrac{4}{b-a}, & j = 0,\\[4pt] \dfrac{4}{b-a}\,h_{i-1,j-1} + h_{i-1,j} - 2\,h_{i-2,j}, & i \ge j,\\[4pt] 0, & \text{otherwise}. \end{cases}$$
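The triangular matrix $H$ can also be obtained without the entry recurrence, simply by expanding each $G_i^{a,b}(t)$ in powers of $t$. The sketch below (our own construction, not the paper's algorithm) does this with sympy:

```python
# Sketch (ours): recover the triangular coefficient matrix H, with
# G^{a,b}(t) = H [1, t, ..., t^n]^T, by expanding each polynomial with sympy.
import sympy as sp

t = sp.symbols('t')

def nssg(n, a, b):
    G = 2*(2*t - a - b)/sp.Integer(b - a)
    prev, curr = sp.Integer(2), sp.expand(G)
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, curr = curr, sp.expand(G*curr - prev)
    return curr

def coefficient_matrix(n, a, b):
    H = sp.zeros(n + 1, n + 1)
    for i in range(n + 1):
        coeffs = sp.Poly(nssg(i, a, b), t).all_coeffs()[::-1]   # low degree first
        for j, c in enumerate(coeffs):
            H[i, j] = c
    return H

print(coefficient_matrix(3, -2, 0))
# Matrix([[2, 0, 0, 0], [2, 2, 0, 0], [2, 8, 4, 0], [2, 18, 24, 8]])
```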

3. The Properties of NSSG Polynomials

3.1. The Convergence Analysis

Theorem 1. 
Let $u(t)$ be a function that is continuous on $[a, b]$ and satisfies $|u(t)| < M$, where $M$ is a constant, and let it be expanded as
$$u(t) = \sum_{k=0}^{\infty} a_k\,G_k^{a,b}(t),$$
where
$$a_k = \int_a^b u(t)\,G_k^{a,b}(t)\,dt.$$
Then the partial sums
$$u_n(t) = \sum_{k=0}^{n} a_k\,G_k^{a,b}(t)$$
converge to the function $u(t)$.
Proof. 
Let $u_n(t) = \sum_{k=0}^{n} a_k\,G_k^{a,b}(t)$, where $a_k = \int_a^b u(t)\,G_k^{a,b}(t)\,dt$; then
$$\int_a^b u(t)\,u_n(t)\,dt = \int_a^b u(t)\sum_{k=0}^{n} a_k\,G_k^{a,b}(t)\,dt,$$
where $u(t)$ is defined in Equation (6). From Equation (7), one can obtain
$$\int_a^b u(t)\,u_n(t)\,dt = \sum_{k=0}^{n} a_k \int_a^b u(t)\,G_k^{a,b}(t)\,dt = \sum_{k=0}^{n} a_k^2.$$
This means that the sequence $u_n(t)$ converges (since it is a Cauchy sequence in the complete Hilbert space $L^2[a,b]$).
In order to prove that the series in Equation (6) converges to $u(t)$, let
$$u_m(t) = \sum_{k=0}^{m} a_k\,G_k^{a,b}(t) \quad \text{for } m < n.$$
Hence,
$$u_n(t) - u_m(t) = \sum_{k=0}^{n} a_k\,G_k^{a,b}(t) - \sum_{k=0}^{m} a_k\,G_k^{a,b}(t), \qquad n > m.$$
Equation (10) leads to $u_n(t) - u_m(t) = \sum_{k=m+1}^{n} a_k\,G_k^{a,b}(t)$, $n > m$. As a result,
$$\left\| u_n(t) - u_m(t) \right\|^2 = \sum_{k=m+1}^{n} a_k^2, \qquad n > m.$$
This shows that $\sum_{k=0}^{\infty} a_k^2$ is convergent.
From Equation (12), one can conclude that $\left\| u_n(t) - u_m(t) \right\|^2 \to 0$ as $n, m \to \infty$ and $u_n(t) \to u(t)$, which is the required result. □
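As a numerical illustration of this convergence result, one can fit a smooth test function in the NSSG basis and observe the error decay as the order grows. The sketch below (ours) uses discrete least squares on a grid instead of the weighted inner-product coefficients of the theorem, which is sufficient for a qualitative check:

```python
# Numerical illustration (ours) of the convergence of NSSG expansions: fit a
# smooth test function on [-2, 0] by discrete least squares and watch the
# maximum error decay as the order n grows.
import numpy as np

def nssg_values(n, a, b, t):
    """Evaluate G_0^{a,b},...,G_n^{a,b} at the points t via the recurrence."""
    G = 2.0*(2.0*t - a - b)/(b - a)
    vals = [np.full_like(t, 2.0), G.copy()]
    for _ in range(2, n + 1):
        vals.append(G*vals[-1] - vals[-2])
    return np.column_stack(vals[:n + 1])

a, b = -2.0, 0.0
t = np.linspace(a, b, 400)
u = np.sin(t)                                    # smooth test function

for n in (2, 4, 6, 8):
    B = nssg_values(n, a, b, t)                  # 400 x (n + 1) design matrix
    coeffs, *_ = np.linalg.lstsq(B, u, rcond=None)
    print(f"n = {n}:  max |u - u_n| = {np.max(np.abs(B @ coeffs - u)):.2e}")
```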

3.2. The Operational NSSG Matrix of Products

The product of two NSSGs can be expressed in the following theorem.
Theorem 2. 
The product of two NSSG polynomials satisfies the following relationship:
$$G_m^{a,b}(t)\,G_n^{a,b}(t) = G_{m+n}^{a,b}(t) + G_{m-n}^{a,b}(t), \qquad m \ge n.$$
Proof. 
This theorem is proved by mathematical induction on $m$. Note that $G_0^{a,b}(t) = 2$; thus, it follows that
$$G_0^{a,b}(t)\,G_n^{a,b}(t) = 2\,G_n^{a,b}(t).$$
Therefore, Theorem 2 is valid when $m = 0$. Let Equation (13) be valid for all integers up to $m - 1$, that is,
$$G_{m-1}^{a,b}(t)\,G_n^{a,b}(t) = G_{m+n-1}^{a,b}(t) + G_{m-n-1}^{a,b}(t).$$
Multiplying both sides of Equation (15) by $\frac{2(2t-a-b)}{b-a}$ yields
$$\frac{2(2t-a-b)}{b-a}\,G_{m-1}^{a,b}(t)\,G_n^{a,b}(t) = \frac{2(2t-a-b)}{b-a}\,G_{m+n-1}^{a,b}(t) + \frac{2(2t-a-b)}{b-a}\,G_{m-n-1}^{a,b}(t).$$
Then, from Equation (15) and with the use of Equation (1), one can obtain
$$\left[G_m^{a,b}(t) + G_{m-2}^{a,b}(t)\right]G_n^{a,b}(t) = G_{m+n}^{a,b}(t) + G_{m+n-2}^{a,b}(t) + G_{m-n}^{a,b}(t) + G_{m-n-2}^{a,b}(t)$$
and
$$G_m^{a,b}(t)\,G_n^{a,b}(t) = G_{m+n}^{a,b}(t) + G_{m+n-2}^{a,b}(t) + G_{m-n}^{a,b}(t) + G_{m-n-2}^{a,b}(t) - G_{m-2}^{a,b}(t)\,G_n^{a,b}(t).$$
Using the induction hypothesis for $G_{m-2}^{a,b}(t)\,G_n^{a,b}(t)$ yields
$$G_m^{a,b}(t)\,G_n^{a,b}(t) = G_{m+n}^{a,b}(t) + G_{m+n-2}^{a,b}(t) + G_{m-n}^{a,b}(t) + G_{m-n-2}^{a,b}(t) - G_{m+n-2}^{a,b}(t) - G_{m-n-2}^{a,b}(t),$$
i.e., $G_m^{a,b}(t)\,G_n^{a,b}(t) = G_{m+n}^{a,b}(t) + G_{m-n}^{a,b}(t)$. This is the required result. □
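Theorem 2 is easy to verify symbolically for small orders. The following sketch (ours) checks the product identity on the interval $[-2, 0]$ for $m, n \le 5$:

```python
# Symbolic spot-check (ours) of Theorem 2 on [-2, 0]:
# G_m * G_n should equal G_{m+n} + G_{m-n} for m >= n.
import sympy as sp

t = sp.symbols('t')

def nssg(k, a, b):
    G = 2*(2*t - a - b)/sp.Integer(b - a)
    prev, curr = sp.Integer(2), sp.expand(G)
    if k == 0:
        return prev
    for _ in range(k - 1):
        prev, curr = curr, sp.expand(G*curr - prev)
    return curr

a, b = -2, 0
for m in range(1, 6):
    for n in range(0, m + 1):
        diff = nssg(m, a, b)*nssg(n, a, b) - nssg(m + n, a, b) - nssg(m - n, a, b)
        assert sp.expand(diff) == 0
print("product identity verified for all m, n <= 5 with m >= n")
```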

3.3. The Operational NSSG Matrix of Derivatives

In this part, the first derivative $\dot{G}_n^{a,b}(t)$ is expressed in terms of the NSSG polynomials themselves. Based on that, the first-derivative operational matrix of the NSSG polynomials will be constructed.
Theorem 3. 
For $n \ge 1$, the following relation can be employed to relate the original NSSG polynomials with their first derivative:
$$\dot{G}_n^{a,b}(t) = \frac{4n}{b-a}\sum_{i=1}^{\lfloor n/2 \rfloor} G_{n-2i+1}^{a,b}(t) + \frac{2n}{b-a}\,\delta_{a,b},$$
where
$$\delta_{a,b} = \begin{cases} G_0^{a,b}(t), & \text{if } n \text{ is odd},\\ 0, & \text{if } n \text{ is even}. \end{cases}$$
Proof. 
Consider the case of odd $n$, where the induction proceeds on $n$. For $n = 1$, the left-hand side is equal to $\frac{2}{b-a}G_0^{a,b}(t)$, which agrees with Equation (18). Assume that the relation in Equation (18) holds for all orders smaller than $n$, with $n$ odd; the validity for $n$ will now be proved.
If one differentiates Equation (1), the following is obtained:
$$\dot{G}_n^{a,b}(t) = G(t)\,\dot{G}_{n-1}^{a,b}(t) + \dot{G}(t)\,G_{n-1}^{a,b}(t) - \dot{G}_{n-2}^{a,b}(t),$$
where $G(t) = G_1^{a,b}(t) = \frac{2(2t-a-b)}{b-a}$ and $\dot{G}(t) = \frac{4}{b-a}$.
Applying the induction hypothesis to $\dot{G}_{n-1}^{a,b}(t)$ (even order) and $\dot{G}_{n-2}^{a,b}(t)$ (odd order) and substituting into Equation (20) gives
$$\dot{G}_n^{a,b}(t) = \frac{4(n-1)}{b-a}\sum_{i=1}^{(n-1)/2} G(t)\,G_{n-2i}^{a,b}(t) + \frac{4}{b-a}\,G_{n-1}^{a,b}(t) - \frac{4(n-2)}{b-a}\sum_{i=1}^{(n-3)/2} G_{n-2i-1}^{a,b}(t) - \frac{2(n-2)}{b-a}\,G_0^{a,b}(t).$$
Now, using the identity
$$G(t)\,G_{n-2i}^{a,b}(t) = G_{n-2i+1}^{a,b}(t) + G_{n-2i-1}^{a,b}(t),$$
which follows from Equation (13) in Theorem 2, one obtains
$$\dot{G}_n^{a,b}(t) = \frac{4(n-1)}{b-a}\sum_{i=1}^{(n-1)/2}\left[G_{n-2i+1}^{a,b}(t) + G_{n-2i-1}^{a,b}(t)\right] + \frac{4}{b-a}\,G_{n-1}^{a,b}(t) - \frac{4(n-2)}{b-a}\sum_{i=1}^{(n-3)/2} G_{n-2i-1}^{a,b}(t) - \frac{2(n-2)}{b-a}\,G_0^{a,b}(t).$$
After performing some manipulation (collecting equal-index terms), Equation (22) can be written as below:
$$\dot{G}_n^{a,b}(t) = \frac{4n}{b-a}\sum_{i=1}^{\lfloor n/2 \rfloor} G_{n-2i+1}^{a,b}(t) + \frac{2n}{b-a}\,G_0^{a,b}(t).$$
This is equivalent to the result in Equation (18). In a similar way, one can prove the case of even order.
On the other hand, the derivative of $\mathbf{G}^{a,b}(t)$ can be written in matrix form, as illustrated in the following result. □
Corollary 1. 
Let $\mathbf{G}^{a,b}(t)$ be the NSSG polynomial vector defined as below:
$$\mathbf{G}^{a,b}(t) = \left[G_0^{a,b}(t)\ \ G_1^{a,b}(t)\ \ G_2^{a,b}(t)\ \cdots\ G_n^{a,b}(t)\right]^T.$$
Then, for $n \ge 1$, the derivative of $\mathbf{G}^{a,b}(t)$ can be explicitly constructed by
$$\dot{\mathbf{G}}^{a,b}(t) = D\,\mathbf{G}^{a,b}(t),$$
where $D = [d_{ij}]$ is the $(n+1)\times(n+1)$ lower-triangular NSSG polynomial operational matrix of derivatives. For odd $n$, the matrix $D$ is obtained as below:
$$D = \frac{1}{b-a}
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0\\
2 & 0 & 0 & 0 & 0 & 0 & \cdots & 0\\
0 & 8 & 0 & 0 & 0 & 0 & \cdots & 0\\
6 & 0 & 12 & 0 & 0 & 0 & \cdots & 0\\
0 & 16 & 0 & 16 & 0 & 0 & \cdots & 0\\
10 & 0 & 20 & 0 & 20 & 0 & \cdots & 0\\
0 & 24 & 0 & 24 & 0 & 24 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
2n & 0 & 4n & 0 & 4n & 0 & \cdots & 0
\end{pmatrix}.$$
Meanwhile, for even $n$, the last row of the matrix $D$ is constructed as below:
$$\frac{1}{b-a}\begin{pmatrix} 0 & 4n & 0 & 4n & \cdots & 4n & 0 \end{pmatrix}.$$
Moreover, the elements of the matrix $D$ can be obtained explicitly in the following form:
$$d_{ij} = \begin{cases} \dfrac{4i}{b-a}\,\xi_j, & i > j \ \text{and}\ i + j \ \text{odd},\\[4pt] 0, & \text{otherwise}, \end{cases}
\qquad \text{where } \xi_j = \begin{cases} 1/2, & j = 0,\\ 1, & \text{otherwise}. \end{cases}$$
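The operational matrix $D$ can be assembled directly from the element formula of Corollary 1 and checked against symbolic differentiation. The sketch below (ours; the function names are illustrative) performs this check on $[-2, 0]$ with $n = 6$:

```python
# Sketch (ours): assemble the operational matrix D from the element formula in
# Corollary 1 and verify dG/dt = D G(t) against symbolic differentiation.
import sympy as sp

t = sp.symbols('t')

def nssg(k, a, b):
    G = 2*(2*t - a - b)/sp.Integer(b - a)
    prev, curr = sp.Integer(2), sp.expand(G)
    if k == 0:
        return prev
    for _ in range(k - 1):
        prev, curr = curr, sp.expand(G*curr - prev)
    return curr

def deriv_matrix(n, a, b):
    D = sp.zeros(n + 1, n + 1)
    for i in range(n + 1):
        for j in range(i):                       # strictly lower triangular part
            if (i + j) % 2 == 1:                 # only entries with i + j odd
                weight = sp.Rational(1, 2) if j == 0 else sp.Integer(1)
                D[i, j] = sp.Rational(4*i, b - a)*weight
    return D

a, b, n = -2, 0, 6
G_vec = sp.Matrix([nssg(k, a, b) for k in range(n + 1)])
D = deriv_matrix(n, a, b)
assert (sp.diff(G_vec, t) - D*G_vec).expand() == sp.zeros(n + 1, 1)
print(D)
```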

3.4. The Relation between the NSSG Polynomials over $[-2, 0]$ and the Power Function $t^n$

The first seven NSSG polynomials over the interval $[-2, 0]$ are given by
$$\begin{aligned}
G_0^{-2,0}(t) &= 2,\\
G_1^{-2,0}(t) &= 2t + 2,\\
G_2^{-2,0}(t) &= 4t^2 + 8t + 2,\\
G_3^{-2,0}(t) &= 8t^3 + 24t^2 + 18t + 2,\\
G_4^{-2,0}(t) &= 16t^4 + 64t^3 + 80t^2 + 32t + 2,\\
G_5^{-2,0}(t) &= 32t^5 + 160t^4 + 280t^3 + 200t^2 + 50t + 2,\\
G_6^{-2,0}(t) &= 64t^6 + 384t^5 + 864t^4 + 896t^3 + 420t^2 + 72t + 2,
\end{aligned}$$
which can be rewritten in the following form:
$$\begin{aligned}
1 &= \tfrac{1}{2}G_0^{-2,0}(t),\\
t &= \tfrac{1}{2}\left(G_1^{-2,0}(t) - G_0^{-2,0}(t)\right),\\
t^2 &= \tfrac{1}{4}\left(G_2^{-2,0}(t) - 4G_1^{-2,0}(t) + 3G_0^{-2,0}(t)\right),\\
t^3 &= \tfrac{1}{8}\left(G_3^{-2,0}(t) - 6G_2^{-2,0}(t) + 15G_1^{-2,0}(t) - 10G_0^{-2,0}(t)\right),\\
t^4 &= \tfrac{1}{16}\left(G_4^{-2,0}(t) - 8G_3^{-2,0}(t) + 28G_2^{-2,0}(t) - 56G_1^{-2,0}(t) + 35G_0^{-2,0}(t)\right),\\
t^5 &= \tfrac{1}{32}\left(G_5^{-2,0}(t) - 10G_4^{-2,0}(t) + 45G_3^{-2,0}(t) - 120G_2^{-2,0}(t) + 210G_1^{-2,0}(t) - 126G_0^{-2,0}(t)\right),\\
t^6 &= \tfrac{1}{64}\left(G_6^{-2,0}(t) - 12G_5^{-2,0}(t) + 66G_4^{-2,0}(t) - 220G_3^{-2,0}(t) + 495G_2^{-2,0}(t) - 792G_1^{-2,0}(t) + 462G_0^{-2,0}(t)\right).
\end{aligned}$$
This means that the powers of t can be expressed in terms of the NSSG polynomials of degrees up to n . The general explicit formula is given by the following result.
Theorem 4. 
For every integer $n \ge 0$, the power $t^n$ can be expanded in a unique way as a linear combination of $G_k^{-2,0}(t)$ as follows:
$$t^n = \frac{1}{2^n}\left[\sum_{k=0}^{n-1}(-1)^k\binom{2n}{k}\,G_{n-k}^{-2,0}(t) + \frac{(-1)^n (2n)!}{2(n!)^2}\,G_0^{-2,0}(t)\right].$$
Proof. 
Mathematical induction on $n$ is used to prove Equation (23). For $n = 0$, $t^0 = 1 = \tfrac{1}{2}G_0^{-2,0}(t)$, which is Equation (23) with an empty sum; therefore, Equation (23) is true for $n = 0$. Let Equation (23) be valid for $n$, where $n \ge 0$. This means that
$$t^n = \frac{1}{2^n}\left[\sum_{k=0}^{n-1}(-1)^k\binom{2n}{k}\,G_{n-k}^{-2,0}(t) + \frac{(-1)^n (2n)!}{2(n!)^2}\,G_0^{-2,0}(t)\right].$$
Multiplying both sides of Equation (24) by $2t + 2 = G_1^{-2,0}(t)$ yields
$$2t^{n+1} + 2t^n = \frac{1}{2^n}\left[\sum_{k=0}^{n-1}(-1)^k\binom{2n}{k}\,G_1^{-2,0}(t)\,G_{n-k}^{-2,0}(t) + \frac{(-1)^n (2n)!}{2(n!)^2}\,G_1^{-2,0}(t)\,G_0^{-2,0}(t)\right].$$
Using Equation (24) to replace $2t^n$, together with Theorem 2 in the form $G_1^{-2,0}(t)\,G_{n-k}^{-2,0}(t) = G_{n-k+1}^{-2,0}(t) + G_{n-k-1}^{-2,0}(t)$, yields an expansion of $t^{n+1}$ in the polynomials $G_{n+1-k}^{-2,0}(t)$. Hence, the required result can be obtained after collecting coefficients and using the following identities:
$$1 + \binom{n}{1} - \binom{n}{0} = n = \binom{n}{1}\binom{n+1}{0}, \qquad \binom{n}{i+1} + \binom{n}{i} = \binom{n+1}{i+1}, \quad i = 0, 1, \ldots\ \square$$
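Theorem 4 can likewise be spot-checked symbolically. The sketch below (ours) expands the stated combination of $G_k^{-2,0}(t)$ and confirms that it reproduces $t^n$ for small $n$:

```python
# Symbolic spot-check (ours) of Theorem 4: expand the stated combination of
# G_k^{-2,0}(t) and confirm that it reproduces the monomial t^n.
import sympy as sp

t = sp.symbols('t')

def nssg(k):                                     # G_k^{-2,0}(t)
    G = 2*t + 2
    prev, curr = sp.Integer(2), G
    if k == 0:
        return prev
    for _ in range(k - 1):
        prev, curr = curr, sp.expand(G*curr - prev)
    return curr

def power_as_nssg(n):
    s = sum((-1)**k*sp.binomial(2*n, k)*nssg(n - k) for k in range(n))
    s += (-1)**n*sp.factorial(2*n)/(2*sp.factorial(n)**2)*nssg(0)
    return sp.expand(s/2**n)

for n in range(7):
    assert sp.expand(power_as_nssg(n) - t**n) == 0
print("Theorem 4 verified for n = 0, ..., 6")
```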

4. The NSSG Technique for Solving the Controlled Duffing Oscillator Problem

Consider the following controlled Duffing oscillator problem of a linear oscillator, as shown in [6]:
$$J = 0.5\int_{-T}^{0}\left[u(t)\right]^2\,dt, \qquad -T \le t \le 0, \ T \text{ known},$$
which is subject to
$$\ddot{x}(t) + w\,x(t) = u(t),$$
together with the conditions
$$x(-T) = \alpha, \qquad \dot{x}(-T) = \rho, \qquad x(0) = 0, \qquad \dot{x}(0) = 0.$$
The problem is to find the control $u(t)$ that minimizes (25) subject to (26) and (27). The exact solution of the controlled linear oscillator can be obtained by applying Pontryagin's maximum principle:
$$\begin{aligned}
x(t) &= \frac{1}{2w}\left[A\,wt\sin wt + B\left(\sin wt - wt\cos wt\right)\right],\\
u(t) &= A\cos wt + B\sin wt,\\
J &= \frac{1}{8w}\left[2wT\left(A^2 + B^2\right) + \left(A^2 - B^2\right)\sin 2wT - 4AB\sin^2 wT\right],
\end{aligned}$$
where
$$\begin{aligned}
A &= \frac{1}{C}\left[2w\left(\alpha\,w^2T\sin wT - \rho\,(wT\cos wT - \sin wT)\right)\right],\\
B &= \frac{1}{C}\left[2w^2\left(\rho\,T\sin wT - \alpha\,(\sin wT + wT\cos wT)\right)\right],\\
C &= w^2T^2 - \sin^2 wT.
\end{aligned}$$
Consider an approximation to the state variable $x(t)$ using NSSG polynomials of order $n$ as below:
$$x_n(t) = \sum_{k=0}^{n} a_k\,G_k^{-T,0}(t).$$
The approximate solution in Equation (28) must satisfy the boundary conditions in Equation (27):
$$x_n(-T) = \sum_{k=0}^{n} a_k\,G_k^{-T,0}(-T) = \alpha,$$
$$\dot{x}_n(-T) = \sum_{k=0}^{n} a_k\,\dot{G}_k^{-T,0}(-T) = \rho,$$
$$x_n(0) = \sum_{k=0}^{n} a_k\,G_k^{-T,0}(0) = 0,$$
$$\dot{x}_n(0) = \sum_{k=0}^{n} a_k\,\dot{G}_k^{-T,0}(0) = 0.$$
Then, the control variable $u(t)$ is obtained from Equation (26) as
$$u_n(t) = \sum_{k=0}^{n} a_k\,\ddot{G}_k^{-T,0}(t) + w\sum_{k=0}^{n} a_k\,G_k^{-T,0}(t).$$
Then, substituting Equation (33) into Equation (25) gives
$$J = 0.5\int_{-T}^{0}\left[u_n(t)\right]^2\,dt.$$
As a result, the controlled Duffing oscillator problem (25)–(27) is transformed into a nonlinear quadratic programming problem, which can be stated as follows. Let $H$ be the symmetric matrix satisfying
$$0.5\int_{-T}^{0}\left[u_n(t)\right]^2\,dt = \frac{1}{2}\,\mathbf{a}^T H\,\mathbf{a},$$
let $F$ be the matrix collecting the boundary values of the basis, so that $F\mathbf{a} = \left[x_n(-T)\ \ \dot{x}_n(-T)\ \ x_n(0)\ \ \dot{x}_n(0)\right]^T$, and let $\mathbf{c} = \left[\alpha\ \ \rho\ \ 0\ \ 0\right]^T$; then
$$\min\ J = \frac{1}{2}\,\mathbf{a}^T H\,\mathbf{a}$$
is subject to the equality constraint
$$F\mathbf{a} - \mathbf{c} = 0.$$
This is a nonlinear quadratic programming problem. The unknown parameter vector $\mathbf{a}$ is determined by
$$\mathbf{a}^* = H^{-1}F^T\left(F H^{-1} F^T\right)^{-1}\mathbf{c}.$$
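The closed-form expression for $\mathbf{a}^*$ is the standard solution of an equality-constrained quadratic program. A minimal numpy sketch (ours, not the authors' code) reads as follows; the numerical example in Section 5 applies the same formula to the explicit matrices for $n = 4$:

```python
# Minimal numpy sketch (ours) of the closed-form solution above for the
# equality-constrained quadratic program: min (1/2) a^T H a  subject to  F a = c.
import numpy as np

def solve_qp(H, F, c):
    """Return a* = H^{-1} F^T (F H^{-1} F^T)^{-1} c (H symmetric positive definite)."""
    Hinv_Ft = np.linalg.solve(H, F.T)            # H^{-1} F^T without forming H^{-1}
    lam = np.linalg.solve(F @ Hinv_Ft, c)        # (F H^{-1} F^T)^{-1} c
    return Hinv_Ft @ lam
```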
The advantages of the proposed NSSG polynomial technique in terms of solving the controlled Duffing oscillator problem include the following points:
(1)
It can deal directly with the second-order derivative in the constrained differential equation, Equation (26), without reducing it to a first-order system. As a result, the number of unknown parameters will be reduced, unlike the technique in [25].
(2)
It can deal directly with the interval $t \in [-T, 0]$, while other methods must introduce a suitable transformation according to the basis functions [18].
(3)
The problem is reduced to a quadratic programming problem, which is much easier than the numerical integration of a nonlinear TPBVP derived from Pontryagin’s maximum principle method [26].

5. Results and Discussion

Problem (25)–(27) is solved for the standard case $w = 1$, $T = 2$, $\alpha = 0.5$ and $\rho = 0.5$. By applying Equations (28)–(35) with $n = 4$, one can obtain
$$J = 4a_0^2 + \frac{88}{3}a_0a_2 + \frac{1912}{15}a_0a_4 + \frac{4}{3}a_1^2 + \frac{312}{5}a_1a_3 + \frac{4216}{7}a_2a_4 + \frac{276}{5}a_2^2 + \frac{25604}{35}a_3^2 + \frac{1328108}{315}a_4^2.$$
This is subject to the constraints
$$\begin{aligned}
2a_0 - 2a_1 + 2a_2 - 2a_3 + 2a_4 &= 0.5,\\
-2a_1 + 8a_2 - 18a_3 + 32a_4 &= 0.5,\\
2a_0 + 2a_1 + 2a_2 + 2a_3 + 2a_4 &= 0,\\
2a_1 + 8a_2 + 18a_3 + 32a_4 &= 0,
\end{aligned}$$
which can be written as
$$J = \frac{1}{2}\,\mathbf{a}^T H\,\mathbf{a},$$
subject to the equality constraints $F\mathbf{a} - \mathbf{c} = 0$, where
$$H = \begin{pmatrix}
8 & 0 & \frac{88}{3} & 0 & \frac{1912}{15}\\
0 & \frac{8}{3} & 0 & \frac{312}{5} & 0\\
\frac{88}{3} & 0 & \frac{552}{5} & 0 & \frac{4216}{7}\\
0 & \frac{312}{5} & 0 & \frac{51208}{35} & 0\\
\frac{1912}{15} & 0 & \frac{4216}{7} & 0 & \frac{2656216}{315}
\end{pmatrix}, \qquad \mathbf{a}^T = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 & a_4 \end{pmatrix},$$
$$F = \begin{pmatrix}
2 & -2 & 2 & -2 & 2\\
0 & -2 & 8 & -18 & 32\\
2 & 2 & 2 & 2 & 2\\
0 & 2 & 8 & 18 & 32
\end{pmatrix} \quad \text{and} \quad \mathbf{c} = \begin{pmatrix} 0.5\\ 0.5\\ 0\\ 0 \end{pmatrix}.$$
When using Equation (31) to obtain the optimal unknown vector $\mathbf{a}^*$, we have
$$\mathbf{a}^{*T} = \begin{pmatrix} 0.0877800707547 & -0.125 & 0.0392099056603 & 0 & -0.0019899764150 \end{pmatrix}.$$
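For reference, the $n = 4$ computation above can be reproduced with a few lines of numpy. This is a sketch (ours) that uses the matrices $H$, $F$ and $\mathbf{c}$ as typeset in this section, with the signs in $F$ reconstructed to be consistent with Table 2; it should return the tabulated coefficients and the performance index value reported in Table 5, up to rounding:

```python
# Reproducing the n = 4 computation with numpy (a sketch, not the authors' code).
# H, F and c are taken as typeset above; the signs in F follow the constraint
# system reconstructed from Table 2.
import numpy as np

H = np.array([[8,       0,      88/3,   0,        1912/15],
              [0,       8/3,    0,      312/5,    0],
              [88/3,    0,      552/5,  0,        4216/7],
              [0,       312/5,  0,      51208/35, 0],
              [1912/15, 0,      4216/7, 0,        2656216/315]])
F = np.array([[2., -2., 2., -2.,  2.],
              [0., -2., 8., -18., 32.],
              [2.,  2., 2.,  2.,  2.],
              [0.,  2., 8.,  18., 32.]])
c = np.array([0.5, 0.5, 0.0, 0.0])

Hinv_Ft = np.linalg.solve(H, F.T)
a_star = Hinv_Ft @ np.linalg.solve(F @ Hinv_Ft, c)

print(a_star)                     # ~[0.08778, -0.125, 0.03921, 0, -0.00199]  (Table 2, n = 4)
print(0.5*a_star @ H @ a_star)    # ~0.18491689                              (Table 5, n = 4)
```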
The NSSG polynomial coefficients for the state function $x(t)$ and the control function $u(t)$ are listed in Table 2 for different values of $n$. In Table 3 and Table 4, the approximate values of the state and control functions are given for different values of $t \in [-2, 0]$ using different orders of the basis functions, $n = 4, 5, 6$ and $7$.
These solutions are also plotted for the same values in Figure 1.
As shown in Table 5, a comparison was made between the present method and the solution obtained by radial basis functions [25] for different values of $n$. The latter method uses radial basis functions to approximate the solution of the optimal control problem via the collocation method. Table 6 compares the values of $x(t)$ and $u(t)$ obtained by our method with the existing findings in [25]. The results in [25] required three numerical methods, which increases the computational time and effort.
As $n$ increases, the absolute errors $\left|J_{exact} - J_{G(t)}\right|$ and the relative errors $\left|J_{exact} - J_{G(t)}\right|/J_{exact}$ decrease significantly, and the results rapidly tend to the exact values. Table 5 illustrates that the NSSG polynomials have a good convergence rate.
The absolute errors $\left|x_{exact} - x_{G(t)}\right|$ and $\left|u_{exact} - u_{G(t)}\right|$, as well as the relative errors $\left|x_{exact} - x_{G(t)}\right|/\left|x_{exact}\right|$ and $\left|u_{exact} - u_{G(t)}\right|/\left|u_{exact}\right|$, are listed in Table 7 for $n = 7$, and they are also presented graphically in Figure 2.
Figure 2 shows the decreasing absolute error obtained by the NSSG polynomials with the state parameterization technique. Figure 3 is the last illustration; it displays the graph of the absolute values of the error in $J$ for different orders of the NSSG polynomials. All the tables and figures show that the suggested methodology is capable of providing numerical solutions for the controlled oscillator problem with high accuracy. The solution obtained from the proposed approach is in good agreement with the existing results, thus demonstrating the reliability of the proposed scheme.
The above figures show the decreasing absolute error of the suggested algorithm based on NSSG polynomials. The optimal performance index value $J$ obtained by the proposed method for $n = 4, 5, 6$ and $7$ is a good approximation. The approximate state and control variables are, respectively, as below:
$$\begin{aligned}
x_4(t) &= 1.086656969857170\times 10^{-14}\,t - 0.031839622641504\,t^4 - 0.127358490566016\,t^3 - 0.002358490566011\,t^2 + 6.791606061607311\times 10^{-16},\\
u_4(t) &= -0.031839622641504\,t^4 - 0.127358490566016\,t^3 - 0.384433962264059\,t^2 - 0.764150943396085\,t - 0.004716981132020,\\
x_5(t) &= 0.002276490066225\,t^5 - 0.020457172310378\,t^4 - 0.105731834936877\,t^3 + 0.016991674996903\,t^2 + 0.007683153973521\,t + 8.536837748351163\times 10^{-4},\\
u_5(t) &= 0.002276490066225\,t^5 - 0.020457172310378\,t^4 - 0.060202033612374\,t^3 - 0.228494392727635\,t^2 - 0.626707855647741\,t + 0.034837033768642,\\
x_6(t) &= 0.001738299522379\,t^6 + 0.012706287200501\,t^5 + 0.001839575251112\,t^4 - 0.089491560685423\,t^3 + 0.012496082870642\,t^2 - 2.411308712488788\times 10^{-17}\,t - 3.763967600050567\times 10^{-18},\\
u_6(t) &= 0.001738299522379\,t^6 + 0.012706287200501\,t^5 + 0.053988560922489\,t^4 + 0.164634183324589\,t^3 + 0.034570985883986\,t^2 - 0.536949364112541\,t + 0.024992165741284,\\
x_7(t) &= -1.088071652483369\times 10^{-4}\,t^7 + 9.766493656408826\times 10^{-4}\,t^6 + 0.010676772128411\,t^5 - 6.914985419546381\times 10^{-4}\,t^4 - 0.090929906788370\,t^3 + 0.012212138480162\,t^2 - 1.5346542938353\times 10^{-17}\,t + 4.37916033730\times 10^{-18},\\
u_7(t) &= -1.088071652483369\times 10^{-4}\,t^7 + 9.766493656408826\times 10^{-4}\,t^6 + 0.006106871187980\,t^5 + 0.028607982427272\,t^4 + 0.122605535779841\,t^3 + 0.003914155976706\,t^2 - 0.545579440730220\,t + 0.024424276960324.
\end{aligned}$$

6. Conclusions

A special optimal control problem called the controlled Duffing oscillator was treated numerically in the present work using NSSG polynomials. The suggested technique uses the constructed NSSG polynomial operational matrix of first-order derivatives in combination with an appropriate direct parameterization scheme. The controlled Duffing oscillator problem was solved approximately, with the unknown parameters determined using various orders of NSSG polynomials. The outcomes of the approximate solutions demonstrate the simplicity and the accuracy of the presented direct technique. Additionally, the proposed methodology can be used in a number of applications to treat different classes of optimal control problems numerically. The presented results support the satisfactory accuracy and efficiency of the recommended method. It can deal directly with the highest-order derivatives in a constrained differential equation without reducing it to a first-order system; as a result, the number of unknown parameters can be reduced. This fact was shown when applying the presented NSSG polynomials to the controlled Duffing oscillator. The suggested direct technique is much easier to use than the numerical integration of the nonlinear TPBVP derived from Pontryagin's maximum principle. The proposed method can be extended to nonlinear calculus of variations and optimal control problems.

Author Contributions

Methodology, F.H.; validation, S.S.; formal analysis, F.H. and S.S.; resources, F.H.; supervision, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Maawiya, O.S.; Rabie, Z.S.; Ahmed, O.A.; Hadi, O.A.; Sid, A.O. Optimal control problems governed by a class of nonlinear systems. AIMS Math. 2024, 9, 440–452. [Google Scholar] [CrossRef]
  2. Ahmed, A.K.; Al-Khazraji, H.; Raafat, S.M. Optimized PI-PD Control for Varying Time Delay Systems Based on Modified Smith Predictor. Int. J. Intell. Eng. Syst. 2024, 17, 331–342. [Google Scholar]
  3. Mahdi, S.M.; Lutfy, O.F. Control of a servo-hydraulic system utilizing an extended wavelet functional link neural network based on sine cosine algorithms. Indones. J. Electr. Eng. Comput. Sci. 2022, 25, 847–856. [Google Scholar] [CrossRef]
  4. Ramadhan, N.M.; Raafat, S.M.; Mahmood, A.M. Optimized Event-Based PID Control for Energy-Efficient Wireless Sensor Networks. Math. Model. Eng. Probl. 2024, 11, 63–74. [Google Scholar] [CrossRef]
  5. Steven, B.; David, A. Optimal control in stochastic thermodynamics. J. Phys. Commun. 2023, 7, 033001. [Google Scholar]
  6. Uriostegui, L.U.; Tututi, H.E. Master-slave synchronization in the Rayleigh and Duffing oscillators via elastic and dissipative couplings. Rev. De Cienc. Tecnológicas 2022, 5, 151–164. [Google Scholar] [CrossRef]
  7. Abergel, F.; Temam, R. On Some Optimal Control Problems in Fluid Mechanics. Theor. Comput. Fluid Mech. 1990, 1, 303–325. [Google Scholar] [CrossRef]
  8. Stokes, J.J. Nonlinear Vibrations; Intersciences: New York, NY, USA, 1950. [Google Scholar]
  9. Hu, N.; Wen, X.S. The application of Duffing oscillator in characteristic signal detection of early fault December. J. Sound Vib. 2003, 268, 917–931. [Google Scholar] [CrossRef]
  10. Bahraini, M.S.; Mahmoodabadi, M.J.; Lohse, N. Robust Adaptive Fuzzy Fractional Control for Nonlinear Chaotic Systems with Uncertainties. Fractal Fract. 2023, 7, 484. [Google Scholar] [CrossRef]
  11. Chen, T.; Cao, X.; Niu, D. Model modification and feature study of Duffing oscillator. J. Low Freq. Noise Vib. Act. Control 2022, 41, 230–243. [Google Scholar] [CrossRef]
  12. Van Dooren, R.; Vlassenbroeck, J. Chebyshev series solution of the controlled Duffing oscillator. J. Comput. Phys. 1982, 47, 321–329. [Google Scholar] [CrossRef]
  13. Elnagar, G.N.; Kazemi, M.A. A cell-averaging Chebyshev spectral method for the controlled Duffing oscillator. Appl. Numer. Math. 1995, 18, 461–471. [Google Scholar] [CrossRef]
  14. Razzaghi, M.; Elnagarj, G. Numerical solution of the controlled Duffing oscillator by the pseudospectral method. J. Comput. Appl. Math. 1994, 56, 253–261. [Google Scholar] [CrossRef]
  15. Amer, Y.A.; El-Sayed, A.T.; Abd EL-Salam, M.N. Position and Velocity Time Delay for Suppression Vibrations of a Hybrid Rayleigh–Van der Pol–Duffing Oscillator. Sound Vib. 2020, 54, 149–161. [Google Scholar]
  16. Marzban, H.R.; Razzaghi, M. Numerical solution of the controlled Duffing oscillator by hybrid functions. Appl. Math. Comput. 2003, 140, 179–190. [Google Scholar] [CrossRef]
  17. Shamsi, M.; Razzaghi, M. Numerical Solution of the Controlled Duffing Oscillator by the interpolating Scaling. J. Electromagn. Waves Appl. 2004, 18, 691–705. [Google Scholar] [CrossRef]
  18. Kafash, B.; Delavarkhalafi, A.; Karbassi, S.M. Application of Chebyshev polynomials to derive efficient algorithms for the solution of optimal control problems. Sci. Iran. 2012, 19, 795–805. [Google Scholar] [CrossRef]
  19. El-kady, M.; Elbarbary, E.M. A Chebyshev expansion method for solving nonlinear optimal control problems. Appl. Math. Comput. 2002, 129, 171–182. [Google Scholar] [CrossRef]
  20. Mahmood, T.S.; Lutfy, O.F. A Wavelet Neural Network-Based NARMA-L2 Feedforward Controller Using Genetic Algorithms to Control Nonlinear Systems. J. Eur. Des Syst. Autom. 2022, 55, 439–447. [Google Scholar] [CrossRef]
  21. Abbas, G.; Shihab, S. Operational Matrix of New Shifted Wavelet Functions for Solving Optimal Control Problem. Mathematics 2023, 11, 3040. [Google Scholar] [CrossRef]
  22. Eman, H.; Samaa, F.; Shihab, N. Boubaker Wavelets Functions: Properties and Applications. Baghdad Sci. J. 2021, 18, 1226–1233. [Google Scholar]
  23. Kafash, B.; Delavarkhalafi, A.; Karbassi, S.; Boubaker, M. A Numerical Approach for Solving Optimal Control Problems Using the Boubaker Polynomials Expansion Scheme. J. Interpolat. Approx. Sci. Comput. 2014, 3, 1–18. [Google Scholar] [CrossRef]
  24. Kafash, B.; Delavarkhalafi, A.; Karbassi, S.M. Numerical Solution of Nonlinear Optimal Control Problems Based on State Parameterization. Iran. J. Sci. Technol. 2012, 35, 331–340. [Google Scholar]
  25. Rad, J.A.; Kazem, S.; Parand, K. A numerical solution of the nonlinear controlled Duffing oscillator by radial basis functions. Comput. Math. Appl. 2012, 64, 2049–2065. [Google Scholar] [CrossRef]
  26. Rad, J.A.; Kazem, S.; Parand, K. Radial basis functions approach on optimal control problems: A numerical investigation. J. Vib. Control 2014, 20, 1394–1416. [Google Scholar] [CrossRef]
  27. El-Sayed, A.A. Pell-Lucas polynomials for numerical treatment of the nonlinear fractional-order Duffing equation. Demonstr. Math. 2023, 56, 20220220. [Google Scholar] [CrossRef]
  28. Khader, M.M.; Macías-Díaz, J.E.; Saad, K.M.; Hamanah, W.M. Vieta–Lucas Polynomials for the Brusselator System with the Rabotnov Fractional-Exponential Kernel Fractional Derivative. Symmetry 2023, 15, 1619. [Google Scholar] [CrossRef]
Figure 1. Graphs of the approximate solutions $x(t)$ and $u(t)$ using NSSG polynomials and the exact solution with n = 4, 5, 6 and 7.
Figure 2. Graphs of the absolute errors for state $x(t)$ and control $u(t)$ with $n = 7$.
Figure 3. Graph of the absolute error between the NSSG polynomial solution and the exact solution of $J$ for different $n$.
Table 1. Some special cases of $G(t)$.

a    b    p    q    G(t)
0    1    4    −2    4t − 2
−1    1    2    0    2t
0    2    2    −2    2t − 2
−2    2    1    0    t
−2    0    2    2    2t + 2
a    b    4/(b − a)    −2(a + b)/(b − a)    2(2t − a − b)/(b − a)
Table 2. The optimal values of the unknown parameters a*.

a_m    n = 4    n = 5    n = 6    n = 7
a_0    0.087780070754717    0.087780070754717    0.087777927696772    0.087777927696772
a_1    −0.125000000000000    −0.124857719370861    −0.124857719370861    −0.124857905722070
a_2    0.039209905660377    0.039209905660377    0.039258031287699    0.039258031287699
a_3    0    2.1342094370 × 10^−4    −2.134209437 × 10^−4    −2.144165008 × 10^−4
a_4    −0.001989976415094    −0.001989976415094    −0.002063119914508    −0.002063119914508
a_5    —    7.1140314569 × 10^−5    7.114031456 × 10^−5    7.317227891 × 10^−5
a_6    —    —    2.716093003 × 10^−5    2.716093003 × 10^−5
a_7    —    —    —    −8.5005597850 × 10^−7
Table 3. The approximate and exact values of $x(t)$.

t    n = 4    n = 5    n = 6    n = 7    Exact x
−2    0.500000000000    0.500000000000    0.500000000000    0.500000000000    0.500000000000
−1.8    0.400873584905    0.400637558415    0.400742711379    0.400746010012    0.400745520599
−1.6    0.306958490566    0.306399020367    0.306531993566    0.306532325222    0.306533016886
−1.4    0.222533962264    0.221891445707    0.221875203991    0.221869442930    0.221870390820
−1.2    0.150656603773    0.150237001124    0.150023545433    0.150017376467    0.150016904455
−1    0.093160377358    0.093160377358    0.092859231129    0.092859231129    0.092857866797
−0.8    0.050656603773    0.051076206422    0.050862750731    0.050868919696    0.050868512592
−0.6    0.022533962264    0.023176478820    0.023160237103    0.023165998164    0.023166956703
−0.4    0.006958490566    0.007517960764    0.007650933963    0.007650602307    0.007651246479
−0.2    0.000873584905    0.001109611395    0.001214764359    0.001211465727    0.001210986094
0    0    0    0    0    0
Table 4. The approximate and exact values of $u(t)$.

t    n = 4    n = 5    n = 6    n = 7    Exact u
−2    0.495283018867    0.477071098338    0.488568324681    0.489136213462    0.488967005484
−1.8    0.533703773584    0.53201079345    0.529144271204    0.528914733143    0.528980363594
−1.6    0.546769811320    0.55276663251    0.548019438632    0.547935662347    0.547904943902
−1.4    0.538760377358    0.546131105835    0.544877791187    0.545034546924    0.544986283110
−1.2    0.512732075471    0.517411810571    0.520149659487    0.520325422387    0.520340739015
−1    0.470518867924    0.470518867924    0.474898905657    0.474898905657    0.474950851685
−0.8    0.412732075471    0.408052340372    0.410790189288    0.410614426389    0.410626172692
−0.6    0.338760377358    0.331389648881    0.330136334234    0.329979578497    0.329931124000
−0.4    0.246769811320    0.240772990128    0.236025796248    0.236109572533    0.236082762552
−0.2    0.133703773584    0.135396753717    0.132530231469    0.132759769530    0.132822526363
0    0.004716981132    0.013494939397    0.024992165741    0.024424276960    0.024267075194
Table 5. Absolute and relative errors.

The NSSG Polynomial Method
n    J    Absolute Error    Relative Error
4    0.184916891284816    5.83492848160 × 10^−5    3.15642892044 × 10^−4
5    0.184873529569269    1.49875692690 × 10^−5    8.10758816274 × 10^−5
6    0.184858574012377    3.20123770269 × 10^−8    1.73172289906 × 10^−7
7    0.184858544450233    2.45023301648 × 10^−9    1.32546377893 × 10^−8

Radial Basis Functions [25]
n    J    Absolute Error    Relative Error
4    0.223031934020577    0.00381733    3.81732916 × 10^−2
6    0.186100046239558    1.24150 × 10^−3    6.71596705 × 10^−3
8    0.184867276203868    8.733840 × 10^−6    4.72460763 × 10^−5
10    0.184858593684025    5.13202 × 10^−8    2.7802840 × 10^−7
Table 6. The approximate values of $x(t)$ and $u(t)$.

t    x(t): n = 10 [25]    x(t): NSSG polynomial, n = 7    u(t): n = 10 [25]    u(t): NSSG polynomial, n = 7
−2    0    0    0.00000000    0.0000000000
−1.5    2.96999999 × 10^−6    1.0836221649 × 10^−6    0.00137793000    1.6921346 × 10^−4
−1    7.99999999 × 10^−8    1.3611290560 × 10^−6    3.2213000 × 10^−4    5.4912806 × 10^−5
−0.5    2.32000000 × 10^−6    1.054616982 × 10^−6    1.13700000 × 10^−5    5.1946027 × 10^−5
0    0    0    2.46780000 × 10^−4    5.2213726 × 10^−5
Table 7. The absolute and relative errors with $n = 7$.

t    Absolute Error x(t)    Absolute Error u(t)    Relative Error x(t)    Relative Error u(t)
−2    0    1.692079 × 10^−4    0    3.460517746 × 10^−4
−1.8    4.89412962 × 10^−7    6.563045 × 10^−5    1.221256225 × 10^−6    1.240697283 × 10^−4
−1.6    6.91664531 × 10^−7    3.071844 × 10^−5    2.256411195 × 10^−6    5.606527252 × 10^−5
−1.4    9.47889511 × 10^−7    4.8263813 × 10^−5    4.272266828 × 10^−6    8.855968396 × 10^−5
−1.2    4.72011755 × 10^−7    1.5316628 × 10^−5    3.146390446 × 10^−6    2.943576555 × 10^−5
−1    1.36433170 × 10^−7    5.1946027 × 10^−5    1.469268837 × 10^−6    1.093713735 × 10^−4
0    0    5.2213726 × 10^−5    0    2.21516172 × 10^−4