**1. Introduction**

Determining the maxima and minima of functionals arises in the study of problems in analysis, mechanics, and geometry. Such problems are known as variational problems, and their study constitutes the calculus of variations. Variational problems have many applications in fields such as physics [1], engineering [2], and other areas in which energy principles apply [3–5].

Fractional calculus is nowadays a very active branch of mathematics, with many applications in science and engineering, such as fluid dynamics [6], biology [7], chemistry [8], viscoelasticity [9,10], signal processing [11], bioengineering [12], control theory [13], and physics [14]. Owing to the importance of fractional derivatives in real-life applications, several authors have considered problems in the calculus of variations in which the integer-order derivative in the objective functional is replaced by a fractional-order one; the resulting field is known as the fractional calculus of variations. Examples of such studies include a fractionally damped system [15], energy control for a fractional linear control system [16], a fractional model of a vibrating string [17], and an optimal control problem [18]. In this paper, our aim is to minimize non-linear fractional variational problems (NLFVPs) [19] of the following form:

$$J(y) = \int_0^1 \left( g(x)\, D^{\alpha} y(x) + g'(x)\, I^{1-\alpha} y(x) + h'(x) \right)^2 dx \tag{1}$$

under the constraints

$$y(0) = a, \qquad \qquad I^{1-\alpha}y(1) = \epsilon, \tag{2}$$

where $g$ and $h$ are two functions of class $C^1$ with $g(x) \neq 0$ on $[0, 1]$, $\alpha$ and $\epsilon$ are real numbers with $\alpha \in (0, 1)$, and $a$ is a constant.

The pioneering approach to solving fractional variational problems originates in reference [20], where Agrawal derived the Euler-Lagrange equation for fractional variational problems. Later, in reference [4], he gave a general formulation for fractional variational problems. In reference [5], the authors used an analytical algorithm based on the Adomian decomposition method (ADM) for solving problems in the calculus of variations. In [21,22], Legendre orthonormal polynomials and Jacobi orthonormal polynomials, respectively, were used to obtain approximate numerical solutions of fractional optimal control problems. In [23], the Haar wavelet method was used to obtain numerical solutions of such problems. Other numerical methods for the approximate solution of fractional variational problems are given in [24–34]. Recently, in [19], the authors introduced a new class of fractional variational problems and solved it using a decomposition formula based on Jacobi polynomials. Operational matrix methods (see [35–41]) have also proved useful for solving problems in fractional calculus.

In the present paper, we combine the Rayleigh-Ritz method with operational matrices of several families of orthogonal polynomials, namely shifted Legendre polynomials, shifted Chebyshev polynomials of the first, third, and fourth kinds, and Gegenbauer polynomials, to solve a special class of NLFVPs. Rayleigh-Ritz methods have been discussed by many researchers for various kinds of variational problems, e.g., fractional optimal control problems [18,21,22,32,33]; we cite only a few here, and many more can be found in the literature. In this method, we first take a finite-dimensional approximation of the unknown function. Then, using an operational matrix of integration together with the Rayleigh-Ritz method in the variational problem, we obtain a system of non-linear algebraic equations whose solution gives an approximate solution of the non-linear variational problem. An error analysis of the method is given for each family of orthogonal polynomials, and convergence of the approximate numerical solution to the exact solution is shown. A comparative study using absolute-error and root-mean-square-error tables for all five kinds of polynomials is carried out. Numerical results are discussed for different values of the fractional order involved in the problem and are presented in tables and figures.

#### **2. Basic Preliminaries**

Fractional-order integration in the Riemann-Liouville sense is defined as follows.

**Definition 1.** *The Riemann-Liouville fractional order integral operator is given by*

$$I^{\alpha}f(x) = \begin{cases} \frac{1}{\Gamma(\alpha)} \int_0^x (x - t)^{\alpha - 1} f(t)\, dt, & \alpha > 0,\ x > 0, \\ f(x), & \alpha = 0. \end{cases}$$
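As a numerical sanity check (our sketch, not code from the paper), the Riemann-Liouville integral can be approximated by a midpoint product rule that integrates the weakly singular kernel $(x-t)^{\alpha-1}$ exactly on each subinterval, and compared with the closed form $I^{\alpha} t^p = \frac{\Gamma(p+1)}{\Gamma(p+1+\alpha)}\, x^{p+\alpha}$. The function name `rl_integral` and the discretization are our own choices.

```python
# Midpoint product rule for the Riemann-Liouville integral I^alpha f(x):
# f is sampled at subinterval midpoints, while the singular kernel
# (x - t)^(alpha - 1) is integrated in closed form on each subinterval.
from math import gamma

def rl_integral(f, x, alpha, n=2000):
    """Approximate I^alpha f(x) = (1/Gamma(alpha)) * int_0^x (x-t)^(alpha-1) f(t) dt."""
    if alpha == 0.0:
        return f(x)  # the alpha = 0 branch of the definition
    h = x / n
    total = 0.0
    for j in range(n):
        t0, t1 = j * h, (j + 1) * h
        # exact integral of (x - t)^(alpha - 1) over [t0, t1]
        w = ((x - t0) ** alpha - (x - t1) ** alpha) / alpha
        total += f(0.5 * (t0 + t1)) * w
    return total / gamma(alpha)

# compare against the closed form I^alpha t^p = Gamma(p+1)/Gamma(p+1+alpha) * x^(p+alpha)
alpha, p, x = 0.5, 2.0, 1.0
exact = gamma(p + 1) / gamma(p + 1 + alpha) * x ** (p + alpha)
approx = rl_integral(lambda t: t ** p, x, alpha)
```

Integrating the kernel exactly rather than sampling it avoids the loss of accuracy that a plain quadrature rule suffers near the singularity at $t = x$.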

The analytical form of the shifted Jacobi polynomial of degree *i* on [0, 1] is given as

$$\Psi_i(x) = \sum_{k=0}^i (-1)^{i-k} \frac{\Gamma(i+b+1)\Gamma(i+k+a+b+1)}{\Gamma(k+b+1)\Gamma(i+a+b+1)(i-k)!\,k!} x^k \tag{3}$$

where $a > -1$ and $b > -1$ are constants. Jacobi polynomials are orthogonal on the interval $[0, 1]$ with respect to the weight function $w^{(a,b)}(x) = (1-x)^a x^b$ and have the orthogonality property

$$\int_0^1 \Psi_n(x) \Psi_m(x)\, w^{(a,b)}(x)\, dx = \upsilon_n^{a,b} \delta_{nm} \tag{4}$$

where $\delta_{nm}$ is the Kronecker delta and

$$\upsilon_n^{a,b} = \frac{\Gamma(n+a+1)\Gamma(n+b+1)}{(2n+a+b+1)\, n!\, \Gamma(n+a+b+1)}. \tag{5}$$
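These definitions are easy to check numerically. The sketch below (ours, not from the paper) evaluates Equation (3) directly and verifies the orthogonality relation (4) together with the norm constant (5) by midpoint quadrature in the Legendre case $a = b = 0$, where the weight $w(x) = 1$ has no endpoint singularity.

```python
# Shifted Jacobi polynomials from Equation (3) and a quadrature check of
# the orthogonality relation (4)-(5) for the nonsingular case a = b = 0.
from math import gamma, factorial

def shifted_jacobi(i, a, b, x):
    """Psi_i(x) on [0,1] from Equation (3)."""
    total = 0.0
    for k in range(i + 1):
        num = gamma(i + b + 1) * gamma(i + k + a + b + 1)
        den = gamma(k + b + 1) * gamma(i + a + b + 1) * factorial(i - k) * factorial(k)
        total += (-1) ** (i - k) * num / den * x ** k
    return total

def norm_const(n, a, b):
    """upsilon_n^{a,b} from Equation (5)."""
    return (gamma(n + a + 1) * gamma(n + b + 1)
            / ((2 * n + a + b + 1) * factorial(n) * gamma(n + a + b + 1)))

def weighted_inner(n, m, a, b, steps=4000):
    """Midpoint-rule approximation of the left-hand side of Equation (4)."""
    h = 1.0 / steps
    total = 0.0
    for j in range(steps):
        x = h * (j + 0.5)
        total += (shifted_jacobi(n, a, b, x) * shifted_jacobi(m, a, b, x)
                  * (1 - x) ** a * x ** b * h)
    return total
```

For the singular Chebyshev-type weights ($a$ or $b$ equal to $\pm\frac{1}{2}$), a quadrature adapted to the endpoint behavior, such as Gauss-Jacobi, would be needed instead of the plain midpoint rule.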

For certain values of the constants *a* and *b*, the Jacobi polynomials take the form of some well-known polynomials, defined as follows.

**Case 1:** Legendre polynomials (S1). For $a = 0$, $b = 0$ in Equation (3), we get Legendre polynomials.

$$\Psi_i(x) = \sum_{k=0}^{i} (-1)^{i-k} \frac{\Gamma(i+1)\Gamma(i+k+1)}{\Gamma(k+1)\Gamma(i+1)(i-k)!\,k!} x^k \tag{6}$$

**Case 2:** Chebyshev polynomials of the first kind (S2). For $a = \frac{1}{2}$, $b = \frac{1}{2}$ in Equation (3), we get Chebyshev polynomials of the first kind.

$$\Psi_i(x) = \sum_{k=0}^i (-1)^{i-k} \frac{\Gamma\left(i + \frac{3}{2}\right) \Gamma(i+k+2)}{\Gamma\left(k + \frac{3}{2}\right) \Gamma(i+2)(i-k)!\,k!} x^k \tag{7}$$

**Case 3:** Chebyshev polynomials of the third kind (S3). For $a = \frac{1}{2}$, $b = -\frac{1}{2}$ in Equation (3), we get Chebyshev polynomials of the third kind.

$$\Psi_i(x) = \sum_{k=0}^i (-1)^{i-k} \frac{\Gamma\left(i + \frac{1}{2}\right) \Gamma(i+k+1)}{\Gamma\left(k + \frac{1}{2}\right) \Gamma(i+1)(i-k)!\,k!} x^k \tag{8}$$

**Case 4:** Chebyshev polynomials of the fourth kind (S4). For $a = -\frac{1}{2}$, $b = \frac{1}{2}$ in Equation (3), we get Chebyshev polynomials of the fourth kind.

$$\Psi_i(x) = \sum_{k=0}^i (-1)^{i-k} \frac{\Gamma\left(i + \frac{3}{2}\right) \Gamma(i+k+1)}{\Gamma\left(k + \frac{3}{2}\right) \Gamma(i+1)(i-k)!\,k!} x^k \tag{9}$$

**Case 5:** Gegenbauer polynomials (S5). For $a = b = \lambda - \frac{1}{2}$, where $\lambda > -\frac{1}{2}$ is the Gegenbauer parameter, we get Gegenbauer polynomials.

$$\Psi_i(x) = \sum_{k=0}^i (-1)^{i-k} \frac{\Gamma\left(i+\lambda+\frac{1}{2}\right) \Gamma(i+k+2\lambda)}{\Gamma\left(k+\lambda+\frac{1}{2}\right) \Gamma(i+2\lambda)(i-k)!\,k!} x^k \tag{10}$$
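As a small check of Case 1 (our sketch, not the paper's code), Equation (6) with $a = b = 0$ gives the shifted Legendre polynomials on $[0,1]$, e.g. $\Psi_2(x) = 6x^2 - 6x + 1$, with the known endpoint values $\Psi_i(1) = 1$ and $\Psi_i(0) = (-1)^i$.

```python
# Shifted Legendre polynomials: Equation (6) after cancelling Gamma(i+1),
# leaving coefficients (i+k)! / (k!^2 (i-k)!) with alternating signs.
from math import factorial

def shifted_legendre(i, x):
    """Psi_i(x) on [0,1] from Equation (6)."""
    return sum((-1) ** (i - k)
               * factorial(i + k) / (factorial(k) ** 2 * factorial(i - k))
               * x ** k
               for k in range(i + 1))
```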

A function $f \in L^2[0, 1]$ with $|f(t)| \leq K$ can be expanded as

$$f(t) = \lim\_{n \to \infty} \sum\_{i=0}^{n} c\_i \Psi\_i(t),\tag{11}$$

where $c_i = \langle f(t), \Psi_i(t) \rangle$ and $\langle \cdot, \cdot \rangle$ denotes the usual inner product.

The finite-dimensional truncation of Equation (11) is written as

$$f \simeq \sum_{i=0}^{m} c_i \Psi_i(t) = C^T \phi_m(t), \tag{12}$$


where $C$ and $\phi_m(t)$ are $(m+1) \times 1$ vectors given by $C = [c_0, c_1, \ldots, c_m]^T$ and $\phi_m(t) = [\Psi_0, \Psi_1, \ldots, \Psi_m]^T$.
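As an illustration (our sketch, not the paper's code), the truncation (12) with shifted Legendre polynomials, for which $a = b = 0$ and $\upsilon_i = 1/(2i+1)$, reproduces $f(t) = t^2$ exactly once $m = 2$, since $f$ itself is a polynomial of degree 2.

```python
# Coefficients c_i = <f, Psi_i> / upsilon_i for the shifted Legendre basis,
# with the inner products approximated by a midpoint rule.
from math import factorial

def psi(i, x):
    """Shifted Legendre polynomial, Equation (6)."""
    return sum((-1) ** (i - k) * factorial(i + k)
               / (factorial(k) ** 2 * factorial(i - k)) * x ** k
               for k in range(i + 1))

def coeff(f, i, steps=4000):
    """c_i = <f, Psi_i> / upsilon_i, with upsilon_i = 1/(2i+1) when a = b = 0."""
    h = 1.0 / steps
    ip = sum(f(h * (j + 0.5)) * psi(i, h * (j + 0.5)) * h for j in range(steps))
    return (2 * i + 1) * ip

f = lambda t: t * t
c = [coeff(f, i) for i in range(3)]          # expect [1/3, 1/2, 1/6]
approx = lambda t: sum(c[i] * psi(i, t) for i in range(3))
```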

**Theorem 1.** *Let H be a Hilbert space and Z be a closed subspace of H with dim Z* < ∞*; let* {*z*1, *z*2,..., *zN*} *be any basis for Z. Suppose that y is an arbitrary element in H and z*<sup>0</sup> *is the unique best approximation to y out of Z. Then*

$$\left\|y - z\_0\right\|\_2^2 = \frac{T(y; z\_1, z\_2, \dots, z\_N)}{T(z\_1, z\_2, \dots, z\_N)},$$

where

$$T(y; z_1, z_2, \dots, z_N) = \begin{vmatrix} \langle y, y \rangle & \langle y, z_1 \rangle & \cdots & \langle y, z_N \rangle \\ \langle z_1, y \rangle & \langle z_1, z_1 \rangle & \cdots & \langle z_1, z_N \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle z_N, y \rangle & \langle z_N, z_1 \rangle & \cdots & \langle z_N, z_N \rangle \end{vmatrix}$$

is the Gram determinant of $y, z_1, \dots, z_N$, and $T(z_1, \dots, z_N)$ is the analogous Gram determinant of the basis vectors alone.

**Proof.** Please see references [42,43].
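As a concrete instance of Theorem 1 (our example, not from the paper), take $H = L^2[0,1]$, $Z = \mathrm{span}\{1, x\}$, and $y = x^2$. Monomial inner products are $\langle x^p, x^q \rangle = 1/(p+q+1)$, so both Gram determinants can be formed exactly with rational arithmetic.

```python
# Best-approximation error via the ratio of Gram determinants of Theorem 1,
# computed exactly with fractions for y = x^2 and Z = span{1, x} in L^2[0,1].
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

ip = lambda p, q: Fraction(1, p + q + 1)  # <x^p, x^q> on [0, 1]

# T(y; z1, z2) with y = x^2, z1 = 1, z2 = x
top = det3([[ip(2, 2), ip(2, 0), ip(2, 1)],
            [ip(0, 2), ip(0, 0), ip(0, 1)],
            [ip(1, 2), ip(1, 0), ip(1, 1)]])
# T(z1, z2): the 2x2 Gram determinant of the basis alone
bottom = ip(0, 0) * ip(1, 1) - ip(0, 1) * ip(1, 0)
err_sq = top / bottom
```

The ratio gives $\|y - z_0\|_2^2 = 1/180$, which matches the direct computation: the best linear approximation of $x^2$ on $[0,1]$ is $z_0(x) = x - \frac{1}{6}$, and $\int_0^1 (x^2 - x + \frac{1}{6})^2\, dx = \frac{1}{180}$.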

**Theorem 2.** *Suppose that $f_N(x)$ is the Nth approximation of the function $f \in L^2_{w^{(a,b)}}[0, 1]$, and let*

$$S_N(f) = \int_0^1 [f(x) - f_N(x)]^2\, w^{(a,b)}(x)\, dx;$$

*then we have*

$$\lim_{N \to \infty} S_N(f) = 0.$$

**Proof.** Please see Appendix A.
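The convergence asserted by Theorem 2 can be observed numerically. In this sketch (ours, under the assumption of the Legendre weight $a = b = 0$), $S_N(f)$ is computed for $f(x) = e^x$ using the same expansion as Equation (12), and it decreases rapidly with $N$.

```python
# Observing S_N(f) -> 0 for f(x) = e^x with the shifted Legendre basis,
# using a midpoint rule for both the coefficients and the error integral.
from math import exp, factorial

def psi(i, x):
    """Shifted Legendre polynomial, Equation (6)."""
    return sum((-1) ** (i - k) * factorial(i + k)
               / (factorial(k) ** 2 * factorial(i - k)) * x ** k
               for k in range(i + 1))

def s_n(f, n, steps=2000):
    """S_N(f) = int_0^1 (f - f_N)^2 dx, with f_N the degree-N expansion of f."""
    h = 1.0 / steps
    xs = [h * (j + 0.5) for j in range(steps)]
    c = [(2 * i + 1) * sum(f(x) * psi(i, x) * h for x in xs) for i in range(n + 1)]
    return sum((f(x) - sum(c[i] * psi(i, x) for i in range(n + 1))) ** 2 * h
               for x in xs)

errors = [s_n(exp, n) for n in range(4)]  # should decrease toward 0
```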
