Article

Theory of Functional Connections Extended to Fractional Operators

1 Aerospace Engineering, Texas A&M University, College Station, TX 77845-3141, USA
2 Department of Mathematics, Università degli Studi di Bari “Aldo Moro”, 70125 Bari, Italy
3 GNCS Group, Istituto Nazionale di Alta Matematica (INdAM), 00185 Rome, Italy
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(7), 1721; https://doi.org/10.3390/math11071721
Submission received: 7 March 2023 / Revised: 28 March 2023 / Accepted: 30 March 2023 / Published: 4 April 2023
(This article belongs to the Special Issue Dynamics and Control Using Functional Interpolation)

Abstract

The theory of functional connections, an analytical framework generalizing interpolation, was extended and applied in the context of fractional-order operators (integrals and derivatives). The extension was performed and presented for univariate functions, with the aim of determining the whole set of functions satisfying some constraints expressed in terms of integrals and derivatives of non-integer order. The objective of these expressions was to solve fractional differential equations or other problems subject to fractional constraints. Although this work focused on the Riemann–Liouville definitions, the method is, however, more general, and it can be applied with different definitions of fractional operators just by changing the way they are computed. Three examples are provided showing, step by step, how to apply this extension for: (1) one constraint in terms of a fractional derivative, (2) three constraints (a function, a fractional derivative, and an integral), and (3) two constraints expressed in terms of linear combinations of fractional derivatives and integrals.

1. Introduction

During the past few decades, fractional calculus has been applied in different fields of science. Introductory surveys can be found in [1,2]. In engineering, it finds applications in fields such as control theory [3], the theory of viscoelasticity in mechanics [4,5], sedimentology, with diffusion and transport in porous rocks (using fractional derivatives together with the theory of fractals) [6], (bio)chemistry, with the modeling of polymers and proteins [7], finance, with stochastic calculus with respect to fractional Brownian motion [8], and medicine, with the modeling of human tissue under mechanical loads [9], as well as in many other fields such as anomalous diffusion [10], anomalous convection [11], power laws [12], probability [13], optimal control [14], allometric scaling laws [15], long-range interactions [16], the description of galaxy rotation [17], market price dynamics [18], and so on.
The use of fractional derivatives and integrals (the term “fractional” must actually be understood as an extension to real orders, rather than a restriction to integer ones) in applied science and engineering has been dormant for a long time. There is no reason why the laws of natural phenomena must be restricted to integer-order derivatives and integrals, and the idea of extending these operators to real orders is actually quite old, dating back to 1695 [19]. The reason for this delay is perhaps due to the fact that the various approaches/definitions (all of which coincide, in some specific domains and under suitable “boundary” constraints, with the classical integer-order definitions) provide different results.
In addition, solving fractional-order problems turns out to be much more difficult than in the integer-order case, and sophisticated (analytical and numerical) methods are necessary; indeed, they are the subject of ever-growing ongoing investigation.
In this study, the theory of functional connections (TFC), a new mathematical framework generalizing interpolation using functionals [20], is extended for the first time to fractional derivatives and integrals. The capability to derive expressions representing all functions subject to fractional constraints (the main objective of this article) has a direct impact on solving fractional differential equations [21] and, in general, in any other problem subject to fractional constraints. In fact, this capability allows transforming these constrained optimization problems to unconstrained problems, which can then be solved using simpler, more robust, more accurate, faster, and more reliable methods. Therefore, the main aim of this article was to pave the way for devising advanced mathematical tools to solve fractional differential equations (FDEs) using the TFC.
In previous TFC applications, functional interpolation problems were solved by expanding the TFC free function in terms of orthogonal polynomials (Chebyshev, Legendre, etc.). Since fractional derivatives and integrals involve the computation of non-integer powers of the independent variable (which are complex-valued for $x < 0$), shifted Chebyshev polynomials (SCPs), defined on the range $x \in [0, 1]$, were adopted, and a brief description of SCPs and how to evaluate their derivatives is provided.
To achieve these goals, we first provide, in Section 2, a brief survey of the background material on fractional calculus. Then, Section 3 summarizes what the TFC is, where it has already been applied, and how it derives functionals representing all functions always satisfying a set of linear constraints. Shifted Chebyshev polynomials and their application to representing derivatives of fractional order are discussed in Section 4. Finally, some numerical experiments showing the application of the TFC with different constraints involving fractional integrals and fractional derivatives are presented. The Appendices at the end of the paper collect more technical details.

2. Background on Fractional Calculus

To allow the use of the TFC with fractional-order operators, it is necessary to preliminarily provide some basic material on fractional calculus.

2.1. The Gamma Function

The introduction of fractional-order operators requires recalling the Gamma function, $\Gamma(z)$, and some of its main properties. The Gamma function is defined as
$$\Gamma(z) = \int_0^{\infty} x^{z-1}\, e^{-x}\, dx, \qquad z \in \mathbb{R},\ z > 0, \tag{1}$$
and it is analytically continuable to $\mathbb{C} \setminus \{0, -1, -2, -3, \dots\}$.
The extension of the factorial definition using the Gamma function is supported by the Bohr–Mollerup theorem [22]: the Gamma function is indeed the unique function defined on ( 0 , ) satisfying the following properties:
  • $\Gamma(1) = 1$;
  • $\Gamma(z + 1) = z\, \Gamma(z)$, for $z > 0$;
  • $\Gamma(z)$ is logarithmically convex (or superconvex).
Property 2 implicitly states that $\Gamma(z + 1) = z!$ when $z \in \mathbb{N}$, and the similarity to the analogous property of the factorial, $n! = n\,(n-1)!$, is evident.
Along with the definition given in (1), the Gamma function allows expressing factorials in terms of the Gamma function, namely n ! = Γ ( n + 1 ) when n N , and extending the factorial operator to non-integer values. For instance,
$$\left(\tfrac{1}{2}\right)! = \Gamma\!\left(\tfrac{1}{2} + 1\right) = \int_0^{\infty} \sqrt{x}\, e^{-x}\, dx = \frac{\sqrt{\pi}}{2}.$$
This is also true for functions derived from the factorial, as for instance the binomial coefficient $\binom{n}{k} = \dfrac{n!}{k!\,(n-k)!} = \dfrac{\Gamma(n+1)}{\Gamma(k+1)\,\Gamma(n-k+1)}$, which can be, therefore, defined also for non-integer values as
$$\binom{\alpha}{k} = \frac{\Gamma(\alpha + 1)}{k!\ \Gamma(\alpha - k + 1)}, \qquad \alpha \in \mathbb{R} \setminus \{-1, -2, -3, \dots\}. \tag{2}$$
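As a minimal illustration (a sketch in Python, which is an assumption since the paper reports its computations in Matlab), both the extended factorial and the generalized binomial coefficient (2) can be evaluated directly through the Gamma function:
```python
# Minimal sketch: the generalized binomial coefficient (2) and the extended
# factorial evaluated via the Gamma function.
import math

def gen_binom(alpha: float, k: int) -> float:
    """Binomial coefficient extended to real alpha, Eq. (2)."""
    return math.gamma(alpha + 1) / (math.factorial(k) * math.gamma(alpha - k + 1))

print(gen_binom(0.5, 2))          # binom(1/2, 2) = -1/8
print(math.gamma(0.5 + 1))        # (1/2)! = sqrt(pi)/2 ~ 0.8862
print(math.sqrt(math.pi) / 2)
```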

2.2. Riemann–Liouville Fractional Integral

Cauchy’s formula allows representing the n-times repeated integration on [ a , x ] in terms of just one integral:
$$\mathcal{J}_a^{\,n}[f(x)] = \int_a^x \int_a^{t_1} \cdots \int_a^{t_{n-1}} f(t_n)\, dt_n\, dt_{n-1} \cdots dt_1 = \frac{1}{(n-1)!} \int_a^x (x - t)^{n-1} f(t)\, dt,$$
an expression valid for $n \in \mathbb{N}$. By replacing $(n-1)! = \Gamma(n)$, this formula holds for any $n \in \mathbb{C}$ with $\mathrm{Re}(n) > 0$. What is obtained is the Riemann–Liouville (RL) fractional integral:
$$\mathcal{J}_a^{\alpha}[f(x)] = \frac{1}{\Gamma(\alpha)} \int_a^x (x - t)^{\alpha-1} f(t)\, dt, \qquad \alpha > 0,$$
which coincides with the usual definition of the integral operator when α is an integer. This RL fractional operator satisfies the following semigroup properties:
$$\mathcal{J}_a^{\alpha+\beta}[f(x)] = \mathcal{J}_a^{\alpha}\big[\mathcal{J}_a^{\beta}[f(x)]\big] \qquad\text{and}\qquad \frac{d^m}{dx^m}\, \mathcal{J}_a^{\alpha+m}[f(x)] = \mathcal{J}_a^{\alpha}[f(x)]$$
for real $\alpha, \beta > 0$ and $m \in \mathbb{N}$.
As in the integer-order case, RL fractional integration improves the smoothness properties of functions (see Appendix B for an explanation).
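As a small numerical illustration (a sketch assuming SciPy is available, not code from the paper), the RL integral of a monomial computed by quadrature from its definition can be compared with the closed form (A1) of Appendix A:
```python
# Sketch: RL fractional integral by quadrature vs. the closed form for x^beta.
import math
from scipy.integrate import quad

def rl_integral(f, alpha: float, x: float, a: float = 0.0) -> float:
    """J_a^alpha[f](x); the weakly singular kernel (x-t)^(alpha-1) is handled
    by quad's algebraic weight option."""
    val, _ = quad(f, a, x, weight='alg', wvar=(0.0, alpha - 1.0))
    return val / math.gamma(alpha)

alpha, x = 0.5, 1.3
numeric = rl_integral(lambda t: t ** 2, alpha, x)
closed = math.gamma(3) / math.gamma(3 + alpha) * x ** (2 + alpha)
print(numeric, closed)            # the two values agree to several digits
```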

2.3. Riemann–Liouville Fractional Derivative

Different kinds of operators inverting the RL integral are possible. For a sufficiently smooth function f, the RL fractional derivative of order α > 0 is defined as
$${}^{RL}D_a^{\alpha}[f(x)] := \frac{d^m}{dx^m}\, \mathcal{J}_a^{m-\alpha}[f(x)] = \frac{1}{\Gamma(m - \alpha)}\, \frac{d^m}{dx^m} \int_a^x (x - t)^{m-\alpha-1} f(t)\, dt,$$
where $m = \lceil \alpha \rceil$. In view of the mentioned semigroup properties of $\mathcal{J}_a^{\alpha}$, one can immediately check that ${}^{RL}D_a^{\alpha}\big[\mathcal{J}_a^{\alpha}[f(x)]\big] = f(x)$.
The reverse composition of the RL derivative and integral instead involves initial conditions at x = a , namely [23] (Theorem 2.23)
$$\mathcal{J}_a^{\alpha}\big[{}^{RL}D_a^{\alpha} f(x)\big] = f(x) - \sum_{k=0}^{m-1} \frac{(x - a)^{\alpha - m + k}}{\Gamma(\alpha - m + k + 1)}\, \lim_{z \to a^+} \frac{d^k}{dz^k}\, \mathcal{J}_a^{m-\alpha}[f(z)],$$
provided that $\mathcal{J}_a^{m-\alpha}[f(x)]$ possesses sufficient regularity.
There is a substantial difference with integer-order differential operators: the operator thus-obtained has a non-local character. This is known as the “memory effect”, and this property is more deeply explained in Appendix B.
The fractional derivative is a linear operator:
$${}^{RL}D_a^{\alpha}[k_1\, f(x) + k_2\, g(x)] = k_1\, {}^{RL}D_a^{\alpha}[f(x)] + k_2\, {}^{RL}D_a^{\alpha}[g(x)],$$
and the RL derivative of power functions $f(x) = (x - a)^\beta$ is given by
$${}^{RL}D_a^{\alpha}[(x - a)^\beta] = \frac{\Gamma(\beta + 1)}{\Gamma(\beta - \alpha + 1)}\, (x - a)^{\beta - \alpha},$$
thus showing that the RL derivative of a constant $c$ is not $0$, but ${}^{RL}D_a^{\alpha}[c] = \dfrac{c}{\Gamma(1 - \alpha)}\, (x - a)^{-\alpha}$. By setting $a = 0$, we readily obtain the RL fractional derivative of monomials as
$${}^{RL}D_0^{\alpha}[x^n] = \frac{\Gamma(n + 1)}{\Gamma(n - \alpha + 1)}\, x^{n - \alpha}, \qquad n \in \mathbb{N}, \tag{3}$$
which is known as the fractional power rule. This expression is useful when expanding a function in Taylor series or in terms of orthogonal polynomials as, for instance, the shifted Chebyshev polynomials.
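A one-line sketch of the fractional power rule (3), which is all that is needed below to differentiate polynomial expansions term by term (the helper name is an assumption):
```python
# Sketch of the fractional power rule (3): RL D_0^alpha [x^n].
import math

def rl_deriv_monomial(n: int, alpha: float, x: float) -> float:
    return math.gamma(n + 1) / math.gamma(n - alpha + 1) * x ** (n - alpha)

print(rl_deriv_monomial(2, 0.8, 1.0))   # Gamma(3)/Gamma(2.2) ~ 1.815
```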
In some cases (see [23] (Theorem 2.3)), the RL derivative may satisfy a subsequent derivation property such as
$${}^{RL}D_a^{\alpha}\big[{}^{RL}D_a^{\beta}[f(x)]\big] = {}^{RL}D_a^{\beta}\big[{}^{RL}D_a^{\alpha}[f(x)]\big] = {}^{RL}D_a^{\alpha+\beta}[f(x)], \tag{4}$$
which allows reducing fractional derivatives to the restricted range $\alpha \in (0, 1)$; indeed, when $\beta \in (m, m+1)$, $m \in \mathbb{N}$, one can find $\alpha \in (0, 1)$ such that $\beta = m + \alpha$ and write
$${}^{RL}D_a^{\beta}[f(x)] = {}^{RL}D_a^{m+\alpha}[f(x)] = \frac{d^m}{dx^m}\, {}^{RL}D_a^{\alpha}[f(x)],$$
as for instance in
$${}^{RL}D_a^{5.7}[f(x)] = {}^{RL}D_a^{5+0.7}[f(x)] = \frac{d^5}{dx^5}\, {}^{RL}D_a^{0.7}[f(x)]$$
(the use of (4) must, however, be made with caution, since it does not hold for every function $f$ and every pair of orders $\alpha$ and $\beta$). On the basis of this observation, we restricted our investigation to the range $\alpha \in (0, 1)$.

2.4. Caputo Fractional Derivative

The RL derivative is not the only operator performing the inversion of the RL integral 𝒥 a α [ f ( x ) ] . A further definition is known as the Caputo fractional derivative:
$${}^{C}D_a^{\alpha}[f(x)] := \mathcal{J}_a^{m-\alpha}\!\left[\frac{d^m}{dx^m} f(x)\right] = \frac{1}{\Gamma(m - \alpha)} \int_a^x (x - t)^{m-\alpha-1} f^{(m)}(t)\, dt,$$
and also in this case, it is easy to check that ${}^{C}D_a^{\alpha}\big[\mathcal{J}_a^{\alpha}[f(x)]\big] = f(x)$.
The Caputo derivative requires a stronger regularity of the function f, but its left inversion by means of the RL integral involves initial conditions expressed in terms of integer-order derivatives since [23] (Theorem 2.3)
$$\mathcal{J}_a^{\alpha}\big[{}^{C}D_a^{\alpha} f(x)\big] = f(x) - \sum_{k=0}^{m-1} \frac{(x - a)^k}{k!}\, \frac{d^k}{dx^k} f(a).$$
This property is of importance in fractional differential equations since it implies that initial-value problems with the Caputo fractional derivative are initialized by values of the integer-order derivative of the solution (which are standard in physical problems) and not by the limits of fractional-order integrals as with the RL derivative. However, the two derivatives are strictly connected since
$${}^{C}D_a^{\alpha} f(x) = {}^{RL}D_a^{\alpha} f(x) - \sum_{k=0}^{m-1} \frac{(x - a)^{k - \alpha}}{\Gamma(k - \alpha + 1)}\, \frac{d^k}{dx^k} f(a),$$
and therefore, focusing on just one of them does not appear too restrictive. In this preliminary work, our attention was mainly devoted to the RL definition.
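A quick numerical consistency check of the relation between the two derivatives (a sketch under the stated formulas, not the authors' code) for $f(x) = x^2 + 3$ with $a = 0$ and $0 < \alpha < 1$, where only the $k = 0$ term of the sum survives:
```python
# Sketch: Caputo vs. RL derivative of f(x) = x^2 + 3 for alpha in (0, 1).
import math

alpha, x = 0.6, 1.7
# RL derivative via the power rule; the constant 3 contributes a nonzero term
rl = (math.gamma(3) / math.gamma(3 - alpha) * x ** (2 - alpha)
      + 3.0 * x ** (-alpha) / math.gamma(1 - alpha))
# Caputo derivative: J^(1-alpha) applied to f'(x) = 2x
caputo = 2.0 / math.gamma(3 - alpha) * x ** (2 - alpha)
# the relation above: Caputo = RL - f(0) * x^(-alpha) / Gamma(1 - alpha)
print(caputo, rl - 3.0 * x ** (-alpha) / math.gamma(1 - alpha))   # equal
```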

2.5. Grünwald–Letnikov Definitions

It is worthwhile to mention the Grünwald–Letnikov (GL) definition of fractional derivatives [24], which provides discrete access to the fractional calculus, and it is particularly suitable for numerical treatment. Indeed, the GL derivative is defined as
$${}^{GL}D_a^{\alpha} f(x) = \lim_{h \to 0^+} \frac{1}{h^{\alpha}} \sum_{j=0}^{\lfloor (x - a)/h \rfloor} (-1)^j \binom{\alpha}{j} f(x - j h),$$
where the binomial coefficient $\binom{\alpha}{j}$ must be intended as in (2). It is immediate to see that ${}^{GL}D_a^{\alpha}$ allows a numerical approximation once a constant (possibly small) step $h > 0$, rather than the limit as $h \to 0^+$, is selected. Reference [24] analyzed the numerical stability, convergence, and error of FDEs using the GL derivative, where the asymptotic and the absolute stability of these methods were proven.
The GL definition is obtained from a generalization of the definition of the integer-order derivative expressed as the limit of the difference quotient and after imposing, in order to ensure the convergence of the series, that the function has a value of 0 before the initial point a (see, for instance, [25]). Under the necessary assumptions on the smoothness of f, it is possible to prove an equivalence between the GL and RL definitions.
An integral of the GL type can be defined as well by considering a negative order $-\alpha$ in ${}^{GL}D_a^{\alpha}$, i.e.,
$${}^{GL}\mathcal{J}_a^{\alpha} f(x) = \lim_{h \to 0^+} h^{\alpha} \sum_{j=0}^{\lfloor (x - a)/h \rfloor} (-1)^j \binom{-\alpha}{j} f(x - j h),$$
which instead corresponds to the RL integral initialized at a.
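The GL sum with a small fixed step $h$ gives a simple first-order numerical scheme; below is a sketch (with an assumed helper name), compared against the exact RL derivative of $x^2$, for which the two definitions coincide:
```python
# Sketch: truncated Gruenwald-Letnikov sum with fixed step h.
import math

def gl_derivative(f, alpha: float, x: float, h: float = 1e-3, a: float = 0.0) -> float:
    n = int((x - a) / h)
    total, coeff = 0.0, 1.0              # coeff = (-1)^j * binom(alpha, j)
    for j in range(n + 1):
        total += coeff * f(x - j * h)
        coeff *= (j - alpha) / (j + 1)   # recurrence for the GL weights
    return total / h ** alpha

alpha, x = 0.5, 1.0
print(gl_derivative(lambda t: t ** 2, alpha, x))                  # ~ 1.504
print(math.gamma(3) / math.gamma(3 - alpha) * x ** (2 - alpha))   # exact RL value
```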

3. Background on the Theory of Functional Connections

The theory of functional connections (TFC) performs linear functional interpolation, and functional interpolation is a generalization of interpolation. Instead of selecting a function (or a class of functions) satisfying a set of constraints, functional interpolation derives functionals (called constrained expressions) representing the whole set of functions satisfying the constraints; stated in a different way, TFC derives functionals that always satisfy the whole set of constraints (the constraints are embedded in the functional). In this way, functional interpolation identifies the subset of the function space that fully satisfies the constraints.
These functionals contain a free function, g ( x ) , and by spanning all possible expressions of the free function, the whole space of functions satisfying the constraints is covered. In particular, the free function always appears linear in the TFC, thus simplifying optimization processes. This approach was introduced in [26], and then, immediate applications appeared for solving differential equations [27,28,29,30] and for other mathematical problems [31,32,33,34].
To give a simple example of what functional interpolation is, consider the functional:
$$f(x, g(x)) = g(x) + \frac{2\, g(-3) - 2\, g(\pi) + (9 - \pi^2)\big(1 - \dot g(1)\big)}{2\pi - \pi^2 + 15}\, x + \frac{g(\pi) - g(-3) + (\pi + 3)\big(1 - \dot g(1)\big)}{2\pi - \pi^2 + 15}\, x^2, \tag{5}$$
where $\dot g$ denotes the derivative of $g$. This functional always satisfies the two constraints, $f(-3) = f(\pi)$ and $\dot f(1) = 1$, regardless of the function $g(x)$. The function $g(x)$ is a free function, subject only to being defined where the constraints are specified. Functionals such as (5) represent the whole set of functions satisfying the set of constraints they are derived for. These functionals project the whole space of functions onto just the subspace fully satisfying the constraints. This way, constrained optimization problems, such as differential equations, can be transformed into unconstrained problems and, consequently, be solved using simpler, faster, more robust, and more accurate methods. Univariate functionals such as (5) can be generated by either one of these two formal expressions, called constrained expressions (CEs):
$$f(x, g(x)) = g(x) + \sum_{j=1}^{n} \eta_j(x, g(x))\, s_j(x) \tag{6}$$
$$f(x, g(x)) = g(x) + \sum_{j=1}^{n} \phi_j(x, s(x))\, \rho_j(x, g(x)) \tag{7}$$
where n is the number of linear constraints, g ( x ) is the free function, s j ( x ) is a set of n user-defined linearly independent support functions, η j ( x , g ( x ) ) are coefficient functionals, ϕ j ( x ) are switching functions (they are 1 when evaluated at the constraint they reference and 0 when evaluated at all other constraints), and ρ j ( x , g ( x ) ) are projection functionals representing the constraints written in terms of the free function.
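As a quick numerical illustration (a sketch, not the authors' code), the functional (5) can be checked to satisfy its two constraints for an arbitrary choice of the free function:
```python
# Sketch: verify that the constrained expression (5) satisfies
# f(-3) = f(pi) and f'(1) = 1 for any free function g.
import math

def constrained_expr(x, g, dg):
    den = 2 * math.pi - math.pi ** 2 + 15
    eta1 = (2 * g(-3) - 2 * g(math.pi) + (9 - math.pi ** 2) * (1 - dg(1))) / den
    eta2 = (g(math.pi) - g(-3) + (math.pi + 3) * (1 - dg(1))) / den
    return g(x) + eta1 * x + eta2 * x ** 2

g, dg = math.sin, math.cos                      # any differentiable free function
f = lambda x: constrained_expr(x, g, dg)
print(f(-3) - f(math.pi))                       # ~ 0
eps = 1e-6
print((f(1 + eps) - f(1 - eps)) / (2 * eps))    # ~ 1
```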
Numerically efficient applications of the TFC have already been implemented in optimization problems, outperforming the current methods, especially in solving differential equations. In this area, the TFC has unified initial, boundary, and multi-value problems by usually providing fast solutions with high accuracy.

A Simple Explanatory Example

Let us consider the problem of determining functions f satisfying the initial condition:
$$D_0^{\alpha}[f(x)]\big|_{x_0} = f_0, \tag{8}$$
where, to adopt a more concise notation, from now on $D_0^{\alpha}$ will denote the RL fractional derivative of order $\alpha$, previously written ${}^{RL}D_0^{\alpha}$, and $D_0^{\alpha}[f(x)]\big|_{x_0}$ its value at $x = x_0$. Using the “$\eta$” formulation (6), the constrained expression is
$$f(x, g(x)) = g(x) + \eta\, x^p. \tag{9}$$
After imposing (8) at $x = x_0$, the $\eta$ coefficient can be derived as
$$\eta = \Big(f_0 - D_0^{\alpha}[g(x)]\big|_{x_0}\Big)\, \frac{\Gamma(p - \alpha + 1)}{\Gamma(p + 1)}\, x_0^{\alpha - p}$$
and the constrained expression becomes
$$f(x, g(x)) = g(x) + \Big(f_0 - D_0^{\alpha}[g(x)]\big|_{x_0}\Big)\, \frac{\Gamma(p - \alpha + 1)}{\Gamma(p + 1)}\, x_0^{\alpha - p}\, x^p. \tag{10}$$
This constrained expression represents the whole set of functions satisfying the given constraint (8). It can be used, for instance, in a constrained optimization process, subject to satisfying (8). By means of (10), an initial constrained optimization problem can be transformed to the unconstrained problem of finding the expression of the free function, namely g ( x ) , generating the optimal f ( x ) [20].
The simplest function satisfying the constraint (8) is obtained from (10) by setting $g(x) = 0$:
$$f(x) = f_0\, \frac{\Gamma(p - \alpha + 1)}{\Gamma(p + 1)}\, x_0^{\alpha - p}\, x^p,$$
and indeed, it is easy to check that
$$D_0^{\alpha}[f(x)]\big|_{x_0} = f_0\, \frac{\Gamma(p - \alpha + 1)}{\Gamma(p + 1)}\, x_0^{\alpha - p}\, \frac{\Gamma(p + 1)}{\Gamma(p - \alpha + 1)}\, x_0^{p - \alpha} = f_0.$$
In Figure 1, we show, for three different choices of the free function $g$, namely $g(x) = x^2$, $g(x) = e^{x/2}$, and $g(x) = \cos(2x)$, the plots of $f(x, g(x))$ (left plot) and of its RL derivative $D_0^{\alpha} f(x, g(x))$ (right plot). We imposed the constraint (8) with $\alpha = 0.8$, $x_0 = 1$, and $f_0 = 1$, and we considered the constrained expression (9) with $p = 2$. Although we obtained three different functionals $f(x, g(x))$, all of their RL derivatives satisfy the constraint (8): indeed, in the right plot, we observe that $D_0^{\alpha}[f(x)]\big|_{x_0} = f_0$. For the evaluation of the RL derivatives, we used the formulas in Appendix A.
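The check behind Figure 1 can be reproduced, for one choice of the free function, with a few lines (a sketch; the Mittag–Leffler function of (A8) is evaluated by a plain series truncation, which is adequate only for moderate arguments):
```python
# Sketch: constrained expression (10) with g(x) = cos(2x), alpha = 0.8,
# x0 = 1, f0 = 1, p = 2; its RL derivative at x0 must equal f0.
import math

def mittag_leffler(a, b, z, terms=80):
    return sum(z ** k / math.gamma(a * k + b) for k in range(terms))

alpha, x0, f0, p = 0.8, 1.0, 1.0, 2

def rl_deriv_g(x):      # RL D_0^alpha[cos(2x)] = x^(-alpha) E_{2,1-alpha}(-4x^2), Eq. (A8)
    return x ** (-alpha) * mittag_leffler(2.0, 1.0 - alpha, -4.0 * x * x)

eta = (f0 - rl_deriv_g(x0)) * math.gamma(p - alpha + 1) / math.gamma(p + 1) * x0 ** (alpha - p)

# RL derivative of f(x) = cos(2x) + eta * x^p at x0, using the power rule (3)
check = rl_deriv_g(x0) + eta * math.gamma(p + 1) / math.gamma(p - alpha + 1) * x0 ** (p - alpha)
print(check)            # ~ 1.0 = f0
```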

4. Shifted Chebyshev Polynomials

Chebyshev polynomials of the first kind are orthogonal polynomials defined for $t \in [-1, 1]$ as $T_k(t) = \cos(k \arccos t)$, $k = 0, 1, \dots$. They can be built in different ways, such as, for instance, by means of the three-term recurrence relation:
$$T_{k+1}(t) = 2t\, T_k(t) - T_{k-1}(t), \quad\text{starting with}\quad T_0(t) = 1,\ T_1(t) = t.$$
To operate with fractional-order operators initialized at some point $x_0$, it is convenient to use shifted Chebyshev polynomials (SCPs), which preserve orthogonality on the shifted interval. Appendix C provides a summary of various approaches for solving least-squares problems; they can be adopted in the optimization to estimate the coefficients of the free function, which is expanded in terms of shifted Chebyshev polynomials.
To simplify the notation, we considered the fractional integral and the fractional RL derivative initialized at x = 0 , and we assumed we are interested in approximating functions on the interval [ 0 , 1 ] . The generalization to different intervals is always possible without particular difficulties.
SCPs on $[0, 1]$ are connected to Chebyshev polynomials by the relationship:
$$S_k(t) = T_k(2t - 1), \qquad t \in [0, 1],$$
and the corresponding three-term recurrence relation is
$$S_{k+1}(t) = (4t - 2)\, S_k(t) - S_{k-1}(t), \quad\text{starting with}\quad S_0(t) = 1,\ S_1(t) = 2t - 1. \tag{11}$$
In order to expand free functions in the TFC, it is convenient to express SCPs in terms of monomials:
$$S_k(t) = \sum_{j=0}^{k} a_{k,j}\, t^j, \tag{12}$$
where the coefficients $a_{k,j}$, $j = 0, 1, \dots, k$, mapping SCPs to monomials, are obtained from the recursive form (11) of the SCPs; a closed form can be obtained, for instance, from [35] as
$$a_{0,0} = 1, \qquad a_{k,j} = (-1)^{k-j}\, \frac{k\, (k + j - 1)!\ 2^{2j}}{(k - j)!\ (2j)!}, \quad k \ge 1.$$
Therefore, it is possible to represent the SCPs up to degree m as
$$\begin{bmatrix} S_0(t) \\ S_1(t) \\ \vdots \\ S_m(t) \end{bmatrix} = A \begin{bmatrix} t^0 \\ t^1 \\ \vdots \\ t^m \end{bmatrix}, \qquad A = \begin{bmatrix} a_{00} & 0 & \cdots & 0 \\ a_{10} & a_{11} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{m0} & a_{m1} & \cdots & a_{mm} \end{bmatrix},$$
and the first few coefficients (for $m = 5$) can be easily evaluated as
$$A = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 2 & 0 & 0 & 0 & 0 \\ 1 & -8 & 8 & 0 & 0 & 0 \\ -1 & 18 & -48 & 32 & 0 & 0 \\ 1 & -32 & 160 & -256 & 128 & 0 \\ -1 & 50 & -400 & 1120 & -1280 & 512 \end{bmatrix}.$$
The matrix expression of the SCPs allows us to provide the RL fractional derivatives of the SCPs after using (3) for the RL derivatives of monomials:
$${}^{RL}D_0^{\alpha} \begin{bmatrix} S_0(t) \\ S_1(t) \\ \vdots \\ S_m(t) \end{bmatrix} = A \begin{bmatrix} \frac{\Gamma(0+1)}{\Gamma(0-\alpha+1)}\, t^{0-\alpha} \\ \frac{\Gamma(1+1)}{\Gamma(1-\alpha+1)}\, t^{1-\alpha} \\ \vdots \\ \frac{\Gamma(m+1)}{\Gamma(m-\alpha+1)}\, t^{m-\alpha} \end{bmatrix} = A\, D_\alpha \begin{bmatrix} t^{0-\alpha} \\ t^{1-\alpha} \\ \vdots \\ t^{m-\alpha} \end{bmatrix} = t^{-\alpha}\, A\, D_\alpha \begin{bmatrix} t^0 \\ t^1 \\ \vdots \\ t^m \end{bmatrix}, \tag{13}$$
where $D_\alpha$ is a diagonal matrix of ratios of Gamma functions:
$$D_\alpha = \begin{bmatrix} \frac{\Gamma(1)}{\Gamma(1-\alpha)} & 0 & \cdots & 0 \\ 0 & \frac{\Gamma(2)}{\Gamma(2-\alpha)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{\Gamma(m+1)}{\Gamma(m+1-\alpha)} \end{bmatrix}.$$
It is well known that integer-order derivatives of SCPs can be expressed in terms of the SCPs themselves according to
$$\dot S(t) = \begin{bmatrix} \dot S_0(t) \\ \dot S_1(t) \\ \vdots \\ \dot S_m(t) \end{bmatrix} = \begin{bmatrix} b_{00} & 0 & \cdots & 0 \\ b_{10} & b_{11} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ b_{m0} & b_{m1} & \cdots & b_{mm} \end{bmatrix} \begin{bmatrix} S_0(t) \\ S_1(t) \\ \vdots \\ S_m(t) \end{bmatrix} = B\, S(t)$$
and
$$\ddot S(t) = \begin{bmatrix} \ddot S_0(t) \\ \ddot S_1(t) \\ \vdots \\ \ddot S_m(t) \end{bmatrix} = \begin{bmatrix} c_{00} & 0 & \cdots & 0 \\ c_{10} & c_{11} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ c_{m0} & c_{m1} & \cdots & c_{mm} \end{bmatrix} \begin{bmatrix} S_0(t) \\ S_1(t) \\ \vdots \\ S_m(t) \end{bmatrix} = C\, S(t),$$
where matrices B and C can be easily built in a recursive way. Indeed, by using the expression of their first column:
$$b_{k,1} = (-1)^k\, 2\,(k - 1) \qquad\text{and}\qquad c_{k,1} = 4\,(k - 1)\,\big[\,1 - k \bmod 2\,\big], \qquad k = 1, 2, \dots,$$
one can build the following columns according to
$$b_{k+i,\,1+i} = 2\, b_{k,1} \qquad\text{and}\qquad c_{k+i,\,1+i} = 2\, c_{k,1}, \qquad i = 1, 2, \dots.$$
For instance, for $m = 5$, the $B$ and $C$ lower-triangular matrices are
$$B = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & 0 \\ 4 & 4 & 0 & 0 & 0 & 0 \\ 6 & 8 & 4 & 0 & 0 & 0 \\ 8 & 12 & 8 & 4 & 0 & 0 \\ 10 & 16 & 12 & 8 & 4 & 0 \end{bmatrix} \qquad\text{and}\qquad C = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 8 & 0 & 0 & 0 & 0 & 0 \\ 0 & 16 & 0 & 0 & 0 & 0 \\ 16 & 0 & 16 & 0 & 0 & 0 \\ 0 & 32 & 0 & 16 & 0 & 0 \end{bmatrix}.$$
To obtain a similar relationship with RL derivatives, we first observe that the inverse mapping of (12) is also available, namely
$$t^k = \sum_{j=0}^{k} \bar a_{k,j}\, S_j(t),$$
where the coefficients $\bar a_{k,j}$, $j = 0, 1, \dots, k$, are given (after simple manipulations) by [35]
$$\bar a_{k,j} = \frac{C_j}{2^{2k}} \binom{2k}{k - j}, \qquad C_j = \begin{cases} 1 & j = 0, \\ 2 & j > 0. \end{cases}$$
Thus, we write
$$\begin{bmatrix} t^0 \\ t^1 \\ \vdots \\ t^m \end{bmatrix} = \bar A \begin{bmatrix} S_0(t) \\ S_1(t) \\ \vdots \\ S_m(t) \end{bmatrix}, \qquad \bar A = \begin{bmatrix} \bar a_{00} & 0 & \cdots & 0 \\ \bar a_{10} & \bar a_{11} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \bar a_{m0} & \bar a_{m1} & \cdots & \bar a_{mm} \end{bmatrix}.$$
We note that $\bar A = A^{-1}$ and, therefore, thanks to (13), one obtains
$${}^{RL}D_0^{\alpha} \begin{bmatrix} S_0(t) \\ S_1(t) \\ \vdots \\ S_m(t) \end{bmatrix} = t^{-\alpha}\, A\, D_\alpha\, A^{-1} \begin{bmatrix} S_0(t) \\ S_1(t) \\ \vdots \\ S_m(t) \end{bmatrix}. \tag{14}$$
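A compact sketch of Equation (14) (assuming NumPy; the paper's own computations are in Matlab): the matrix $A$ is built from the recurrence (11), $D_\alpha$ collects the Gamma-function ratios, and the result is checked against the power rule applied directly to $S_2(t) = 8t^2 - 8t + 1$:
```python
# Sketch of Eq. (14): RL derivative of shifted Chebyshev polynomials.
import math
import numpy as np

def scp_matrix(m: int) -> np.ndarray:
    """Lower-triangular A with S_k(t) = sum_j A[k, j] t^j, from recurrence (11)."""
    A = np.zeros((m + 1, m + 1))
    A[0, 0] = 1.0
    if m >= 1:
        A[1, 0], A[1, 1] = -1.0, 2.0
    for k in range(1, m):
        A[k + 1, 1:] += 4.0 * A[k, :-1]                 # 4 t S_k(t)
        A[k + 1, :] -= 2.0 * A[k, :] + A[k - 1, :]      # -2 S_k(t) - S_{k-1}(t)
    return A

m, alpha, t = 5, 0.5, 0.7
A = scp_matrix(m)
D = np.diag([math.gamma(n + 1) / math.gamma(n + 1 - alpha) for n in range(m + 1)])

powers = np.array([t ** n for n in range(m + 1)])
S = A @ powers                                          # S_0(t), ..., S_m(t)
dS = t ** (-alpha) * (A @ D @ np.linalg.inv(A)) @ S     # Eq. (14)

# direct check on S_2(t) = 8t^2 - 8t + 1 via the power rule (3)
direct = (8 * math.gamma(3) / math.gamma(3 - alpha) * t ** (2 - alpha)
          - 8 * math.gamma(2) / math.gamma(2 - alpha) * t ** (1 - alpha)
          + t ** (-alpha) / math.gamma(1 - alpha))
print(dS[2], direct)
```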

Example

Let us consider, on the interval [ 0 , 1 ] , the following function:
$$g(t) = \left|\tfrac{1}{2} - t^2\right|^{1/2}$$
and its approximation $g_m(t) \approx g(t)$ obtained from the first $m + 1$ terms of the shifted Chebyshev expansion, namely
$$g_m(t) = \boldsymbol{\xi}_m^T\, \mathbf{S}_m(t), \quad\text{where}\quad \mathbf{S}_m(t) = \begin{bmatrix} S_0(t) \\ S_1(t) \\ \vdots \\ S_m(t) \end{bmatrix}, \qquad \boldsymbol{\xi}_m = \begin{bmatrix} \xi_0 \\ \xi_1 \\ \vdots \\ \xi_m \end{bmatrix},$$
where coefficients ξ k can be easily evaluated after computing (by some accurate quadrature rule) the integrals:
$$\xi_k = \frac{4}{\pi\, c_k} \int_0^1 \frac{g(t)\, S_k(t)}{\sqrt{1 - (2t - 1)^2}}\, dt,$$
with $c_0 = 2$ and $c_k = 1$ for any $k \ge 1$. By using (14), we are, hence, able to evaluate
$${}^{RL}D_0^{\alpha}\, g_m(t) = \boldsymbol{\xi}_m^T\, {}^{RL}D_0^{\alpha}\, \mathbf{S}_m(t) = \boldsymbol{\xi}_{\alpha,m}^T\, t^{-\alpha}\, \mathbf{S}_m(t), \qquad \boldsymbol{\xi}_{\alpha,m}^T = \boldsymbol{\xi}_m^T\, A\, D_\alpha\, A^{-1},$$
which provides an approximation to RL D 0 α g ( t ) in terms of shifted Chebyshev polynomials.
In the left plot of Figure 2, we show the function $g(t)$ and the error $|g(t) - g_m(t)|$ for some values of $m$. The RL derivative of $g_m(t)$, where $m = 12$, is instead presented in the right plot of Figure 2 for $\alpha \in \{0.5, 0.7, 0.9\}$.
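The coefficients $\xi_k$ of the truncated expansion can be obtained, for instance, by Gauss–Chebyshev quadrature; the sketch below (an assumption, not the paper's code) uses a simple non-smooth test function for $g$:
```python
# Sketch: shifted Chebyshev coefficients xi_k by Gauss-Chebyshev quadrature.
import math

def scp_coefficients(g, m: int, nodes: int = 200):
    """xi_k = 4/(pi c_k) int_0^1 g(t) S_k(t) / sqrt(1 - (2t-1)^2) dt."""
    thetas = [(2 * i - 1) * math.pi / (2 * nodes) for i in range(1, nodes + 1)]
    xi = []
    for k in range(m + 1):
        ck = 2.0 if k == 0 else 1.0
        s = sum(g((math.cos(th) + 1) / 2) * math.cos(k * th) for th in thetas)
        xi.append(2.0 * s / (ck * nodes))
    return xi

def eval_series(xi, t):
    """g_m(t) = sum_k xi_k S_k(t), with S_k(t) = cos(k arccos(2t - 1))."""
    x = 2 * t - 1
    return sum(c * math.cos(k * math.acos(x)) for k, c in enumerate(xi))

g = lambda t: abs(0.5 - t ** 2) ** 0.5     # a non-smooth test function on [0, 1]
xi = scp_coefficients(g, m=12)
print(g(0.3), eval_series(xi, 0.3))        # pointwise comparison
```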

5. Numerical Examples

This section shows, by three distinct examples, how to apply the TFC to obtain functionals, called constrained expressions, with embedded fractional constraints. The first example involves, again, a single fractional derivative constraint, which is handled using the switching-projection TFC formulation, while the second example shows how to derive, via the TFC $\eta$ formulation [20], the constrained expression subject to three constraints specified by the values of the function, of its fractional derivative, and of its fractional integral. The third example is devoted to presenting the use of the switching-projection formulation [20] to derive the constrained expression for two constraints defined as linear combinations of fractional constraints specified at distinct locations.

5.1. Single Fractional Constraint

Let us derive, by the switching-projection TFC formulation (7), the functional representing all functions satisfying the single fractional derivative constraint:
$$D_0^{1/3}[f(x)]\big|_{2} = \pi. \tag{15}$$
The number of constraints is $n = 1$. Therefore, just one support function and one switching function are needed. Let $s(x) = x^2$ be the support function. The switching function is $\phi(x) = \xi\, s(x) = \xi\, x^2$, and it is subject to
$$D_0^{1/3}[\phi(x)]\big|_{2} = \xi\, D_0^{1/3}[x^2]\big|_{2} = \xi\, \frac{\Gamma(3)\, 2^{5/3}}{\Gamma(8/3)} = 1,$$
from which the unknown coefficient $\xi = \dfrac{\Gamma(8/3)}{\Gamma(3)\, 2^{5/3}}$ is computed. Therefore, the expressions of the switching function and of the projection functional are
$$\phi(x) = \frac{\Gamma(8/3)}{\Gamma(3)\, 2^{5/3}}\, x^2 \qquad\text{and}\qquad \rho(x, g(x)) = \pi - D_0^{1/3}[g(x)]\big|_{2},$$
and the constrained expression, $f(x, g(x)) = g(x) + \phi(x)\, \rho(x, g(x))$, representing the whole set of functions satisfying (15), is
$$f(x, g(x)) = g(x) + \frac{\Gamma(8/3)}{\Gamma(3)\, 2^{5/3}}\, x^2\, \Big(\pi - D_0^{1/3}[g(x)]\big|_{2}\Big). \tag{16}$$
Note that, by setting g ( x ) = 0 , the simplest interpolation result is obtained, i.e.,
$$D_0^{1/3}[f(x)]\big|_{2} = \frac{\Gamma(8/3)}{\Gamma(3)\, 2^{5/3}}\, \pi\, D_0^{1/3}[x^2]\big|_{2} = \frac{\Gamma(8/3)}{\Gamma(3)\, 2^{5/3}}\, \pi\, \frac{\Gamma(3)\, 2^{5/3}}{\Gamma(8/3)} = \pi.$$
Equation (16) can also be validated using any expression of the free function. For example, by setting $g(x) = c\, x + d \cos x$, thanks to (A2) and (A8), we obtain
$$D_0^{1/3}[g(x)]\big|_{2} = D_0^{1/3}[c\, x + d \cos x]\big|_{2} = c\, \frac{\Gamma(2)}{\Gamma(5/3)}\, 2^{2/3} + d\, 2^{-1/3}\, E_{2,\,2/3}(-2^2) =: \bar c + \bar d,$$
and hence,
$$D_0^{1/3}[f(x)]\big|_{2} = D_0^{1/3}\left[ g(x) + \frac{\Gamma(8/3)}{\Gamma(3)\, 2^{5/3}}\, x^2\, \Big(\pi - D_0^{1/3}[g(x)]\big|_{2}\Big) \right]\bigg|_{2} = \bar c + \bar d + \frac{\Gamma(8/3)}{\Gamma(3)\, 2^{5/3}}\, \big(\pi - \bar c - \bar d\big)\, D_0^{1/3}[x^2]\big|_{2} = \bar c + \bar d + \frac{\Gamma(8/3)}{\Gamma(3)\, 2^{5/3}}\, \big(\pi - \bar c - \bar d\big)\, \frac{\Gamma(3)\, 2^{5/3}}{\Gamma(8/3)} = \pi.$$
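The validation above can also be carried out numerically (a sketch; the Mittag–Leffler series is a naive truncation and the helper names are assumptions): whatever the values of $c$ and $d$, the RL derivative of (16) at $x = 2$ returns $\pi$:
```python
# Sketch: numerical validation of (16) with g(x) = c x + d cos(x).
import math

def mittag_leffler(a, b, z, terms=80):
    return sum(z ** k / math.gamma(a * k + b) for k in range(terms))

alpha, c, d = 1.0 / 3.0, 0.7, -1.3     # arbitrary choice of c and d
xi = math.gamma(8.0 / 3.0) / (math.gamma(3) * 2 ** (5.0 / 3.0))

# D_0^(1/3)[g(x)] at x = 2, via (A2) for c x and (A8) for d cos(x)
dg2 = (c * math.gamma(2) / math.gamma(5.0 / 3.0) * 2 ** (2.0 / 3.0)
       + d * 2 ** (-alpha) * mittag_leffler(2.0, 1.0 - alpha, -4.0))

# D_0^(1/3)[f(x)] at x = 2 for f = g + xi x^2 (pi - dg2); note D[x^2]|_2 = 1/xi
check = dg2 + xi * (math.pi - dg2) * (math.gamma(3) * 2 ** (5.0 / 3.0) / math.gamma(8.0 / 3.0))
print(check, math.pi)                   # both ~ 3.14159...
```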

5.2. Three Mixed Constraints

This example includes n = 3 constraints, the function value, fractional derivative, and fractional integral constraints:
$$f(x)\big|_{1} = 3, \qquad D_0^{3/8}[f(x)]\big|_{2} = -1, \qquad \mathcal{J}_0^{4/5}[f(x)]\big|_{2} = 1. \tag{17}$$
For these constraints, the constrained expression is derived using the $\eta$ formulation (6). Let the support functions be $s(x) = \{1, x, x^2\}$. Then, the constraints imply
$$\begin{bmatrix} 3 - g(x)\big|_{1} \\ -1 - D_0^{3/8}[g(x)]\big|_{2} \\ 1 - \mathcal{J}_0^{4/5}[g(x)]\big|_{2} \end{bmatrix} = \begin{bmatrix} s_1(1) & s_2(1) & s_3(1) \\ D_0^{3/8}[s_1(x)]\big|_{2} & D_0^{3/8}[s_2(x)]\big|_{2} & D_0^{3/8}[s_3(x)]\big|_{2} \\ \mathcal{J}_0^{4/5}[s_1(x)]\big|_{2} & \mathcal{J}_0^{4/5}[s_2(x)]\big|_{2} & \mathcal{J}_0^{4/5}[s_3(x)]\big|_{2} \end{bmatrix} \begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{bmatrix}.$$
By inverting the matrix, the $\eta_j$ coefficients can be computed in terms of the free function:
$$\begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ (2\pi)^{-1/2} & \dfrac{2^{5/8}}{\Gamma(13/8)} & \dfrac{2^{29/8}}{\Gamma(21/8)} \\ \dfrac{2^{4/5}}{\Gamma(9/5)} & \dfrac{2^{9/5}}{\Gamma(14/5)} & \dfrac{2 \cdot 2^{14/5}}{\Gamma(19/5)} \end{bmatrix}^{-1} \begin{bmatrix} 3 - g(x)\big|_{1} \\ -1 - D_0^{3/8}[g(x)]\big|_{2} \\ 1 - \mathcal{J}_0^{4/5}[g(x)]\big|_{2} \end{bmatrix},$$
whose approximated solution is
$$\begin{aligned} \eta_1 &\approx 55.3507\, \big(3 - g(x)\big|_{1}\big) - 3.9464\, \big(1 + D_0^{3/8}[g(x)]\big|_{2}\big) - 29.9166\, \big(1 - \mathcal{J}_0^{4/5}[g(x)]\big|_{2}\big), \\ \eta_2 &\approx -64.9326\, \big(3 - g(x)\big|_{1}\big) + 4.8673\, \big(1 + D_0^{3/8}[g(x)]\big|_{2}\big) + 35.7737\, \big(1 - \mathcal{J}_0^{4/5}[g(x)]\big|_{2}\big), \\ \eta_3 &\approx 10.5818\, \big(3 - g(x)\big|_{1}\big) - 0.9208\, \big(1 + D_0^{3/8}[g(x)]\big|_{2}\big) - 5.8572\, \big(1 - \mathcal{J}_0^{4/5}[g(x)]\big|_{2}\big). \end{aligned}$$
Then, all functions simultaneously satisfying the constraints given in (17) can be represented by the constrained expression:
$$f(x, g(x)) = g(x) + \eta_1 + \eta_2\, x + \eta_3\, x^2.$$

5.3. Two Linear Combinations of Fractional Constraints

This example is provided to show how to proceed with a multiple linear combination of constraints. Let the constraints be
$$\pi\, f(1) - 3\, D_0^{3/4}[f(x)]\big|_{3} + 2\, \mathcal{J}_0^{3/8}[f(x)]\big|_{2} = 5, \qquad 2\, \frac{d^2 f(x)}{dx^2}\bigg|_{1} + D_0^{4/5}[f(x)]\big|_{3} + 2\, \mathcal{J}_0^{5/8}[f(x)]\big|_{2} = 0. \tag{18}$$
The constrained expression for these two constraints is derived using the TFC switching-projection formulation given in (7). Using the support functions $s(x) = \{e^x, x^3\}$, the switching functions are expressed in terms of the $\alpha_{ij}$ coefficients:
$$\phi_1(x) = \alpha_{11}\, e^x + \alpha_{21}\, x^3 \qquad\text{and}\qquad \phi_2(x) = \alpha_{12}\, e^x + \alpha_{22}\, x^3.$$
To better clarify, let us write the constraints (18) as
$$\begin{aligned} C_1(f) &= 5, \qquad & C_1(f) &= \pi\, f(1) - 3\, D_0^{3/4}[f(x)]\big|_{3} + 2\, \mathcal{J}_0^{3/8}[f(x)]\big|_{2}, \\ C_2(f) &= 0, \qquad & C_2(f) &= 2\, \frac{d^2 f(x)}{dx^2}\bigg|_{1} + D_0^{4/5}[f(x)]\big|_{3} + 2\, \mathcal{J}_0^{5/8}[f(x)]\big|_{2}, \end{aligned}$$
and express the switching conditions as
$$C_i(\phi_j) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$
Therefore, in view of the linearity of ϕ 1 and ϕ 2 , the switching conditions are given by
$$\begin{bmatrix} \alpha_{11}\, C_1(e^x) + \alpha_{21}\, C_1(x^3) & \alpha_{11}\, C_2(e^x) + \alpha_{21}\, C_2(x^3) \\ \alpha_{12}\, C_1(e^x) + \alpha_{22}\, C_1(x^3) & \alpha_{12}\, C_2(e^x) + \alpha_{22}\, C_2(x^3) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
After evaluating the constraints $C_1$ and $C_2$ at the support functions $e^x$ and $x^3$:
$$\begin{aligned} C_1(e^x) &= \pi\, e - 3\, D_0^{3/4}[e^x]\big|_{3} + 2\, \mathcal{J}_0^{3/8}[e^x]\big|_{2}, & C_1(x^3) &= \pi - 3\, D_0^{3/4}[x^3]\big|_{3} + 2\, \mathcal{J}_0^{3/8}[x^3]\big|_{2}, \\ C_2(e^x) &= 2\, e + D_0^{4/5}[e^x]\big|_{3} + 2\, \mathcal{J}_0^{5/8}[e^x]\big|_{2}, & C_2(x^3) &= 12 + D_0^{4/5}[x^3]\big|_{3} + 2\, \mathcal{J}_0^{5/8}[x^3]\big|_{2}, \end{aligned}$$
coefficients α i j are computed by the simple matrix inversion [20]:
$$\begin{bmatrix} \alpha_{11} & \alpha_{21} \\ \alpha_{12} & \alpha_{22} \end{bmatrix} = \begin{bmatrix} C_1(e^x) & C_2(e^x) \\ C_1(x^3) & C_2(x^3) \end{bmatrix}^{-1} \approx \begin{bmatrix} 0.06460 & -0.05044 \\ 0.08684 & -0.04797 \end{bmatrix}$$
and the switching functions can be approximated as
$$\phi_1(x) = 0.06460\, e^x - 0.05044\, x^3 \qquad\text{and}\qquad \phi_2(x) = 0.08684\, e^x - 0.04797\, x^3. \tag{19}$$
The projection functionals are [20]
$$\begin{aligned} \rho_1(x, g(x)) &= 5 - \pi\, g(1) + 3\, D_0^{3/4}[g(x)]\big|_{3} - 2\, \mathcal{J}_0^{3/8}[g(x)]\big|_{2}, \\ \rho_2(x, g(x)) &= -2\, \frac{d^2 g(x)}{dx^2}\bigg|_{1} - D_0^{4/5}[g(x)]\big|_{3} - 2\, \mathcal{J}_0^{5/8}[g(x)]\big|_{2}, \end{aligned} \tag{20}$$
and by using the expressions provided by (19) and (20), the constrained expression representing all functions satisfying the constraints given in (18) is
$$f(x, g(x)) = g(x) + \phi_1(x)\, \rho_1(x, g(x)) + \phi_2(x)\, \rho_2(x, g(x)). \tag{21}$$
The constrained expressions must be validated by showing that they satisfy all constraints, no matter what the free function is. In particular, by selecting g ( x ) = 0 , the functional interpolation problem is transformed into a simple interpolation problem that uses the support functions selected.
Equation (21) satisfies the constraints given in (18) for any free function. Let us prove it by selecting (for simplicity) g ( x ) = c x 2 , where c is an unknown constant. Using this free function, the projection functionals become
$$\begin{aligned} \rho_1(x, g(x)) &= 5 - \pi\, c + 3\, c\, D_0^{3/4}[x^2]\big|_{3} - 2\, c\, \mathcal{J}_0^{3/8}[x^2]\big|_{2} \approx 5 + 10.6189\, c, \\ \rho_2(x, g(x)) &= -4\, c - c\, D_0^{4/5}[x^2]\big|_{3} - 2\, c\, \mathcal{J}_0^{5/8}[x^2]\big|_{2} \approx -17.2358\, c, \end{aligned}$$
and using the approximated expression of the switching functions and the projection functionals, the constrained expression becomes
$$f(x, c\, x^2) \approx c\, x^2 - \big(0.8107\, c - 0.3230\big)\, e^x + x^3\, \big(0.2912\, c - 0.2522\big). \tag{22}$$
Thus, by replacing this expression in the constraints, we obtain the residuals:
$$R_1(f) = C_1(f) - 5, \qquad R_2(f) = C_2(f),$$
which have values at the machine-error level, as we show in Figure 3, where $R_1(f)$ and $R_2(f)$ are plotted as a function of $c \in [0, 1]$ (note that all available digits provided by Matlab were used for the coefficients and not just the few digits displayed in (19) and (22)).

6. Discussion

This work extended the application of the theory of functional connections (TFC), an analytical framework to derive functionals representing all functions interpolating a set of constraints, to constraints made of fractional-order operators (derivatives and integrals). The TFC has been developed for constraints defined by points, integer derivatives and integrals, limits, components, and any linear combination of them for the univariate and the multivariate case [20]. In this article, constraints made of fractional derivatives and fractional integrals for the univariate case were included in the TFC framework. Although the extension was presented for the Riemann–Liouville definition of the fractional derivative and integral, this choice is actually not restrictive, since the method presented can be used, as it is, for other definitions; the only difference is in the corresponding way to compute the fractional operators.
The representation by the TFC of the whole set of functions interpolating some given linear constraints was obtained by deriving analytical functionals, called constrained expressions, containing a free function, $g(x)$. No matter what $g(x)$ is, the constrained expression analytically satisfies the constraints; in addition, by spanning all possible expressions of $g(x)$, the whole set of functions interpolating the constraints is obtained.
The extension was validated by three illustrative examples.

Author Contributions

Conceptualization, formal analysis, methodology, software, writing: D.M., R.G., and L.N.; supervision, D.M. and R.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TFC    Theory of functional connections
SCP    Shifted Chebyshev polynomials

Appendix A. Some Fractional Integrals and Derivatives with Closed-Form Expressions

Here, we provide closed-form expressions for fractional integrals and derivatives of some elementary functions. We refer to [25,36] for further results on this subject. Note that all operators are initialized at $x = 0$. Moreover, $\beta$ is any real number $> -1$ (possibly an integer) and $c, \omega \in \mathbb{R}$.
$$\mathcal{J}_0^{\alpha}[x^\beta] = \frac{\Gamma(\beta + 1)}{\Gamma(\beta + 1 + \alpha)}\, x^{\beta + \alpha} \tag{A1}$$
$${}^{RL}D_0^{\alpha}[x^\beta] = \begin{cases} 0 & \beta \in \{\alpha - \lceil\alpha\rceil,\ \alpha - \lceil\alpha\rceil + 1,\ \dots,\ \alpha - 1\}, \\[4pt] \dfrac{\Gamma(\beta + 1)}{\Gamma(\beta + 1 - \alpha)}\, x^{\beta - \alpha} & \text{otherwise} \end{cases} \tag{A2}$$
$$\mathcal{J}_0^{\alpha}[e^{cx}] = x^{\alpha}\, E_{1,1+\alpha}(cx) = e^{cx}\, c^{-\alpha}\, \frac{\gamma(\alpha, cx)}{\Gamma(\alpha)} \tag{A3}$$
$${}^{RL}D_0^{\alpha}[e^{cx}] = x^{-\alpha}\, E_{1,1-\alpha}(cx) \tag{A4}$$
$$\mathcal{J}_0^{\alpha}[\sin(\omega x)] = \omega\, x^{1+\alpha}\, E_{2,2+\alpha}(-\omega^2 x^2) \tag{A5}$$
$${}^{RL}D_0^{\alpha}[\sin(\omega x)] = \omega\, x^{1-\alpha}\, E_{2,2-\alpha}(-\omega^2 x^2) \tag{A6}$$
$$\mathcal{J}_0^{\alpha}[\cos(\omega x)] = x^{\alpha}\, E_{2,1+\alpha}(-\omega^2 x^2) \tag{A7}$$
$${}^{RL}D_0^{\alpha}[\cos(\omega x)] = x^{-\alpha}\, E_{2,1-\alpha}(-\omega^2 x^2) \tag{A8}$$
with $\gamma(s, x)$ and $E_{\alpha,\beta}(x)$ denoting, respectively, the lower incomplete gamma function:
$$\gamma(s, x) = \int_0^x u^{s-1}\, e^{-u}\, du$$
and the Mittag–Leffler function:
$$E_{\alpha,\beta}(x) = \sum_{k=0}^{\infty} \frac{x^k}{\Gamma(\alpha k + \beta)}.$$
The fractional derivatives of trigonometric functions are useful when representing a function by Fourier series.
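A naive truncated-series evaluation of the Mittag–Leffler function is enough to use the formulas above for moderate arguments (a sketch; for large arguments, dedicated algorithms such as those discussed in [25] should be preferred):
```python
# Sketch: truncated series for the Mittag-Leffler function E_{alpha,beta}(x).
import math

def mittag_leffler(alpha: float, beta: float, z: float, terms: int = 80) -> float:
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

print(mittag_leffler(1.0, 1.0, 1.5), math.exp(1.5))   # E_{1,1}(z) = e^z
# RL derivative of cos(x) at x = 2 for alpha = 1/3, via (A8):
a = 1.0 / 3.0
print(2 ** (-a) * mittag_leffler(2.0, 1.0 - a, -4.0))
```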

Appendix B. Non-Locality of Fractional Operators (Memory Effect)

This Appendix explains the smoothness properties and the “memory effect” of the fractional integral. Let us recall the definition of the Hölder space $H^\mu[a,b]$ of order $\mu$:
$$H^\mu[a,b] := \Big\{ f: [a,b] \to \mathbb{R}\ ;\ \exists\, c > 0,\ \forall\, x, y \in [a,b]:\ |f(x) - f(y)| \le c\, |x - y|^\mu \Big\}.$$
This condition, which can also be defined for functions between metric spaces, generalizes Lipschitz continuity (the case $\mu = 1$), which characterizes functions of limited growth, in the sense that the ratio between the variation of the ordinate and the variation of the abscissa can never exceed a fixed value, called the Lipschitz constant. It is a stronger condition than continuity. If $\mu = 0$, the condition reduces to the boundedness of the function.
Theorem A1.
Let $\phi \in H^\mu[a,b]$ for some $\mu \in [0, 1]$ and $0 < \alpha < 1$. Then [23],
$$\mathcal{J}_a^{\alpha} \phi(x) = \frac{\phi(a)}{\Gamma(\alpha + 1)}\, (x - a)^{\alpha} + \Phi(x)$$
for some $\Phi$. This function $\Phi$ satisfies
$$\Phi(x) = O\big((x - a)^{\mu + \alpha}\big)$$
for $x \to a$, where $O(\cdot)$ indicates that $\Phi(x)$ is an infinitesimal of the same order as $(x - a)^{\mu + \alpha}$ for $x \to a$. Furthermore,
$$\Phi \in \begin{cases} H^{\mu + \alpha}[a,b] & \text{if } \mu + \alpha < 1, \\ H^{*}[a,b] & \text{if } \mu + \alpha = 1, \\ H^{1}[a,b] & \text{if } \mu + \alpha > 1. \end{cases}$$
Theorem A2.
Let $\alpha > 0$, $p > \max\{1, 1/\alpha\}$, and $\phi \in L_p[a,b]$. Then,
$$\mathcal{J}_a^{\alpha} \phi(x) = o\big((x - a)^{\alpha - 1/p}\big)$$
for $x \to a^+$, where $o(\cdot)$ indicates that $\mathcal{J}_a^{\alpha} \phi(x)$ is an infinitesimal of higher order than $(x - a)^{\alpha - 1/p}$ for $x \to a^+$. If, moreover, $\alpha - 1/p \notin \mathbb{N}$, then $\mathcal{J}_a^{\alpha} \phi \in C^{\lfloor \alpha - 1/p \rfloor}[a,b]$ and $D^{\lfloor \alpha - 1/p \rfloor}\, \mathcal{J}_a^{\alpha} \phi \in H^{\alpha - 1/p - \lfloor \alpha - 1/p \rfloor}[a,b]$.
Memory effect property: Let $\alpha \in (0, 1]$ and compute the difference between two evaluations of $\mathcal{J}_0^{\alpha} f(x)$ at $x_1$ and $x_2$, with $x_1 < x_2$:
$$H = \mathcal{J}_0^{\alpha} f(x_2) - \mathcal{J}_0^{\alpha} f(x_1) = \frac{1}{\Gamma(\alpha)} \int_0^{x_2} (x_2 - t)^{\alpha - 1} f(t)\, dt - \frac{1}{\Gamma(\alpha)} \int_0^{x_1} (x_1 - t)^{\alpha - 1} f(t)\, dt = \frac{1}{\Gamma(\alpha)} \int_0^{x_1} \big[(x_2 - t)^{\alpha - 1} - (x_1 - t)^{\alpha - 1}\big] f(t)\, dt + \frac{1}{\Gamma(\alpha)} \int_{x_1}^{x_2} (x_2 - t)^{\alpha - 1} f(t)\, dt.$$
If $\alpha = 1$, the first integral vanishes and
$$H = \frac{1}{\Gamma(\alpha)} \int_{x_1}^{x_2} (x_2 - t)^{\alpha - 1} f(t)\, dt = \int_{x_1}^{x_2} f(t)\, dt.$$
Therefore, $H$ depends only on what happens in $[x_1, x_2]$, and no information about $f(x)$ is needed for $x \in [0, x_1)$. This property expresses the locality of integer-order operators.
For the fractional case, the situation is different. In general, the first integral is not zero, and therefore, to evaluate H, it is necessary to know the history of f ( x ) from the initial point 0 to the point x 2 of interest; this is to underline the non-locality of fractional differential and integral operators, which characterizes the memory effect in the process.
Interesting applications of the memory effect of the fractional integrals are the description of the behavior of a crowd of pedestrians, especially to characterize the competitive and cooperative interactions between pedestrians [37], and the generalization of the dynamical model of love/hate [38].

Appendix C. Least-Squares Approaches

There are several different approaches to solve by least squares the over-determined linear system $A\, x = b$, where $A \in \mathbb{R}^{n \times m}$:
  • The common solution (normal equations): $x = (A^T A)^{-1} A^T b$;
  • The QR decomposition: $A = Q\, R$, then $x = R^{-1} Q^T b$, where $Q$ is an orthogonal matrix and $R$ is an upper-triangular matrix;
  • The SVD decomposition: $A = U \Sigma V^T$, then $x = A^{+} b = V \Sigma^{+} U^T b$, where $U$ and $V$ are orthogonal matrices and $\Sigma^{+}$ is the pseudo-inverse of $\Sigma$, formed by replacing every non-zero diagonal entry by its reciprocal and transposing the resulting matrix;
  • The Cholesky decomposition: $A^T A\, x = U^T U\, x = A^T b$, then $x = U^{-1} U^{-T} A^T b$, where $U$ is an upper-triangular matrix.
To reduce the condition number of the matrix to invert, scaling the columns of matrix A:
$$A\, S\, S^{-1} x = (A\, S)\, (S^{-1} x) = E\, \eta = b \qquad\Longrightarrow\qquad x = S\, \eta = S\, (E^T E)^{-1} E^T b \tag{A9}$$
is highly suggested. In (A9), $S$ is the $m \times m$ diagonal scaling matrix whose elements are the inverses of the norms of the corresponding columns of $A$, $s_{kk} = \|a_k\|^{-1}$, or of their maximum absolute values, $s_{kk} = \big(\max_i |a_k(i)|\big)^{-1}$.
The least-squares approach adopted in this article is the scaled QR:
$$A\, S = Q\, R \qquad\Longrightarrow\qquad x = S\, R^{-1} Q^T b.$$
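A minimal sketch of the scaled QR step (assuming NumPy): the columns of $A$ are scaled by the inverses of their norms, the QR factorization is applied to the scaled matrix, and the scaling is undone on the solution, as in (A9):
```python
# Sketch: least squares by scaled QR, undoing the column scaling at the end.
import numpy as np

def scaled_qr_lstsq(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    s = 1.0 / np.linalg.norm(A, axis=0)       # inverse column norms (diagonal of S)
    Q, R = np.linalg.qr(A * s)                # QR of the column-scaled matrix E = A S
    return s * np.linalg.solve(R, Q.T @ b)    # x = S R^{-1} Q^T b

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 4)) * np.array([1.0, 1e3, 1e-3, 1.0])   # badly scaled columns
x_true = np.array([1.0, 2.0, 3.0, 4.0])
print(scaled_qr_lstsq(A, A @ x_true))         # ~ [1, 2, 3, 4]
```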

References

  1. Hilfer, R.; Butzer, P.; Westphal, U. An introduction to fractional calculus. In Applications of Fractional Calculus in Physics; World Scientific: Singapore, 2010; pp. 1–85.
  2. Gorenflo, R.; Mainardi, F. Fractional calculus: Integral and differential equations of fractional order. arXiv 2008, arXiv:0805.3823.
  3. Caponetto, R.; Dongola, G.; Fortuna, L.; Petráš, I. Fractional Order Systems: Modeling and Control Applications; World Scientific: Singapore, 2010; pp. 1–178.
  4. Sasso, M.; Palmieri, G.; Amodio, D. Application of fractional derivative models in linear viscoelastic problems. Mech. Time-Depend. Mater. 2011, 15, 367–387.
  5. Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models; World Scientific Publishing Co. Pte. Ltd.: Hackensack, NJ, USA, 2022.
  6. Fomin, S.; Chugunov, V.; Hashida, T. Application of fractional differential equations for modeling the anomalous diffusion of contaminant from fracture into porous rock matrix with bordering alteration zone. Transp. Porous Media 2010, 81, 187–205.
  7. Pritz, T. Five-parameter fractional derivative model for polymeric damping materials. J. Sound Vib. 2003, 265, 935–952.
  8. Nualart, D. Stochastic calculus with respect to fractional Brownian motion. Ann. Fac. Sci. Toulouse Math. 2006, 15, 63–78.
  9. Magin, R.L. Fractional calculus in bioengineering: A tool to model complex dynamics. In Proceedings of the 13th International Carpathian Control Conference (ICCC), High Tatras, Slovakia, 28–31 May 2012; pp. 464–469.
  10. Meerschaert, M.M. Fractional calculus, anomalous diffusion, and probability. In Fractional Dynamics: Recent Advances; World Scientific: Singapore, 2012; pp. 265–284.
  11. Liu, L.; Zheng, L.; Liu, F.; Zhang, X. Anomalous convection diffusion and wave coupling transport of cells on comb frame with fractional Cattaneo–Christov flux. Commun. Nonlinear Sci. Numer. Simul. 2016, 38, 45–58.
  12. Bagley, R.L. Power law and fractional calculus model of viscoelasticity. AIAA J. 1989, 27, 1412–1417.
  13. Beghin, L.; Garra, R.; Macci, C. Correlated fractional counting processes on a finite-time interval. J. Appl. Probab. 2015, 52, 1045–1061.
  14. Antil, H.; Warma, M. Optimal control of fractional semilinear PDEs. ESAIM Control Optim. Calc. Var. 2020, 26, 30.
  15. Zhao, Z.; Guo, Q.; Li, C. A fractional model for the allometric scaling laws. Open Appl. Math. J. 2008, 2, 26–30.
  16. West, B.J. Fractal physiology and the fractional calculus: A perspective. Front. Physiol. 2010, 1, 12.
  17. Giusti, A. MOND-like fractional Laplacian theory. Phys. Rev. D 2020, 101, 124029.
  18. Tarasov, V.E. Fractional econophysics: Market price dynamics with memory effects. Phys. A: Stat. Mech. Its Appl. 2020, 557, 124865.
  19. Oldham, K.B.; Spanier, J. The Fractional Calculus: Theory and Applications of Differentiation and Integration to Arbitrary Order; with an annotated chronological bibliography by Bertram Ross; Academic Press: New York, NY, USA, 1974; Volume 111.
  20. Leake, C.; Johnston, H.; Mortari, D. The Theory of Functional Connections: A Functional Interpolation Framework with Applications; Lulu: Morrisville, NC, USA, 2022.
  21. Garrappa, R. Numerical solution of fractional differential equations: A survey and a software tutorial. Mathematics 2018, 6, 16.
  22. Artin, E. The Gamma Function; Courier Dover Publications: Mineola, NY, USA, 2015.
  23. Diethelm, K. The Analysis of Fractional Differential Equations; Springer: Berlin/Heidelberg, Germany, 2010.
  24. Scherer, R.; Kalla, S.L.; Tang, Y.; Huang, J. The Grünwald–Letnikov method for fractional differential equations. Comput. Math. Appl. 2011, 62, 902–917.
  25. Garrappa, R.; Kaslik, E.; Popolizio, M. Evaluation of Fractional Integrals and Derivatives of Elementary Functions: Overview and Tutorial. Mathematics 2019, 7, 407.
  26. Mortari, D. The Theory of Connections: Connecting Points. Mathematics 2017, 5, 57.
  27. Mortari, D. Least-Squares Solution of Linear Differential Equations. Mathematics 2017, 5, 48.
  28. Mortari, D.; Johnston, H.R.; Smith, L. High accuracy least-squares solutions of nonlinear differential equations. J. Comput. Appl. Math. 2019, 352, 293–307.
  29. Leake, C.D. The Multivariate Theory of Functional Connections: An n-Dimensional Constraint Embedding Technique Applied to Partial Differential Equations. Ph.D. Thesis, Texas A&M University, College Station, TX, USA, 2021.
  30. Johnston, H.R. The Theory of Functional Connections: A Journey from Theory to Application. Ph.D. Thesis, Texas A&M University, College Station, TX, USA, 2021.
  31. Schiassi, E.; Furfaro, R.; Leake, C.D.; Florio, M.D.; Johnston, H.R.; Mortari, D. Extreme theory of functional connections: A fast physics-informed neural network method for solving ordinary and partial differential equations. Neurocomputing 2021, 457, 334–356.
  32. Wang, Y.; Topputo, F. A TFC-based homotopy continuation algorithm with application to dynamics and control problems. J. Comput. Appl. Math. 2022, 401, 113777.
  33. Yassopoulos, C.; Leake, C.D.; Reddy, J.; Mortari, D. Analysis of Timoshenko–Ehrenfest beam problems using the Theory of Functional Connections. Eng. Anal. Bound. Elem. 2021, 132, 271–280.
  34. Johnston, H.R.; Schiassi, E.; Furfaro, R.; Mortari, D. Fuel-Efficient Powered Descent Guidance on Large Planetary Bodies via Theory of Functional Connections. J. Astronaut. Sci. 2020, 67, 1521–1552.
  35. Wolfram, D.A. Change of Basis between Classical Orthogonal Polynomials. arXiv 2021, arXiv:2108.13631.
  36. Dorrah, A.; Sutrisno, A.; Desfan Hafifullah, D.; Saidi, S. The Use of Fractional Integral and Fractional Derivative “α=5/2” in the 5-th Order Function and Exponential Function using the Riemann–Liouville Method. Appl. Math. 2021, 11, 23–27.
  37. Cao, K.; Chen, Y.; Stuart, D. A fractional micro-macro model for crowds of pedestrians based on fractional mean field games. IEEE/CAA J. Autom. Sin. 2016, 3, 261–270.
  38. Ahmad, W.M.; El-Khazali, R. Fractional-order dynamical models of love. Chaos Solitons Fractals 2007, 33, 1367–1375.
Figure 1. Functional $f(x, g(x))$ (left plot) and its RL derivative $D_0^{\alpha} f(x, g(x))$ (right plot) for some choices of $g(x)$. Here, $\alpha = 0.8$, $x_0 = 1$, $f_0 = 1$, and $p = 2$.
Figure 2. Function $g(t)$ and errors of the truncated approximations $g_m(t)$ (left plot). RL derivatives $D_0^{\alpha} g_m(t)$ for $m = 12$ (right plot).
Figure 3. Residuals of the constraints $C_1$ (left plot) and $C_2$ (right plot) when applied to the functional $f(x, g(x))$ given by (22).