Article

A New First Order Expansion Formula with a Reduced Remainder

by Joel Chaskalovic * and Hessam Jamshidipour
Jean Le Rond d’Alembert, Sorbonne University, 4 Place Jussieu, CEDEX 05, 75252 Paris, France
* Author to whom correspondence should be addressed.
Axioms 2022, 11(10), 562; https://doi.org/10.3390/axioms11100562
Submission received: 2 September 2022 / Revised: 29 September 2022 / Accepted: 1 October 2022 / Published: 17 October 2022
(This article belongs to the Collection Mathematical Analysis and Applications)

Abstract

This paper is devoted to a new first-order Taylor-like formula, whose remainder is strongly reduced in comparison with the usual one, which appears in the classical Taylor formula. To derive this new formula, we introduce a linear combination of the first derivative of the function concerned, computed at $n+1$ equally spaced points between the two points where the function has to be evaluated. We show that an optimal choice of the weights in this linear combination minimizes the corresponding remainder. Then, we analyze the Lagrange $P_1$-interpolation error estimate and the trapezoidal quadrature error, in order to assess the gain in accuracy obtained with this new Taylor-like formula.

1. Introduction

Rolle’s theorem, and therefore Lagrange’s and Taylor’s theorems, prevents one from precisely determining the error estimate of numerical methods applied to partial differential equations. Basically, this stems from the existence of a non-unique, unknown point which appears in the remainder of Taylor’s expansion, as a heritage of Rolle’s theorem.
This is the reason why, in the context of finite elements, only asymptotic behaviors are generally considered for the error estimates, which strongly depend on the interpolation error (see, for example, [1] or [2]).
Owing to this lack of information, several heuristic approaches have been considered in order to investigate new possibilities that rely on a probabilistic approach. Such possibilities enable one to classify numerical methods when the associated data are fixed and not asymptotic (for a review, see [2,3]).
However, an unavoidable fact is that Taylor’s formula introduces an unknown point. This leads to the inability to determine exactly the interpolation error and, consequently, the approximation error of a given numerical method. It is thus legitimate to ask whether the corresponding errors are bounded by quantities that are as small as possible.
Here, we focus on the values of the numerical constants that appear in these estimates, in order to minimize them as much as possible.
For example, let us consider the two-dimensional case and the $P_1$-Lagrange interpolation error of a given $C^2$ function defined on a given triangle.
One can show that the numerical constant which naturally appears in the corresponding interpolation error estimate [4] is equal to $1/2$, as a heritage of the remainder of the first-order Taylor expansion.
Hence, in this paper, we propose a new first-order Taylor-like formula in which we strongly modify the distribution of the numerical weights between the Taylor polynomial and the corresponding remainder.
To this end, we introduce a sequence of $(n+1)$ equally spaced points and consider a linear combination of the values of the first derivative at these points. We show that an optimal choice of the coefficients in this linear combination minimizes the corresponding remainder. Indeed, the bound on the absolute value of the new remainder becomes $2n$ times smaller than the classical one obtained with the standard Taylor formula.
As a consequence, we show that the bound of the Lagrange $P_1$-interpolation error estimate, as well as the bound of the absolute quadrature error of the trapezoidal rule, are two times smaller than the usual ones obtained with the standard Taylor formula, provided we restrict ourselves to the new Taylor-like formula with $n = 1$, namely, with two points.
The paper is organized as follows. In Section 2, we present the main result of this paper, the new first-order Taylor-like formula. In Section 3.1, we present the consequences for the interpolation error and, in Section 3.2, for numerical quadrature. Finally, in Section 4, we provide concluding remarks.

2. The New First-Order Taylor-like Theorem

Let us first recall the well-known first-order Taylor formula (see [5] or [6]).
Let $(a,b) \in \mathbb{R}^2$, $a < b$, and $f \in C^2([a,b])$. Then, there exists $(m_2, M_2) \in \mathbb{R}^2$ such that
$$\forall x \in [a,b]: \quad m_2 \leq f''(x) \leq M_2, \tag{1}$$
and we have
$$f(b) = f(a) + (b-a)\, f'(a) + (b-a)\,\epsilon_{a,1}(b), \tag{2}$$
where
$$\lim_{b \to a} \epsilon_{a,1}(b) = 0,$$
and
$$\frac{(b-a)}{2}\, m_2 \ \leq\ \epsilon_{a,1}(b) \ \leq\ \frac{(b-a)}{2}\, M_2. \tag{3}$$
In order to derive the main result below, we introduce the function $\phi$ defined by
$$\phi : [0,1] \rightarrow \mathbb{R}, \qquad t \mapsto \phi(t) = \frac{f\big(a + t(b-a)\big)}{b-a}.$$
Then, we remark that $\phi(1) - \phi(0) = \dfrac{f(b) - f(a)}{b-a}$, and that $\phi'(t) = f'\big(a + t(b-a)\big)$. Moreover, the remainder $\epsilon_{a,1}(b)$ in (2) satisfies the following result.
Proposition 1.
The function $\epsilon_{a,1}(b)$ in the remainder of (2) can be written as follows:
$$\epsilon_{a,1}(b) = \int_0^1 (1-t)\,\phi''(t)\,dt.$$
Proof. 
Taylor’s formula with the remainder in integral form gives, at the first order,
$$f(b) = f(a) + (b-a)\, f'(a) + \frac{1}{1!} \int_a^b (b-x)\, f''(x)\,dx, \tag{5}$$
and using the substitution $x = a + (b-a)t$ in the integral of (5), we obtain
$$f(b) = f(a) + (b-a)\, f'(a) + (b-a) \int_0^1 (1-t)\,(b-a)\, f''\big(a + (b-a)t\big)\,dt,$$
where
$$\phi''(t) = (b-a)\, f''\big(a + (b-a)t\big).$$
Finally,
$$\epsilon_{a,1}(b) = \int_0^1 (1-t)\,\phi''(t)\,dt. \qquad \square$$
Now, let $n \in \mathbb{N}^*$. We define $\epsilon_{a,n+1}(b)$ by the formula below:
$$f(b) = f(a) + (b-a) \sum_{k=0}^{n} \omega_k^{(n)}\, f'\!\left( a + k\,\frac{b-a}{n} \right) + (b-a)\,\epsilon_{a,n+1}(b),$$
where the sequence of real weights $\big( \omega_k^{(n)} \big)_{0 \leq k \leq n}$ will be determined such that the corresponding remainder $\epsilon_{a,n+1}(b)$ is as small as possible.
In other words, we will prove the following result.
Theorem 1.
Let $f$ be a real mapping defined on $[a,b]$ which belongs to $C^2([a,b])$, such that $\forall x \in [a,b]$, $-\infty < m_2 \leq f''(x) \leq M_2 < +\infty$.
If the weights $\omega_k^{(n)}$, $(k = 0, \dots, n)$, satisfy
$$\sum_{k=0}^{n} \omega_k^{(n)} = 1,$$
then we have the following first-order expansion:
$$f(b) = f(a) + (b-a) \left[ \frac{f'(b) + f'(a)}{2n} + \frac{1}{n} \sum_{k=1}^{n-1} f'\!\left( a + k\,\frac{b-a}{n} \right) \right] + (b-a)\,\epsilon_{a,n+1}(b), \tag{7}$$
where
$$\lim_{b \to a} \epsilon_{a,n+1}(b) = 0 \quad \text{and} \quad |\epsilon_{a,n+1}(b)| \ \leq\ \frac{(b-a)}{8n}\,(M_2 - m_2). \tag{8}$$
Moreover, this result is optimal, since the weights involved in (7) guarantee that the remainder $\epsilon_{a,n+1}(b)$ is minimal for the set of equally spaced points $x_k = a + k\,\frac{b-a}{n}$ in $[a,b]$ considered in the expansion (7).
Remark 1.
Formula (7) can also be derived by applying the composite trapezoidal quadrature rule (see, for example, [7]) to the integration of $f'$ on the interval $[a,b]$. However, in this way, the fact that the corresponding quadrature error is minimal does not appear trivially. This is the purpose of Theorem 1.
Remark 2.
To compare the control of $\epsilon_{a,n+1}(b)$ given by (8) with that of $\epsilon_{a,1}(b)$ given by (3), we remark that (8) implies that
$$|\epsilon_{a,n+1}(b)| \ \leq\ \frac{(b-a)}{4n}\, \max(|m_2|, |M_2|).$$
Consequently, the bound on the absolute value of the remainder $\epsilon_{a,n+1}(b)$ is $2n$ times smaller than the one derived for $\epsilon_{a,1}(b)$.
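To illustrate Remark 2 numerically, here is a minimal sketch, added for this edited version and not part of the original paper, which compares the remainder of the classical first-order Taylor formula with the remainder of the expansion (7) for several values of $n$. The choice $f(x) = e^x$ on $[0,1]$ and all variable names are illustrative assumptions.

```python
import numpy as np

# Compare |f(b) - (f(a) + (b-a) f'(a))|, i.e. (b-a)|eps_{a,1}(b)|, with the
# corresponding quantity for the n-point expansion (7), for f(x) = exp(x).
f, df = np.exp, np.exp
a, b = 0.0, 1.0

classical = abs(f(b) - (f(a) + (b - a) * df(a)))

for n in (1, 2, 4, 8, 16):
    x = a + np.arange(n + 1) * (b - a) / n      # equally spaced points
    w = np.full(n + 1, 1.0 / n)                 # weights of Theorem 1
    w[0] = w[-1] = 1.0 / (2 * n)
    new = abs(f(b) - (f(a) + (b - a) * np.sum(w * df(x))))
    print(f"n = {n:2d}   classical/new remainder ratio = {classical / new:8.1f}")
```

The bounds (3) and (8) guarantee an improvement of the remainder bound by a factor of about $2n$; for a smooth function such as this one, the observed ratio is typically even larger.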
Remark 3.
We also notice that the bracketed term in Theorem 1 is a Riemann sum; hence, if $n$ tends to infinity, we obtain the classical formula of integral calculus, that is to say,
$$f(b) = f(a) + (b-a) \int_0^1 f'\big( b\,t + (1-t)\,a \big)\,dt.$$
In order to prove Theorem 1, we will need the following lemma.
Lemma 1.
Let $u$ be any continuous function on $\mathbb{R}$, and let $(a_k)_{0 \leq k \leq n} \in \mathbb{R}^{n+1}$, $(n \in \mathbb{N}^*)$, be a sequence of real numbers. Then, we have the following formula:
$$\sum_{k=0}^{n-1} \int_k^n a_k\, u(t)\,dt \ =\ \sum_{k=0}^{n-1} \int_k^{k+1} S_k\, u(t)\,dt, \tag{10}$$
where
$$S_k = \sum_{j=0}^{k} a_j. \tag{11}$$
Proof. 
We set $A_n = \displaystyle\sum_{k=0}^{n-1} \int_k^n a_k\,u(t)\,dt$ and $B_n = \displaystyle\sum_{k=0}^{n-1} \int_k^{k+1} S_k\,u(t)\,dt$, where $S_k = \sum_{j=0}^{k} a_j$.
We will prove, by induction on $n$, that $A_n = B_n$ for all $n \in \mathbb{N}^*$.
If $n = 1$, we have
$$A_1 = \sum_{k=0}^{0} \int_k^1 a_k\,u(t)\,dt = \int_0^1 a_0\,u(t)\,dt,$$
and
$$B_1 = \sum_{k=0}^{0} \int_k^{k+1} S_k\,u(t)\,dt = \int_0^1 S_0\,u(t)\,dt = \int_0^1 a_0\,u(t)\,dt.$$
So, $A_1 = B_1$.
Let us now assume that $A_n = B_n$, and let us show that $A_{n+1} = B_{n+1}$.
We have
$$\begin{aligned}
B_{n+1} &= \sum_{k=0}^{n} \int_k^{k+1} S_k\,u(t)\,dt = \sum_{k=0}^{n-1} \int_k^{k+1} S_k\,u(t)\,dt + \int_n^{n+1} S_n\,u(t)\,dt = B_n + \int_n^{n+1} S_n\,u(t)\,dt \\
&= A_n + \int_n^{n+1} S_n\,u(t)\,dt = \sum_{k=0}^{n-1} \int_k^n a_k\,u(t)\,dt + \int_n^{n+1} S_n\,u(t)\,dt \\
&= \sum_{k=0}^{n} \int_k^n a_k\,u(t)\,dt + \int_n^{n+1} \Big( \sum_{j=0}^{n} a_j \Big)\, u(t)\,dt = \sum_{k=0}^{n} \int_k^n a_k\,u(t)\,dt + \sum_{k=0}^{n} \int_n^{n+1} a_k\,u(t)\,dt \\
&= \sum_{k=0}^{n} \left( \int_k^n a_k\,u(t)\,dt + \int_n^{n+1} a_k\,u(t)\,dt \right) = \sum_{k=0}^{n} \int_k^{n+1} a_k\,u(t)\,dt = A_{n+1},
\end{aligned}$$
where we used the fact that the term $k = n$ added in the first sum vanishes, since $\int_n^n a_n\,u(t)\,dt = 0$.
We conclude that
$$\forall n \in \mathbb{N}^*: \quad \sum_{k=0}^{n-1} \int_k^n a_k\,u(t)\,dt \ =\ \sum_{k=0}^{n-1} \int_k^{k+1} S_k\,u(t)\,dt,$$
where $S_k = \displaystyle\sum_{j=0}^{k} a_j$. $\square$
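As a sanity check of Lemma 1, the identity (10) can be verified numerically; the following sketch is an editorial illustration, not part of the paper, with the arbitrary choices $u(t) = \cos t$ (whose antiderivative is available in closed form) and random coefficients $a_k$.

```python
import numpy as np

# Check: sum_{k=0}^{n-1} int_k^n a_k u(t) dt == sum_{k=0}^{n-1} int_k^{k+1} S_k u(t) dt,
# with S_k = a_0 + ... + a_k, for u(t) = cos(t) and random a_k.
rng = np.random.default_rng(0)
n = 6
a = rng.normal(size=n + 1)        # a_0, ..., a_n
S = np.cumsum(a)                  # partial sums S_k

def int_u(lo, hi):
    # exact integral of u(t) = cos(t) over [lo, hi]
    return np.sin(hi) - np.sin(lo)

lhs = sum(a[k] * int_u(k, n) for k in range(n))
rhs = sum(S[k] * int_u(k, k + 1) for k in range(n))
print(lhs, rhs, abs(lhs - rhs) < 1e-12)
```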
Let us now prove Theorem 1.
Proof. 
We have
$$\frac{f(b) - f(a)}{b-a} \ =\ \phi'(0) + \epsilon_{a,1}(b) \ =\ \sum_{k=0}^{n} \omega_k^{(n)}\, \phi'\!\left( \frac{k}{n} \right) + \epsilon_{a,n+1}(b),$$
which can be rewritten as
$$\begin{aligned}
\epsilon_{a,n+1}(b) &= \phi'(0) + \epsilon_{a,1}(b) - \sum_{k=0}^{n} \omega_k^{(n)}\, \phi'\!\left( \frac{k}{n} \right) \\
&= \phi'(0) + \int_0^1 (1-t)\,\phi''(t)\,dt - \sum_{k=0}^{n} \omega_k^{(n)}\, \phi'\!\left( \frac{k}{n} \right) \\
&= \phi'(1) - \int_0^1 t\,\phi''(t)\,dt - \sum_{k=0}^{n} \omega_k^{(n)}\, \phi'\!\left( \frac{k}{n} \right) \\
&= \phi'(1) - \int_0^1 t\,\phi''(t)\,dt + \sum_{k=0}^{n} \omega_k^{(n)} \left( \phi'(1) - \phi'\!\left( \frac{k}{n} \right) \right) - \sum_{k=0}^{n} \omega_k^{(n)}\, \phi'(1) \\
&= \left( 1 - \sum_{k=0}^{n} \omega_k^{(n)} \right) \phi'(1) - \int_0^1 t\,\phi''(t)\,dt + \sum_{k=0}^{n} \omega_k^{(n)} \left( \phi'(1) - \phi'\!\left( \frac{k}{n} \right) \right).
\end{aligned} \tag{12}$$
However, given that
$$\sum_{k=0}^{n} \omega_k^{(n)} = 1, \tag{13}$$
Equation (12) becomes
$$\begin{aligned}
\epsilon_{a,n+1}(b) &= -\int_0^1 t\,\phi''(t)\,dt + \sum_{k=0}^{n} \omega_k^{(n)} \left( \phi'(1) - \phi'\!\left( \frac{k}{n} \right) \right) \\
&= -\int_0^1 t\,\phi''(t)\,dt + \sum_{k=0}^{n} \omega_k^{(n)} \int_{k/n}^{1} \phi''(t)\,dt \\
&= -\sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} t\,\phi''(t)\,dt + \sum_{k=0}^{n} \omega_k^{(n)} \int_{k/n}^{1} \phi''(t)\,dt \\
&= -\sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} t\,\phi''(t)\,dt + \sum_{k=0}^{n-1} \int_{k/n}^{1} \omega_k^{(n)}\, \phi''(t)\,dt.
\end{aligned} \tag{14}$$
Let us now use Lemma 1 in (14) by setting, in (10),
$$u(t) = \phi''\!\left( \frac{t}{n} \right), \qquad a_k = \omega_k^{(n)}, \qquad S_k = \sum_{j=0}^{k} \omega_j^{(n)} =: S_k^{(n)}.$$
Thus, we obtain, using (10),
$$\sum_{k=0}^{n-1} \int_k^n \omega_k^{(n)}\, \phi''\!\left( \frac{t}{n} \right) dt \ =\ \sum_{k=0}^{n-1} \int_k^{k+1} S_k^{(n)}\, \phi''\!\left( \frac{t}{n} \right) dt,$$
which can be written, by a simple substitution, as
$$\sum_{k=0}^{n-1} \int_{k/n}^{1} \omega_k^{(n)}\, \phi''(t)\,dt \ =\ \sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} S_k^{(n)}\, \phi''(t)\,dt.$$
Then, (14) gives
$$\epsilon_{a,n+1}(b) = -\sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} t\,\phi''(t)\,dt + \sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} S_k^{(n)}\,\phi''(t)\,dt \ =\ \sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt. \tag{16}$$
Moreover, we have
$$\forall x \in [0,1]: \ \ \phi''(x) = (b-a)\, f''\big( a + x(b-a) \big), \qquad \text{and} \qquad \forall t \in [a,b]: \ \ m_2 \leq f''(t) \leq M_2.$$
Next, to derive a double inequality on $\epsilon_{a,n+1}(b)$, we split the last integral in (16) as follows:
$$\int_{k/n}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt \ =\ \int_{k/n}^{S_k^{(n)}} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt + \int_{S_k^{(n)}}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt.$$
Then, considering the constant sign of $\big( S_k^{(n)} - t \big)$ on $\left[ \frac{k}{n}, S_k^{(n)} \right]$ and on $\left[ S_k^{(n)}, \frac{k+1}{n} \right]$, we have
$$(b-a)\, m_2 \int_{k/n}^{S_k^{(n)}} \big( S_k^{(n)} - t \big)\,dt \ \leq\ \int_{k/n}^{S_k^{(n)}} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt \ \leq\ (b-a)\, M_2 \int_{k/n}^{S_k^{(n)}} \big( S_k^{(n)} - t \big)\,dt, \tag{18}$$
and
$$(b-a)\, M_2 \int_{S_k^{(n)}}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,dt \ \leq\ \int_{S_k^{(n)}}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt \ \leq\ (b-a)\, m_2 \int_{S_k^{(n)}}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,dt. \tag{19}$$
Thus, (18) and (19) enable us to obtain the following two inequalities:
$$\int_{k/n}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt \ \leq\ (b-a)\, M_2 \int_{k/n}^{S_k^{(n)}} \big( S_k^{(n)} - t \big)\,dt + (b-a)\, m_2 \int_{S_k^{(n)}}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,dt, \tag{20}$$
and
$$\int_{k/n}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt \ \geq\ (b-a)\, m_2 \int_{k/n}^{S_k^{(n)}} \big( S_k^{(n)} - t \big)\,dt + (b-a)\, M_2 \int_{S_k^{(n)}}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,dt. \tag{21}$$
Since we also have
$$\int_{k/n}^{S_k^{(n)}} \big( S_k^{(n)} - t \big)\,dt = \frac{\lambda^2}{2 n^2} \quad \text{and} \quad \int_{S_k^{(n)}}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,dt = -\frac{(\lambda - 1)^2}{2 n^2}, \tag{22}$$
where we set
$$\lambda \equiv n\, S_k^{(n)} - k, \tag{23}$$
inequalities (20) and (21) lead to
$$\frac{(b-a)}{2 n^2}\, P_1(\lambda) \ \leq\ \int_{k/n}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt \ \leq\ \frac{(b-a)}{2 n^2}\, P_2(\lambda), \tag{24}$$
where we define the two polynomials $P_1(\lambda)$ and $P_2(\lambda)$ by
$$P_1(\lambda) \equiv m_2\,\lambda^2 - (\lambda - 1)^2\, M_2 \quad \text{and} \quad P_2(\lambda) \equiv M_2\,\lambda^2 - (\lambda - 1)^2\, m_2. \tag{25}$$
Keeping in mind that we want to minimize $\epsilon_{a,n+1}(b)$, let us determine the value of $\lambda$ such that the polynomial $P(\lambda) \equiv P_2(\lambda) - P_1(\lambda)$ is minimal.
To this end, let us remark that $P(\lambda) = (M_2 - m_2)\,(2\lambda^2 - 2\lambda + 1)$ is minimal when $\lambda = \frac{1}{2}$. Then, for this value of $\lambda$, (24) becomes
$$\frac{(b-a)}{8 n^2}\,(m_2 - M_2) \ \leq\ \int_{k/n}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt \ \leq\ \frac{(b-a)}{8 n^2}\,(M_2 - m_2), \tag{26}$$
and finally, by summing on $k$ between $0$ and $n-1$, we have
$$\frac{(b-a)}{8 n}\,(m_2 - M_2) \ \leq\ \epsilon_{a,n+1}(b) \ \leq\ \frac{(b-a)}{8 n}\,(M_2 - m_2). \tag{27}$$
Due to the definition (11) of $S_k^{(n)}$ and the definition (23) of $\lambda$ on the one hand, and because the weights $\omega_k^{(n)}$, $(k = 0, \dots, n)$, satisfy (13) on the other hand, we have
$$\forall n \in \mathbb{N}^*,\ \forall k \in [0, n[\,: \quad S_k^{(n)} = \sum_{j=0}^{k} \omega_j^{(n)} = \frac{1}{2n} + \frac{k}{n}. \tag{28}$$
So, for $k = 0$, (28) gives $\omega_0^{(n)} = \frac{1}{2n}$, and for $k = 1$,
$$\omega_0^{(n)} + \omega_1^{(n)} = \frac{1}{2n} + \frac{1}{n},$$
which implies that $\omega_1^{(n)} = \frac{1}{n}$.
Then, step by step, the corresponding weights $\omega_k^{(n)}$ are equal to
$$\omega_0^{(n)} = \omega_n^{(n)} = \frac{1}{2n}, \quad \text{and} \quad \omega_k^{(n)} = \frac{1}{n}, \ \ 0 < k < n,$$
the last weight $\omega_n^{(n)}$ being obtained from the closure condition (13), which completes the proof of Theorem 1. $\square$
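The last step of the proof can be mirrored in a few lines: the sketch below, an editorial illustration not taken from the paper, rebuilds the weights from the relation (28) for $S_k^{(n)}$ together with the closure condition (13), for an arbitrary $n$.

```python
import numpy as np

# Recover the optimal weights from S_k = 1/(2n) + k/n, k = 0, ..., n-1,
# and the closure condition sum_k w_k = 1.
n = 5
S = 1.0 / (2 * n) + np.arange(n) / n   # S_0, ..., S_{n-1}
w = np.empty(n + 1)
w[0] = S[0]                            # w_0 = S_0
w[1:n] = np.diff(S)                    # w_k = S_k - S_{k-1} for 0 < k < n
w[n] = 1.0 - S[-1]                     # closure condition gives w_n
print(w)                               # [1/(2n), 1/n, ..., 1/n, 1/(2n)]
print(np.isclose(w.sum(), 1.0))        # True
```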
As an example, let us write Formula (7) when n = 2 (i.e., with three points). In this case, we have
$$f(b) = f(a) + (b-a)\, \frac{f'(a) + 2\, f'\!\left( \dfrac{a+b}{2} \right) + f'(b)}{4} + (b-a)\,\epsilon_{a,3}(b), \tag{29}$$
where
$$\frac{(b-a)}{16}\,(m_2 - M_2) \ \leq\ \epsilon_{a,3}(b) \ \leq\ \frac{(b-a)}{16}\,(M_2 - m_2). \tag{30}$$
For example, if we set $a = 0$, $b = 1$ and $f(x) = \ln(1+x)$, Formula (29) gives
$$\ln(2) = \ln(1) + \frac{1}{4}\left( 1 + 2\cdot\frac{2}{3} + \frac{1}{2} \right) + \epsilon = \frac{17}{24} + \epsilon,$$
and
$$|\epsilon| \ \leq\ \frac{1}{8}.$$
With the same data, the classical first-order Taylor formula leads to
$$\ln(2) = \ln(1) + 1 \cdot \frac{1}{1+0} + \epsilon = 1 + \epsilon,$$
and
$$|\epsilon| \ \leq\ \frac{1}{2}.$$
So, we notice that Formula (29) leads to an accurate approximation of $\ln(2)$, since (30) implies that
$$\ln(2) \simeq \frac{17}{24} \simeq 0.708,$$
while the well-known value is $\ln(2) \simeq 0.693$.
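This example can be reproduced with the following short computation, given here as an editorial illustration (the function and interval are those of the example above).

```python
import math

# Three-point formula (29) with a = 0, b = 1, f(x) = ln(1 + x), f'(x) = 1/(1 + x).
df = lambda x: 1.0 / (1.0 + x)
approx_new = 0.0 + (df(0.0) + 2.0 * df(0.5) + df(1.0)) / 4.0   # = 17/24
approx_taylor = 0.0 + 1.0 * df(0.0)                            # classical Taylor: = 1
print(approx_new, abs(math.log(2.0) - approx_new))        # 0.7083..., error ~ 0.015
print(approx_taylor, abs(math.log(2.0) - approx_taylor))  # 1.0, error ~ 0.307
```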
Remark 4.
Condition (13) on the weights $\omega_k^{(n)}$, $(k = 0, \dots, n)$, in Theorem 1 is a kind of closure condition, since it helps determine $\omega_n^{(n)}$, but it is not a restrictive one.
Indeed, without the closure condition (13), one would have to consider the following expression of $\epsilon_{a,n+1}(b)$ instead of (16):
$$\epsilon_{a,n+1}(b) = \sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} \big( S_k^{(n)} - t \big)\,\phi''(t)\,dt + \left( 1 - \sum_{k=0}^{n} \omega_k^{(n)} \right) \phi'(1).$$
Then, (27) would be replaced by
$$\frac{(b-a)}{8 n}\,(m_2 - M_2) - \frac{M_1}{2n} \ \leq\ \epsilon_{a,n+1}(b) \ \leq\ \frac{(b-a)}{8 n}\,(M_2 - m_2) - \frac{m_1}{2n}, \tag{32}$$
where we assume that $(m_1, M_1) \in \mathbb{R}^2$ are such that $\forall x \in [a,b]$, $-\infty < m_1 \leq f'(x) \leq M_1 < +\infty$.
Moreover, to obtain (32), we also used the fact that the weights $\omega_k^{(n)}$, $(k = 0, \dots, n)$, may be found with the help of (28) without using the closure condition (13).
More precisely, in this case, one finds that the weights $\omega_k^{(n)}$, $(k = 0, \dots, n)$, are equal to
$$\omega_0^{(n)} = \frac{1}{2n}, \quad \text{and} \quad \omega_k^{(n)} = \frac{1}{n}, \ \ 0 < k \leq n. \tag{33}$$
Consequently, from (32), we obtain that the bound on the absolute value of the remainder $\epsilon_{a,n+1}(b)$ is $n$ times smaller than the one of the first-order Taylor formula given by (2) and (3).
So, by considering the closure condition (13) and the corresponding weights $\omega_k^{(n)}$, $(k = 0, \dots, n)$, we slightly improve this result, since the bound on the absolute value of the remainder associated with (16) is $2n$ times smaller than the one of the first-order Taylor formula.
Finally, we also observe that Formula (7) can be directly obtained from the composite trapezoidal rule applied to $\int_a^b f'(x)\,dx$ with $n$ subintervals. In addition, we showed that the corresponding remainder is minimized.

3. Application to the Approximation Error

To give added value to Theorem 1, presented in the previous section, this section is devoted to the differences one can observe in two main applications belonging to the field of numerical analysis. The first one concerns Lagrange polynomial interpolation and the second, numerical quadrature. In these two cases, we evaluate the corresponding approximation error both with the help of the standard first-order Taylor formula and using the generalized Formula (7) derived in Theorem 1.

3.1. The Interpolation Error

In this subsection, we consider the first application of the generalized Taylor-like expansion (7) when $n = 1$. In this case, for any function $f$ which belongs to $C^2([a,b])$, Formula (7) can be written as
$$f(b) = f(a) + (b-a)\, \frac{f'(a) + f'(b)}{2} + (b-a)\,\epsilon_{a,2}(b), \tag{34}$$
where $\epsilon_{a,2}(b)$ satisfies
$$\frac{(b-a)}{8}\,(m_2 - M_2) \ \leq\ \epsilon_{a,2}(b) \ \leq\ \frac{(b-a)}{8}\,(M_2 - m_2). \tag{35}$$
As a first application of Formulas (34) and (35), we will consider the particular case of the $P_1$-Lagrange interpolation (see [8] or [9]), which consists in interpolating a given function $f$ on $[a,b]$ by a polynomial $\Pi_{[a,b]}(f)$ of degree less than or equal to one.
Then, the corresponding interpolation polynomial $\Pi_{[a,b]}(f)$ is given by
$$\forall x \in [a,b]: \quad \Pi_{[a,b]}(f)(x) = \frac{x-b}{a-b}\, f(a) + \frac{x-a}{b-a}\, f(b). \tag{36}$$
One can remark that, using (36), we have $\Pi_{[a,b]}(f)(a) = f(a)$ and $\Pi_{[a,b]}(f)(b) = f(b)$.
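For concreteness, a minimal sketch of the interpolant (36) follows; it is an editorial illustration, not part of the paper, and the test function $\sin$ is an arbitrary choice.

```python
import math

def lagrange_p1(f, a, b):
    # P1-Lagrange interpolant (36) on [a, b]
    return lambda x: (x - b) / (a - b) * f(a) + (x - a) / (b - a) * f(b)

p = lagrange_p1(math.sin, 0.0, 1.0)
print(p(0.0), math.sin(0.0))   # both 0.0: the interpolant matches f at a
print(p(1.0), math.sin(1.0))   # both sin(1): the interpolant matches f at b
print(p(0.5))                  # midpoint of the chord, sin(1)/2 here
```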
Our purpose now is to investigate the consequences of Formula (34) when one uses it to evaluate the interpolation error $e(\cdot)$, defined by
$$\forall x \in [a,b]: \quad e(x) = \Pi_{[a,b]}(f)(x) - f(x),$$
and to compare it with the classical first-order Taylor formula given by (2).
The standard results [7] regarding the $P_1$-Lagrange interpolation error claim that, for any function $f$ which belongs to $C^2([a,b])$, we have
$$|e(x)| \ \leq\ \frac{(b-a)^2}{2}\, \sup_{a \leq x \leq b} |f''(x)|. \tag{37}$$
This result is usually derived by considering the suitable function $g(t)$, defined on $[a,b]$ by
$$g(t) = f(t) - \Pi_{[a,b]}(f)(t) - \big( f(x) - \Pi_{[a,b]}(f)(x) \big)\, \frac{(t-a)(t-b)}{(x-a)(x-b)}, \qquad (x \in\, ]a,b[\,). \tag{38}$$
Given that $g(a) = g(b) = g(x) = 0$, by applying Rolle’s theorem twice, one can deduce that there exists $\xi_x \in\, ]a,b[$ such that $g''(\xi_x) = 0$.
Therefore, after some calculations, one obtains the following:
$$f(x) - \Pi_{[a,b]}(f)(x) = \frac{1}{2}\,(x-a)(x-b)\, f''(\xi_x), \qquad (a < \xi_x < b), \tag{39}$$
and (37) simply follows.
Still, as one can see from (39), estimate (37) can be improved, since
$$\sup_{a \leq x \leq b}\, (x-a)(b-x) = \frac{(b-a)^2}{4}. \tag{40}$$
Then, (39) leads to
$$|e(x)| \ \leq\ \frac{(b-a)^2}{8}\, \sup_{a \leq x \leq b} |f''(x)|, \tag{41}$$
in place of (37).
However, to appreciate the difference between the classical Taylor formula and the new one in (34), we will now reformulate the proof of (41) by using the classical Taylor Formula (2). This is the purpose of the following lemma.
Lemma 2.
Let $f$ be a function which belongs to $C^2([a,b])$ and satisfies (1); then, the first-order Taylor theorem leads to the following interpolation error estimate:
$$|e(x)| \ \leq\ \frac{(b-a)^2}{8}\, M, \tag{42}$$
where $M = \max\{ |m_2|, |M_2| \}$.
Proof. 
We begin by writing the Lagrange $P_1$ polynomial $\Pi_{[a,b]}(f)$ given by (36) with the help of the classical first-order Taylor Formula (2).
Indeed, in (36), we substitute $f(a)$ and $f(b)$ by
$$f(a) = f(x) + (a-x)\,f'(x) + (a-x)\,\epsilon_{x,1}(a), \qquad (x \in [a,b]), \tag{43}$$
$$f(b) = f(x) + (b-x)\,f'(x) + (b-x)\,\epsilon_{x,1}(b), \qquad (x \in [a,b]), \tag{44}$$
where, with the help of (3) and (1), $\epsilon_{x,1}(a)$ and $\epsilon_{x,1}(b)$ satisfy
$$|\epsilon_{x,1}(a)| \leq \frac{(x-a)}{2}\,M \quad \text{and} \quad |\epsilon_{x,1}(b)| \leq \frac{(b-x)}{2}\,M. \tag{45}$$
Then, (36) gives
$$\Pi_{[a,b]}(f)(x) = f(x) + \frac{(x-a)(b-x)}{(b-a)}\, \big( \epsilon_{x,1}(b) - \epsilon_{x,1}(a) \big), \tag{46}$$
and, due to (45), we obtain
$$\big| \Pi_{[a,b]}(f)(x) - f(x) \big| \ \leq\ \frac{(x-a)(b-x)}{2}\, M, \tag{47}$$
where we used the fact that $| \epsilon_{x,1}(b) - \epsilon_{x,1}(a) | \leq \frac{(b-a)}{2}\, M$.
Finally, due to (40), (47) leads to (42). $\square$
Let us now derive the corresponding result when one uses the new first-order Taylor-like Formula (34) in the expression of the interpolation polynomial $\Pi_{[a,b]}(f)$ defined by (36).
This is the purpose of the following lemma.
Lemma 3.
Let $f \in C^2([a,b])$; then, we have the following interpolation error estimate, for all $x \in [a,b]$:
$$\left| f(x) - \Pi_{[a,b]}(f)(x) + \frac{f'(b) - f'(a)}{2\,(b-a)}\,(b-x)(x-a) \right| \ \leq\ \frac{(b-a)^2}{32}\,(M_2 - m_2). \tag{48}$$
Proof. 
We begin by writing $f(a)$ and $f(b)$ with the help of (34):
$$f(a) = f(x) + (a-x)\, \frac{f'(x) + f'(a)}{2} + (a-x)\,\epsilon_{x,2}(a), \tag{49}$$
$$f(b) = f(x) + (b-x)\, \frac{f'(x) + f'(b)}{2} + (b-x)\,\epsilon_{x,2}(b), \tag{50}$$
where $\epsilon_{x,2}$ satisfies (35), with obvious changes in the notations. Namely, we have
$$|\epsilon_{x,2}(a)| \ \leq\ \frac{(x-a)}{8}\,(M_2 - m_2) \quad \text{and} \quad |\epsilon_{x,2}(b)| \ \leq\ \frac{(b-x)}{8}\,(M_2 - m_2). \tag{51}$$
Then, by substituting $f(a)$ and $f(b)$ in the interpolation polynomial given by (36), we have
$$\Pi_{[a,b]}(f)(x) = f(x) + \frac{f'(b) - f'(a)}{2\,(b-a)}\,(b-x)(x-a) + \frac{(b-x)(x-a)}{(b-a)}\, \big( \epsilon_{x,2}(b) - \epsilon_{x,2}(a) \big). \tag{52}$$
Now, if we define the refined interpolation polynomial $\Pi^{*}_{[a,b]}(f)$ by
$$\forall x \in [a,b]: \quad \Pi^{*}_{[a,b]}(f)(x) = \Pi_{[a,b]}(f)(x) - \frac{f'(b) - f'(a)}{2\,(b-a)}\,(b-x)(x-a), \tag{53}$$
Equation (52) becomes
$$\Pi^{*}_{[a,b]}(f)(x) = f(x) + \frac{(b-x)(x-a)}{(b-a)}\, \big( \epsilon_{x,2}(b) - \epsilon_{x,2}(a) \big). \tag{54}$$
Thus, due to (51), we have $\big| \epsilon_{x,2}(a) - \epsilon_{x,2}(b) \big| \leq \frac{(b-a)}{8}\,(M_2 - m_2)$, and (52), with the help of (53), gives
$$\big| \Pi^{*}_{[a,b]}(f)(x) - f(x) \big| \ \leq\ \frac{(b-x)(x-a)}{8}\,(M_2 - m_2) \ \leq\ \frac{(b-a)^2}{32}\,(M_2 - m_2), \tag{55}$$
which completes the proof of this lemma. $\square$
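The gain promised by Lemma 3 can be observed numerically. The sketch below is an editorial illustration, not part of the paper; it compares the standard interpolant (36) with the refined interpolant (53) for the assumed test function $f(x) = e^x$ on $[0,1]$, for which $m_2 = 1$ and $M_2 = e$.

```python
import numpy as np

f, df = np.exp, np.exp
a, b = 0.0, 1.0
x = np.linspace(a, b, 1001)

pi_f = (x - b) / (a - b) * f(a) + (x - a) / (b - a) * f(b)            # (36)
pi_star = pi_f - (df(b) - df(a)) / (2 * (b - a)) * (b - x) * (x - a)  # (53)

print("max |Pi  - f| =", np.max(np.abs(pi_f - f(x))))     # ~ 0.21
print("max |Pi* - f| =", np.max(np.abs(pi_star - f(x))))  # ~ 0.016
print("bound (42)    =", (b - a) ** 2 / 8 * np.e)         # ~ 0.34, with M = e
print("bound (55)    =", (b - a) ** 2 / 32 * (np.e - 1))  # ~ 0.054
```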
Let us now formulate several consequences of Lemmas 2 and 3.
1.
If we consider the refined interpolation polynomial $\Pi^{*}_{[a,b]}(f)$ defined by (53), we obtain an error estimate (48) which is two times more precise than the one we obtained in (42) using the classical Taylor formula.
  • In order to compare (42) and (48), we notice that (48) leads to
    $$\forall x \in [a,b]: \quad |e^{*}(x)| \ \leq\ \frac{(b-a)^2}{16}\, \max(|m_2|, |M_2|), \tag{56}$$
    where $e^{*}(x) = \Pi^{*}_{[a,b]}(f)(x) - f(x)$ denotes the corresponding interpolation error.
  • Now, the cost of this improvement is that $\Pi^{*}_{[a,b]}(f)$ is a polynomial of degree less than or equal to two, which requires the computation of $f'(a)$ and $f'(b)$. However, the consequent gain clearly appears in the following application devoted to finite elements.
  • To this end, we consider a Hilbert space $V$ endowed with a norm $\|\cdot\|_V$, and a bilinear, continuous, $V$-elliptic form $a(\cdot,\cdot)$ defined on $V \times V$.
  • In particular, there exists $(\alpha, C) \in \mathbb{R}^{*}_{+} \times \mathbb{R}^{*}_{+}$ such that
    $$\forall v \in V, \quad \alpha\, \|v\|_V^2 \ \leq\ a(v,v) \ \leq\ C\, \|v\|_V^2. \tag{57}$$
  • Moreover, we denote by l ( · ) a linear continuous form defined on V.
  • So, let $u \in V$ be the unique solution to the second-order elliptic variational formulation (VP) defined by:
    $$(\mathbf{VP}) \quad \text{Find } u \in V \text{ solution to: } \ a(u,v) = l(v), \quad \forall v \in V. \tag{58}$$
  • Let us also introduce the approximation $u_h$ of $u$, the solution to the approximate variational formulation $(\mathbf{VP})_h$ defined by:
    $$(\mathbf{VP})_h \quad \text{Find } u_h \in V_h \text{ solution to: } \ a(u_h, v_h) = l(v_h), \quad \forall v_h \in V_h, \tag{59}$$
    where $V_h$ is a given finite-dimensional linear subspace of $V$.
  • Then, we are in a position to recall Céa’s lemma, which can be found, for example, in [1]:
Lemma 4.
Let $u$ be the solution to (58) and $u_h$ the solution to (59). Then, the following inequality holds:
$$\|u - u_h\|_V \ \leq\ \frac{C}{\alpha}\, \inf_{v_h \in V_h} \|u - v_h\|_V, \tag{60}$$
where $C$ and $\alpha$ are, respectively, the continuity and the ellipticity constants of the bilinear form $a(\cdot,\cdot)$ defined in (57).
So, due to Céa’s lemma, (60) leads to
$$\|u - u_h\|_V \ \leq\ \frac{C}{\alpha}\, \|u - \Pi_h(u)\|_V, \tag{61}$$
for any interpolation polynomial $\Pi_h(u)$ in $V_h$ of the function $u$. Thus, inequality (61) shows that the approximation error is bounded by the interpolation error.
Therefore, if one wants to locally guarantee that the upper bound of the interpolation error is not greater than a given threshold $\epsilon$, and if $h$ denotes the local mesh size of a given mesh, then, by setting $h = b-a$, inequalities (42) and (56) lead to
$$\frac{M}{8}\, h^2 \leq \epsilon \quad \text{and} \quad \frac{M}{16}\, h^2 \leq \epsilon. \tag{62}$$
It follows that the ratio between the admissible mesh sizes based on $\Pi_{[a,b]}(f)$ and on $\Pi^{*}_{[a,b]}(f)$ is $1/\sqrt{2} \simeq 0.707$; consequently, the gain is around 30 percent with the refined interpolation polynomial $\Pi^{*}_{[a,b]}(f)$. This economy in terms of the total number of mesh elements would be even more significant if one considers the extension of this case to a three-dimensional application (the short computation below makes the 30 percent figure explicit).
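Here is that short computation, an editorial rephrasing of the argument above, where $h_{\mathrm{old}}$ and $h_{\mathrm{new}}$ denote the largest local mesh sizes allowed by (62) for $\Pi_{[a,b]}(f)$ and $\Pi^{*}_{[a,b]}(f)$, respectively:
$$h_{\mathrm{old}} = \sqrt{\frac{8\,\epsilon}{M}}, \qquad h_{\mathrm{new}} = \sqrt{\frac{16\,\epsilon}{M}}, \qquad \frac{h_{\mathrm{old}}}{h_{\mathrm{new}}} = \frac{1}{\sqrt{2}} \simeq 0.707,$$
so the refined interpolant tolerates local mesh sizes about 41 percent larger, that is, roughly 30 percent fewer mesh nodes per space direction.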
2.
We also notice that, if we now consider the particular class of $C^2$ functions $f$ defined on $\mathbb{R}$ which are $(b-a)$-periodic, then $f'(a) = f'(b)$; consequently, the interpolation error $e^{*}(x)$ is equal to $e(x)$, and (48) becomes
$$\forall x \in [a,b]: \quad |e^{*}(x)| = |e(x)| \ \leq\ \frac{(b-a)^2}{32}\,(M_2 - m_2) \ \leq\ \frac{(b-a)^2}{16}\, M. \tag{63}$$
  • In other words, for this class of periodic functions, due to the new first-order Taylor-like Formula (34), the interpolation error $e(x)$ given by (63) is bounded by a quantity which is two times smaller than the one we obtained in (42) using the classical Taylor formula.
  • We highlight that, in this case, there is no extra cost anymore to obtain this more accurate result, since it concerns the standard interpolation error associated with the standard Lagrange $P_1$ polynomial.
3.
Finally, since the refined polynomial $\Pi^{*}_{[a,b]}(f)$ has a degree less than or equal to two, one may want to compare it with the performance of the corresponding Lagrange polynomial of the same degree.
  • In order to proceed, we must assume that $f$ belongs to $C^3([a,b])$; then, in [7], we find that the interpolation error $e_T(x)$ of the Lagrange polynomial of degree less than or equal to two satisfies
    $$\forall x \in [a,b]: \quad |e_T(x)| \ \leq\ \frac{(b-a)^3}{24}\, M_3, \tag{64}$$
    where $M_3 = \sup_{a \leq x \leq b} |f'''(x)|$.
  • Consequently, by comparing (64) and (56), provided the given function $f$ is sufficiently smooth (namely, in $C^3([a,b])$), one would prefer to use the Lagrange polynomial of degree less than or equal to two, which leads to a more accurate interpolation error estimate; a small numerical illustration is sketched below.
  • However, for a function $f$ which only belongs to $C^2([a,b])$, no such result is available for this Lagrange polynomial, and the comparison is not valid anymore.
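As an editorial illustration of this comparison, and not as part of the paper, the following sketch evaluates both the actual errors and the bounds (56) and (64) for the assumed smooth test function $f(x) = e^x$ on $[0,1]$ (so that $M_3 = e$); for this particular function the two actual errors happen to be comparable, while the guaranteed bound (64) is indeed the sharper one.

```python
import numpy as np

f, df = np.exp, np.exp
a, b = 0.0, 1.0
m = 0.5 * (a + b)
x = np.linspace(a, b, 1001)

pi_f = (x - b) / (a - b) * f(a) + (x - a) / (b - a) * f(b)             # (36)
pi_star = pi_f - (df(b) - df(a)) / (2 * (b - a)) * (b - x) * (x - a)   # (53)
lag2 = (f(a) * (x - m) * (x - b) / ((a - m) * (a - b))                 # quadratic Lagrange
        + f(m) * (x - a) * (x - b) / ((m - a) * (m - b))
        + f(b) * (x - a) * (x - m) / ((b - a) * (b - m)))

print("max |Pi* - f| =", np.max(np.abs(pi_star - f(x))))  # ~ 1.6e-2
print("max |P2  - f| =", np.max(np.abs(lag2 - f(x))))     # ~ 1.4e-2
print("bound (56)    =", (b - a) ** 2 / 16 * np.e)        # ~ 0.17
print("bound (64)    =", (b - a) ** 3 / 24 * np.e)        # ~ 0.11
```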

3.2. The Quadrature Error

We now consider, for any integrable function $f$ defined on $[a,b]$, the famous trapezoidal quadrature rule (see [7] or [10]), the formula of which is given by
$$\int_a^b f(x)\,dx \ \simeq\ \frac{b-a}{2}\, \big( f(a) + f(b) \big). \tag{65}$$
We consider (65) due to the fact that this quadrature formula corresponds to approximating the function f by its Lagrange polynomial interpolation Π [ a , b ] ( f ) , of degree less than or equal to one, which is given by (36).
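This correspondence is immediate to check numerically: integrating the interpolant (36) over $[a,b]$ returns exactly the right-hand side of (65). The sketch below is an editorial illustration with the arbitrary choice $f(x) = e^x$ on $[0,2]$.

```python
import numpy as np

f = np.exp
a, b = 0.0, 2.0
N = 100_000
xm = a + (np.arange(N) + 0.5) * (b - a) / N                   # midpoint-rule nodes
pi_f = (xm - b) / (a - b) * f(a) + (xm - a) / (b - a) * f(b)  # interpolant (36)
print(np.sum(pi_f) * (b - a) / N)     # numerical integral of the interpolant
print((b - a) / 2 * (f(a) + f(b)))    # trapezoidal rule: 1 + e^2 ~ 8.389
```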
In the literature on numerical integration (see, for example, [7,11] or [12]), the following estimate is well known as the trapezoid inequality:
$$\left| \int_a^b f(x)\,dx - (b-a)\, \frac{f(a)+f(b)}{2} \right| \ \leq\ \frac{(b-a)^3}{12}\, \sup_{a \leq x \leq b} |f''(x)|, \tag{66}$$
for any function $f$ twice differentiable on $[a,b]$, the second derivative of which is accordingly bounded on $[a,b]$.
It is also well known [13] that, if $f$ is only $C^1$ on $[a,b]$, one has the following estimate:
$$\left| \int_a^b f(x)\,dx - (b-a)\, \frac{f(a)+f(b)}{2} \right| \ \leq\ \frac{(b-a)^2}{8}\,(M_1 - m_1), \tag{67}$$
where $\forall x \in [a,b]: \ -\infty < m_1 \leq f'(x) \leq M_1 < +\infty$.
Now, we prove a lemma that restates estimate (66) in an alternative form; it also extends estimate (67) to twice differentiable functions $f$ satisfying (1).
Lemma 5.
Let $f$ be a twice differentiable mapping on $[a,b]$ which satisfies (1).
Then, we have the following estimate:
$$\left| \int_a^b f(x)\,dx - (b-a)\, \frac{f(a)+f(b)}{2} \right| \ \leq\ \frac{(b-a)^3}{24}\,(M_2 - m_2). \tag{68}$$
Proof. 
In order to derive estimate (68), we recall that the classical first-order Taylor Formula (2) enables us to write the polynomial $\Pi_{[a,b]}(f)$ as in (46). Then, by integrating (46) between $a$ and $b$, we obtain
$$\int_a^b \big( f(x) - \Pi_{[a,b]}(f)(x) \big)\,dx \ =\ \int_a^b \frac{(x-a)(b-x)}{b-a}\, \big( \epsilon_{x,1}(a) - \epsilon_{x,1}(b) \big)\,dx. \tag{69}$$
However, one can easily show that the $P_1$-Lagrange interpolation polynomial $\Pi_{[a,b]}(f)$ given by (36) also fulfills
$$\int_a^b \Pi_{[a,b]}(f)(x)\,dx \ =\ \frac{(b-a)}{2}\, \big( f(a) + f(b) \big). \tag{70}$$
Now, if we introduce the well-known quantity $E(f)$, called the quadrature error and defined by
$$E(f) \ \equiv\ \int_a^b f(x)\,dx - \frac{(b-a)}{2}\, \big( f(a) + f(b) \big), \tag{71}$$
Equations (69) and (70) lead to the two following inequalities:
$$E(f) \ \geq\ \frac{1}{2(b-a)} \int_a^b (x-a)(b-x)\, \big( m_2\,(x-a) - M_2\,(b-x) \big)\,dx, \tag{72}$$
and
$$E(f) \ \leq\ \frac{1}{2(b-a)} \int_a^b (x-a)(b-x)\, \big( M_2\,(x-a) - m_2\,(b-x) \big)\,dx, \tag{73}$$
where we used inequality (3) for $\epsilon_{x,1}(a)$ and $\epsilon_{x,1}(b)$, with obvious adaptations.
One can now observe that, in (72) and (73), the two integrals $I$ and $J$ defined by
$$I = \int_a^b (b-x)(x-a)^2\,dx \quad \text{and} \quad J = \int_a^b (x-a)(b-x)^2\,dx \tag{74}$$
can be computed as follows.
Let us consider in (74) the substitution $x = \frac{a+b}{2} + \frac{b-a}{2}\,t$; then, we obtain
$$I = \left( \frac{b-a}{2} \right)^{4} \int_{-1}^{1} (1-t)(1+t)^2\,dt = \frac{(b-a)^4}{12}, \tag{75}$$
and
$$J = \left( \frac{b-a}{2} \right)^{4} \int_{-1}^{1} (1+t)(1-t)^2\,dt = \frac{(b-a)^4}{12}. \tag{76}$$
Finally, to obtain an upper bound for $|E(f)|$, owing to (72), (73), (75), and (76), we obtain
$$|E(f)| \ =\ \left| \int_a^b f(x)\,dx - \frac{(b-a)}{2}\, \big( f(a) + f(b) \big) \right| \ \leq\ \frac{(b-a)^3}{24}\,(M_2 - m_2). \qquad \square \tag{77}$$
Now, we consider the expression of the interpolation polynomial $\Pi_{[a,b]}(f)(x)$ and transform it with the help of the new first-order Taylor-like Formula (34). This will enable us to obtain the following lemma, devoted to the corrected trapezoid formula according to Atkinson’s terminology [14].
Lemma 6.
Let $f$ be a twice differentiable mapping on $[a,b]$ which satisfies (1).
Then, we have the following corrected trapezoidal estimate:
$$\left| \int_a^b f(x)\,dx - (b-a)\, \frac{f(a)+f(b)}{2} + \frac{(b-a)^2\, \big( f'(b) - f'(a) \big)}{12} \right| \ \leq\ \frac{(b-a)^3}{48}\,(M_2 - m_2). \tag{78}$$
Proof. 
We consider the expression obtained in (52) for the interpolation polynomial $\Pi_{[a,b]}(f)(x)$, and we integrate it between $a$ and $b$ to obtain
$$\int_a^b \big( f(x) - \Pi_{[a,b]}(f)(x) \big)\,dx + \frac{(b-a)^2\,\big( f'(b) - f'(a) \big)}{12} \ =\ \int_a^b \frac{(b-x)(x-a)\, \big( \epsilon_{x,2}(a) - \epsilon_{x,2}(b) \big)}{b-a}\,dx, \tag{79}$$
where we used the following result, obtained with the same substitution as the one used to compute the integrals in (74):
$$\int_a^b (b-x)(x-a)\,dx = \frac{(b-a)^3}{6}. \tag{80}$$
Then, due to (51), we also have the following inequality:
$$\big| \epsilon_{x,2}(a) - \epsilon_{x,2}(b) \big| \ \leq\ \frac{(b-a)}{8}\,(M_2 - m_2), \tag{81}$$
and (79) directly gives the result (78) to be proved. $\square$
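A quick numerical illustration of Lemma 6, given here as an editorial aside and not as part of the paper: for the assumed test function $f(x) = e^x$ on $[0,1]$, one has $m_2 = 1$ and $M_2 = e$, and the corrected trapezoidal rule of (78) behaves as follows.

```python
import numpy as np

f, df = np.exp, np.exp
a, b = 0.0, 1.0
exact = np.e - 1.0                                   # integral of exp over [0, 1]
trapezoid = (b - a) * (f(a) + f(b)) / 2
corrected = trapezoid - (b - a) ** 2 * (df(b) - df(a)) / 12

print(abs(exact - trapezoid))                        # plain trapezoid error ~ 0.14
print(abs(exact - corrected))                        # corrected error ~ 2.3e-3
print((b - a) ** 3 / 48 * (np.e - 1.0))              # bound (78) ~ 3.6e-2
```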
We conclude this section with several remarks.
1.
We observe that the quadrature error bound we derived in (78) with the new first-order Taylor-like formula is two times smaller than the one we derived in (68) with the help of the classical Taylor formula.
Furthermore, Cheng and Sun proved in [15] that the best constant one can expect in (78) is equal to $1/(36\sqrt{3}) \simeq 1/62.35$, which is slightly smaller than the $1/48$ we found in (79).
2.
If we consider the particular class of $C^2$ functions $f$ defined on $\mathbb{R}$ which are $(b-a)$-periodic, then $f'(a) = f'(b)$, and the corrected trapezoid Formula (78) becomes the classical one:
$$\left| \int_a^b f(x)\,dx - (b-a)\, \frac{f(a)+f(b)}{2} \right| \ \leq\ \frac{(b-a)^3}{48}\,(M_2 - m_2). \tag{82}$$
In other words, we find that, for this class of periodic functions, the quadrature error bound of the classical trapezoid formula is two times more accurate than the one we found in (77), where the classical first-order Taylor formula was implemented.

4. Conclusions and Perspectives

In this paper, we derived a new first-order Taylor-like formula in order to minimize the unknown remainder which appears in the classical one. This new formula is composed of a linear combination of values of the first derivative of a given function, computed at $(n+1)$ equally spaced points on $[a,b]$.
We also showed that the corresponding new remainder can be minimized by a suitable choice of the set of weights that appear in this linear combination of the first derivative values at the corresponding points.
As a consequence, the bound on the absolute value of the new remainder is $2n$ times smaller than the one that appears in the classical first-order Taylor formula.
Next, we considered two famous applicative contexts of numerical analysis where the Taylor formula is used: the interpolation error and the quadrature error. We then showed that one can obtain a significant improvement in the corresponding error estimates. Namely, Lemmas 3 and 6 proved that the upper bounds of these errors are two times smaller than the usual ones estimated with the classical Taylor formula, in particular when one restricts oneself to the class of periodic functions.
Several other applications of this new first-order Taylor-like formula might be considered, for example, the approximation error involved in ODEs, where the Taylor formula is extensively used to derive the appropriate numerical schemes, or the context of finite elements.
For this last application, when one considers linear second-order elliptic PDEs, due to Céa’s lemma [1], the approximation error is bounded by the interpolation error. Then, the improvement in the interpolation error that we showed in the current work, using the interpolation polynomial defined by (53) in comparison with the standard $P_1$-Lagrange polynomial, will consequently impact the accuracy of the approximation error.
Indeed, we highlighted the corresponding gain one may take into account when building meshes, as soon as a given local threshold of accuracy is fixed for the associated approximations.
Other developments may also be considered, e.g., a generalized high-order Taylor-like formula, on the one hand, or its corresponding extension for functions with several variables, on the other hand.

Author Contributions

Conceptualization, J.C. and H.J.; methodology, J.C. and H.J.; writing—original draft preparation, J.C.; writing—review and editing, J.C.; supervision, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors want to warmly dedicate this research to the memory of André Avez and Gérard Tronel, who largely promoted the passion for the research and teaching of mathematics to their students. We express profound appreciation to Franck Assous who proofread the manuscript. A special dedication is also expressed to the memory of Victor Nacasch who was passionate about probability theory.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chaskalovic, J. Mathematical and Numerical Methods for Partial Differential Equations; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  2. Chaskalovic, J. A probabilistic approach for solutions of deterministic PDE’s as well as their finite element approximations. Axioms 2021, 10, 349. [Google Scholar] [CrossRef]
  3. Chaskalovic, J.; Assous, F. Numerical validation of probabilistic laws to evaluate finite element error estimates. Math. Model. Anal. 2021, 26, 684–694. [Google Scholar] [CrossRef]
  4. Assous, F.; Chaskalovic, J. Indeterminate Constants in Numerical Approximations of PDE’s: A Pilot Study Using Data Mining Techniques. J. Comput Appl. Math. 2014, 270, 462–470. [Google Scholar] [CrossRef]
  5. Kline, M. Mathematical Thought from Ancient to Modern Times; Oxford University Press: Oxford, UK, 1972; Volume 2. [Google Scholar]
  6. Taylor, B. Methodus Incrementorum Directa et Inversa; Innys: London, UK, 1717. [Google Scholar]
  7. Crouzeix, M.; Mignot, A.L. Analyse Numérique des Équations Différentielles, 2nd ed.; Collection Mathématiques Appliquées pour la maîtrise; Masson: Paris, France, 1992. [Google Scholar]
  8. Ciarlet, P.G. The Finite Element Method for Elliptic Problems; Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 2002. [Google Scholar]
  9. Raviart, P.A.; Thomas, J.M. Introduction à L’analyse Numérique des Équations aux Dérivées Partielles; Masson: Paris, France, 1982. [Google Scholar]
  10. Dragomir, S.S.; Cerone, P.; Sofo, A. Some remarks on the trapezoid rule in numerical integration. Indian J. Pure Appl. Math. 2000, 31, 475–494. [Google Scholar]
  11. Bhawana, A. Applications of Quadrature Formulae in Numerical Integration and Its Applications; Eliva Press: Chisinau, Moldova, 2022. [Google Scholar]
  12. Cerone, P.; Dragomir, S.S. Trapezoidal-type rules from an inequalities point of view. In Handbook of Analytic-Computational Methods in Applied Mathematics; Anastassiou, G., Ed.; CRC Press: New York, NY, USA, 2000; pp. 65–134. [Google Scholar]
  13. Barnett, N.S.; Dragomir, S.S. Applications of Ostrowski’s version of the Grüss inequality for trapezoid type rules. Tamkang J. Math. 2006, 37, 163–173. [Google Scholar] [CrossRef]
  14. Atkinson, K.E. An Introduction to Numerical Analysis, 2nd ed.; Wiley and Sons: Hoboken, NJ, USA, 1989. [Google Scholar]
  15. Cheng, X.-L.; Sun, J. A note on the perturbed trapezoid inequality. J. Inequal. Pure and Appl. Math. 2002, 3, 29. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
