Article

Compensated Evaluation of Tensor Product Surfaces in CAGD

by
Jorge Delgado Gracia
Departamento de Matemática Aplicada, Universidad de Zaragoza, 44003 Teruel, Spain
Mathematics 2020, 8(12), 2219; https://doi.org/10.3390/math8122219
Submission received: 15 November 2020 / Revised: 4 December 2020 / Accepted: 7 December 2020 / Published: 14 December 2020
(This article belongs to the Section Mathematics and Computer Science)

Abstract
In computer-aided geometric design, a polynomial surface is usually represented in Bézier form. The usual form of evaluating such a surface is by using an extension of the de Casteljau algorithm. Using error-free transformations, a compensated version of this algorithm is presented, which improves the usual algorithm in terms of accuracy. A forward error analysis illustrating this fact is developed.

1. Introduction

The Horner algorithm is the most common method for the evaluation of polynomials. Important algorithms in computer-aided geometric design (CAGD) need to compute roots of curves and surfaces. In order to compute those roots, some of these algorithms need to evaluate the curves and surfaces accurately at points close to the roots (see [1,2]). These evaluations are ill-conditioned, and accurate evaluation algorithms can play a key role in the performance of such root-finding algorithms. In recent years, it has been shown in the literature that the de Casteljau algorithm outperforms Horner's algorithm, among other evaluation algorithms, from the point of view of accuracy (see [3,4,5,6,7,8]). The de Casteljau algorithm evaluates polynomials represented in Bézier form, that is, using the Bernstein polynomials. In CAGD it is the usual evaluation algorithm for polynomial curves.
In CAGD, polynomials (curves and surfaces) are usually represented in Bernstein form, by using the Bernstein polynomials of degree n. A polynomial in Bézier form is evaluated by the de Casteljau algorithm in the univariate case and by an extended version in the multivariate case. The error analysis of these algorithms in [6,7] shows a relative error bound of the following form:
$\text{Condition number} \times O(u),$  (1)
where u is the unit roundoff of the computing precision. For an ill-conditioned problem, such as the evaluation of a polynomial at parameters very close to a multiple root, the condition number can exceed 1 / u . In that case we can obtain an approximation of the polynomial at the parameter value with almost all its digits being false.
Error-free transformations (EFTs) have been studied by Rump and Ogita in [9,10,11]. In [12], applying EFTs, Graillat and Langlois presented a compensated version of the usual Horner algorithm to evaluate polynomials represented in the power basis. Later, in [13] a compensated de Casteljau algorithm for the evaluation of univariate polynomials was devised. The relative error bound for this algorithm has the following form:
$u + \text{Condition number} \times O(u^2),$
which improves the bound (1) for the usual de Casteljau algorithm.
In [7], an error analysis was performed for the extension of the de Casteljau algorithm for tensor product surfaces in Bernstein-Bézier form. In this paper, applying EFTs, we present a compensated version of this algorithm for the evaluation of those surfaces with improved accuracy.
The layout of the paper is as follows. Section 2 introduces some basic notation and results about error analysis with floating point arithmetic; the EFTs; the de Casteljau algorithm for polynomial curves and its compensated version. Section 3 recalls the extension of the de Casteljau algorithm for the evaluation of tensor product surfaces and the corresponding error analysis. Then, the compensated de Casteljau algorithm for Bézier tensor product surfaces is devised and the corresponding error analysis performed, providing a better bound for the error.

2. Basic Notation and Results

2.1. Floating Point Arithmetic and Forward Error Analysis

Given a real number x, the computed element in floating point arithmetic will be denoted by either f l ( x ) or x ^ . Let us assume that u is the unit roundoff of the arithmetic floating point system we are using. In error analysis, the study of the effect of rounding errors is usually carried out by using one of the following two models.
$fl(a \mathbin{op} b) = (a \mathbin{op} b)(1+\delta)$  or  $fl(a \mathbin{op} b) = \dfrac{a \mathbin{op} b}{1+\delta}$,  with $|\delta| \le u$,
where $op$ denotes any one of the operations $+, -, \times, /$ (for more details see pages 40–41 of [14]). Now let us define
$\gamma_k := \dfrac{ku}{1-ku} = ku + O(u^2),$
where $k \in \mathbb{N}_0$ satisfies $ku < 1$. Given $\delta_1, \dots, \delta_k$ with $|\delta_i| \le u$ for all i, in error analysis it is usual to deal with quantities $\theta_k$ satisfying $\prod_{i=1}^{k}(1+\delta_i) = 1+\theta_k$. In Lemma 3.1 of [14] it was proved that their absolute value is bounded above by $\gamma_k$, that is, $|\theta_k| \le \gamma_k$. The following result summarizes some classic properties in error analysis (see Lemma 3.3 of [14]).
Lemma 1.
i. 
$(1+\theta_k)(1+\theta_j) = 1+\theta_{k+j}$,
ii. 
$\gamma_k \gamma_j \le \gamma_{\min(k,j)}$ for $\max(j,k)\,u \le 1/2$,
iii. 
$i\,\gamma_k \le \gamma_{ik}$,
iv. 
$\gamma_k + u \le \gamma_{k+1}$,
v. 
$\gamma_k + \gamma_j + \gamma_k\gamma_j \le \gamma_{k+j}$.
Condition numbers of the functions to be evaluated are important for the accuracy of the result. Let us now recall some condition numbers related to the evaluation of functions. Given a space of functions U defined on $\Theta \subseteq \mathbb{R}^s$, a basis $B = (b_0, \dots, b_n)$ of U and a function $f = \sum_{i=0}^{n} c_i b_i \in U$, measures of the sensitivity of $f(x)$ to perturbations in $c = (c_j)_{j=0}^{n}$ are important in the error analysis of evaluation algorithms. Thus, given a relative perturbation $\delta = (\delta_i)_{i=0}^{n}$ of the coefficients c, we obtain the function $g = \sum_{i=0}^{n} (1+\delta_i) c_i b_i$, which is related to f. Then, for any $x \in \Theta$,
$|f(x) - g(x)| = \left| \sum_{i=0}^{n} \delta_i c_i b_i(x) \right| \le \|\delta\|_\infty \sum_{i=0}^{n} |c_i b_i(x)|.$
The number
$S_B(f(x)) := \sum_{i=0}^{n} |c_i b_i(x)|,$
plays the role of a condition number for the evaluation of the function f at x using the basis B (see [4,5,15,16,17]).
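As a small illustration, the condition number $S_{B_n}$ can be computed directly from the coefficients and the basis values. The following Python sketch (the function names are illustrative, not from the paper) does this for the Bernstein basis, which is the basis used throughout the rest of the paper:

```python
import math

def bernstein(n, i, t):
    # Bernstein polynomial b_i^n(t) = C(n, i) * t^i * (1 - t)^(n - i).
    return math.comb(n, i) * t**i * (1 - t)**(n - i)

def cond_number(coeffs, t):
    # S_B(f(t)) = sum_i |c_i * b_i^n(t)| for f = sum_i c_i * b_i^n.
    n = len(coeffs) - 1
    return sum(abs(c * bernstein(n, i, t)) for i, c in enumerate(coeffs))
```

When the coefficients alternate in sign and $f(t)$ is small (e.g. near a multiple root), $S_B(f(t))$ can be much larger than $|f(t)|$, which is exactly the ill-conditioned situation discussed above.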
In CAGD, it is usual to require that the basis B be formed by blending functions; that is, each basis function must be nonnegative on $\Theta$, and the sum of all basis functions must equal 1 at every point of $\Theta$. If $B = (b_0, \dots, b_n)$ is a basis of blending functions and $\bar B = (k_0 b_0, \dots, k_n b_n)$ with $k_i \in \mathbb{R} \setminus \{0\}$ for all i, then $S_B(f(x)) = S_{\bar B}(f(x))$.
In floating point arithmetic, given an algorithm for the evaluation of the function f, one obtains the computed value $fl(f(x))$, also denoted $\widehat{f(x)}$. From a practical point of view, it is desirable to obtain an error bound or estimate for the approximation of the exact value $f(x)$ given by $fl(f(x))$. The accuracy of the approximation obtained with an evaluation algorithm depends on:
  • The backward error—that is, the error of the calculations of the algorithm;
  • The difficulty of the evaluated function—that is, the condition number of the function with respect to the basis used as a representation by the evaluation algorithm.
In error analysis, the computed value can be expressed as $fl(f(x)) = g(x) = \sum_{i=0}^{n} (1+\delta_i) c_i b_i(x)$, where $\delta = (\delta_i)_{i=0}^{n}$ is a perturbation of c. Thus, the upper bound of the forward error for evaluation in formula (4) is usually interpreted as the product of the backward error $\|\delta\|_\infty$ and the condition number $S_B(f(x))$ (cf. [14]).

2.2. Error-Free Transformations

Error-free transformations (EFTs) will be used in our algorithms in order to improve accuracy. In particular, the TwoSum and TwoProduct EFTs will be used (see [9]) for computing sums and products, respectively. The algorithm TwoSum was presented by Knuth in [18], whereas the algorithm TwoProduct, due to G. W. Veltkamp, was presented by Dekker in [19]. Algorithms 1 and 2 show TwoSum and TwoProduct, respectively; Algorithm 3 (Split) is used by Algorithm 2. In the listings, ⊕, ⊖ and ⊗ denote floating point addition, subtraction and multiplication.
Algorithm 1 TwoSum algorithm.
Require: a, b
Ensure: [x, y] such that x + y = a + b
x = a ⊕ b
z = x ⊖ a
y = (a ⊖ (x ⊖ z)) ⊕ (b ⊖ z)
Algorithm 2 TwoProduct algorithm.
Require: a , b
Ensure: [x, y] such that x + y = a · b
 1: x = a ⊗ b
 2: [a1, a2] = Split(a)
 3: [b1, b2] = Split(b)
 4: y = a2 ⊗ b2 ⊖ (((x ⊖ a1 ⊗ b1) ⊖ a2 ⊗ b1) ⊖ a1 ⊗ b2)
Algorithm 3 Split algorithm.
Require: a
Ensure: [x, y] such that x + y = a
 1: c = factor ⊗ a    % factor = 2^27 + 1 in IEEE 754 double precision
 2: x = c ⊖ (c ⊖ a)
 3: y = a ⊖ x
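Algorithms 1–3 translate directly into floating point code. The following Python sketch (function names are illustrative) implements them; each transformation returns the rounded result x together with the exact rounding error y:

```python
def two_sum(a, b):
    # Algorithm 1 (Knuth): x = fl(a + b) and x + y = a + b exactly.
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y

def split(a):
    # Algorithm 3 (Dekker): a = x + y, with x and y each fitting in
    # half of the 53-bit IEEE 754 double precision significand.
    factor = 2**27 + 1
    c = factor * a
    x = c - (c - a)
    return x, a - x

def two_product(a, b):
    # Algorithm 2 (Veltkamp/Dekker): x = fl(a * b) and x + y = a * b
    # exactly, provided no underflow or overflow occurs.
    x = a * b
    a1, a2 = split(a)
    b1, b2 = split(b)
    y = a2 * b2 - (((x - a1 * b1) - a2 * b1) - a1 * b2)
    return x, y
```

The exactness of both transformations can be checked with rational arithmetic: the two returned floats of `two_sum(0.1, 0.3)` sum exactly to the exact sum of the two inputs, even though `0.1 + 0.3` alone is rounded.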
Error analyses of both algorithms were presented in Theorem 3.4 of [9] and Théorème 3.14 of [20]. The following result shows a summary of these results.
Theorem 1.
Let F be the set of standard floating point numbers corresponding to a certain floating point arithmetic. If $a, b \in F$, then:
i. 
$[x, y] = TwoSum(a, b)$ verifies
$a + b = x + y, \quad x = a \oplus b, \quad |y| \le u|x|, \quad |y| \le u|a+b|.$
ii. 
$[x, y] = TwoProduct(a, b)$ verifies, if no underflow occurs,
$a \cdot b = x + y, \quad x = a \otimes b, \quad |y| \le u|x|, \quad |y| \le u|a \cdot b|.$

2.3. De Casteljau Algorithm for Polynomial Curves in Bézier Form

The Horner algorithm is the best-known method for polynomial evaluation. It uses the monomial basis $M_n := (m_0^n(t), m_1^n(t), \dots, m_n^n(t))$, $t \in [0,1]$, of the space $P_n$, given by $m_i^n(t) = t^i$, $i = 0, 1, \dots, n$. Given $p(t) = \sum_{i=0}^{n} c_i m_i^n(t)$, the error analysis of the Horner algorithm in Chapter 5 of [14] shows that
$|p(t) - fl(p(t))| \le \gamma_{2n} \sum_{i=0}^{n} |c_i| t^i = \gamma_{2n} S_{M_n}(p(t)), \quad \text{for all } p \in P_n \text{ and } t \in [0,1].$
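For reference, a minimal Horner evaluation in the monomial basis can be sketched in Python as follows (the function name is an illustrative choice, not from the paper):

```python
def horner(coeffs, t):
    # Evaluate p(t) = c_0 + c_1*t + ... + c_n*t^n with n multiplications
    # and n additions, processing coefficients from the leading one down.
    result = 0.0
    for c in reversed(coeffs):
        result = result * t + c
    return result
```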
In CAGD the usual evaluation algorithm for polynomial curves is the de Casteljau algorithm. This algorithm evaluates polynomials represented using the Bernstein basis (see [21]). The Bernstein polynomials of degree n, B n : = ( b 0 n ( t ) , b 1 n ( t ) , , b n n ( t ) ) , t [ 0 , 1 ] , form a basis of P n and are defined by
$b_i^n(t) = \binom{n}{i} t^i (1-t)^{n-i}, \quad i = 0, 1, \dots, n.$
A polynomial
$p(t) = \sum_{i=0}^{n} c_i b_i^n(t) \in P_n$  (6)
is said to be in Bézier form or Bernstein–Bézier form. Algorithm 4 shows the de Casteljau algorithm for the evaluation of polynomials in Bézier form (6).
Algorithm 4 De Casteljau algorithm for the evaluation of p P n at t.
Require: $t \in [0,1]$ and $(c_i)_{i=0}^{n}$
Ensure: $f_0^n(t) \approx \sum_{i=0}^{n} c_i b_i^n(t)$
for j = 0 to n do
f j 0 ( t ) : = c j
end for
for r = 1 to n do
  for j = 0 to n r do
$f_j^r(t) = (1-t)\, f_j^{r-1}(t) + t\, f_{j+1}^{r-1}(t)$
   end for
end for
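Algorithm 4 can be sketched in Python as follows (the function name and the in-place coefficient array are implementation choices, not from the paper):

```python
def de_casteljau(coeffs, t):
    # Evaluate p(t) = sum_i c_i b_i^n(t) by repeated convex combinations:
    # each pass replaces f_j by (1 - t) * f_j + t * f_{j+1}.
    f = list(coeffs)
    n = len(f) - 1
    for r in range(1, n + 1):
        for j in range(n - r + 1):
            f[j] = (1 - t) * f[j] + t * f[j + 1]
    return f[0]
```

By the linear precision property of the Bernstein basis, the coefficients $c_i = i/n$ reproduce the polynomial $p(t) = t$, which gives a quick sanity check.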
A corner cutting algorithm is an algorithm such that each step is formed by linear convex combinations (see [6]). The de Casteljau algorithm is a corner cutting algorithm. In [6] an error analysis of corner cutting algorithms was carried out, which for the particular case of the de Casteljau algorithm can be written as
$|p(t) - fl(p(t))| \le \gamma_{2n} \sum_{i=0}^{n} |c_i|\, b_i^n(t) = \gamma_{2n} S_{B_n}(p(t)), \quad \text{for all } p \in P_n \text{ and } t \in [0,1].$
In addition, the optimal conditioning of the Bernstein basis for polynomial evaluation among all bases formed by nonnegative polynomials on [0,1] was shown in [5]. Thus, there does not exist another basis of $P_n$, up to positive scaling, formed by nonnegative polynomials on [0,1] that is better conditioned for every $p \in P_n$ at every point $t \in [0,1]$. In particular, we have $S_{B_n}(p(t)) \le S_{M_n}(p(t))$ for all $p \in P_n$ and $t \in [0,1]$. Hence, the part of the error bound corresponding to the condition number is lower for the de Casteljau algorithm than for the Horner algorithm. In fact, the numerical experiments in [3] show that algorithms using the Bernstein representation, like the de Casteljau algorithm, present better stability properties than the Horner algorithm.

2.4. Compensated Evaluation Algorithms for Bézier Curves

It is usual to apply EFTs (see [9] and Section 2.2) in order to devise compensated evaluation algorithms providing more accurate results. Hence, in [22,23] Graillat, Langlois and Louvet devised a compensated Horner algorithm for the evaluation of a polynomial in monomial form. In Theorem 5 of [22] it was proved that the evaluation of a degree n polynomial with the compensated Horner algorithm provides an approximation f l ( p ( t ) ) verifying
$|p(t) - fl(p(t))| \le u\,|p(t)| + \gamma_{2n}^2\, S_{M_n}(p(t)).$
In [13] a compensated de Casteljau algorithm for the evaluation of polynomial curves in Bernstein–Bézier form was presented. In Theorem 5 of [13] it was proved that the evaluation of a degree n polynomial with the compensated de Casteljau algorithm provides an approximation $fl(p(t))$ verifying
$|p(t) - fl(p(t))| \le u\,|p(t)| + 2\gamma_{3n}^2\, S_{B_n}(p(t)).$
According to the previous bound, for problems where
$\dfrac{2\gamma_{3n}^2\, S_{B_n}(p(t))}{|p(t)|} < u,$
the relative error of the approximations provided by the compensated de Casteljau algorithm is of the order of u.

3. Evaluation Algorithms for Tensor Product Bézier Surfaces

In CAGD, tensor product polynomial surfaces are usually represented in Bernstein–Bézier form (see [21]) by using tensor product Bernstein systems.
Definition 1.
Let $B_m = (b_0^m, \dots, b_m^m)$ and $B_n = (b_0^n, \dots, b_n^n)$ be two Bernstein systems defined on [0,1], where $b_i^k$, $i = 0, 1, \dots, k$, are the Bernstein polynomials of degree k. The system $B_m \otimes B_n := (b_i^m(x)\, b_j^n(y))_{i=0,\dots,m;\, j=0,\dots,n}$ is called a tensor product Bernstein system, and the surface
$F(x,y) = \sum_{i=0}^{m} \sum_{j=0}^{n} P_{ij}\, b_i^m(x)\, b_j^n(y), \quad (x,y) \in [0,1] \times [0,1],$  (7)
is called a tensor product Bézier surface.
A tensor product Bézier surface can be evaluated by a de Casteljau-type algorithm inspired by the de Casteljau evaluation algorithm for Bézier curves (see [21]). By considering the components of the points $P_{ij}$, the evaluation of (7) reduces to the evaluation of scalar functions. Hence, based on the de Casteljau algorithm for Bézier curves, the corresponding evaluation algorithm for tensor product Bézier surfaces is shown in Algorithm 5.
Algorithm 5 De Casteljau algorithm for the evaluation of F in (7).
Require: $(x,y) \in [0,1] \times [0,1]$ and $(f_{ij})_{i=0,j=0}^{m,n}$
Ensure: $f_{00}^{mn}(x,y) \approx \sum_{i=0}^{m} \sum_{j=0}^{n} f_{ij}\, b_i^m(x)\, b_j^n(y)$
for i = 0 to m do
  for j = 0 to n do
f i j 00 ( x , y ) = f i j
   end for
end for
for i = 0 to m do
  for s = 1 to n do
   for j = 0 to n s do
$f_{ij}^{0s}(x,y) = (1-y)\, f_{ij}^{0,s-1}(x,y) + y\, f_{i,j+1}^{0,s-1}(x,y)$
    end for
   end for
end for
for r = 1 to m do
  for i = 0 to m r do
$f_{i0}^{rn}(x,y) = (1-x)\, f_{i0}^{r-1,n}(x,y) + x\, f_{i+1,0}^{r-1,n}(x,y)$
   end for
end for
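Algorithm 5 first collapses each row of coefficients in the y-direction and then collapses the resulting column in the x-direction. A Python sketch (illustrative names, not from the paper) that reuses the curve algorithm:

```python
def de_casteljau_curve(coeffs, t):
    # One-dimensional de Casteljau: evaluate sum_i c_i b_i^n(t).
    f = list(coeffs)
    n = len(f) - 1
    for r in range(1, n + 1):
        for j in range(n - r + 1):
            f[j] = (1 - t) * f[j] + t * f[j + 1]
    return f[0]

def de_casteljau_surface(P, x, y):
    # P is an (m+1) x (n+1) grid of coefficients f_ij.
    # Stage 1 (inner loops of Algorithm 5): collapse each row at y.
    column = [de_casteljau_curve(row, y) for row in P]
    # Stage 2 (final loops): collapse the resulting column at x.
    return de_casteljau_curve(column, x)
```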
In Theorem 5 of [7], error analyses of algorithms evaluating tensor product surfaces were performed. Taking into account the roundoff error when computing 1 − t, for the particular case of tensor product Bézier surfaces we have the following error analysis of Algorithm 5.
Theorem 2.
Let us consider the system of functions $B_{mn} = (b_i^m b_j^n)_{i=0,\dots,m;\, j=0,\dots,n}$ defined on $[0,1] \times [0,1]$. Let $F(x,y)$ be given by (7), and let us suppose that $3(m+n)u < 1$, where u is the unit roundoff. Then, the value $\hat F(x,y) = \hat f_{00}^{mn}$ computed in floating point arithmetic through Algorithm 5 satisfies
$|F(x,y) - \hat F(x,y)| \le \gamma_{3(m+n)}\, S_{B_{mn}}(F(x,y)).$

Compensated de Casteljau Evaluation Algorithm for Tensor Product Bézier Surfaces

In this section we devise a compensated de Casteljau algorithm for the evaluation of tensor product surfaces—that is, a compensated version of Algorithm 5. In order to track the local errors at each step, the following EFTs will be used:
$[\hat r_y, \rho_y] = TwoSum(1, -y)$, $\quad [\hat r_x, \rho_x] = TwoSum(1, -x)$,
$[P_{1,y}, \pi_{ij}^{0s}] = TwoProduct(\hat r_y, \hat f_{ij}^{0,s-1})$, $\quad [P_{1,x}, \pi_{i0}^{rn}] = TwoProduct(\hat r_x, \hat f_{i0}^{r-1,n})$,
$[P_{2,y}, \sigma_{ij}^{0s}] = TwoProduct(y, \hat f_{i,j+1}^{0,s-1})$, $\quad [P_{2,x}, \sigma_{i0}^{rn}] = TwoProduct(x, \hat f_{i+1,0}^{r-1,n})$,
$[\hat f_{ij}^{0s}, \xi_{ij}^{0s}] = TwoSum(P_{1,y}, P_{2,y})$, $\quad [\hat f_{i0}^{rn}, \xi_{i0}^{rn}] = TwoSum(P_{1,x}, P_{2,x})$.
Then we can describe the error in the following way:
$l_{ij}^{0s} = \pi_{ij}^{0s} + \sigma_{ij}^{0s} + \xi_{ij}^{0s} + \rho_y \cdot \hat f_{ij}^{0,s-1}$, $\quad l_{i0}^{rn} = \pi_{i0}^{rn} + \sigma_{i0}^{rn} + \xi_{i0}^{rn} + \rho_x \cdot \hat f_{i0}^{r-1,n}$,
$(1-y) \cdot \hat f_{ij}^{0,s-1} + y \cdot \hat f_{i,j+1}^{0,s-1} = \hat f_{ij}^{0s} + l_{ij}^{0s}$, $\quad (1-x) \cdot \hat f_{i0}^{r-1,n} + x \cdot \hat f_{i+1,0}^{r-1,n} = \hat f_{i0}^{rn} + l_{i0}^{rn}$.
Now let us define the global errors at each step as
$e_{ij}^{0s} := f_{ij}^{0s} - \hat f_{ij}^{0s}, \quad e_{i0}^{rn} := f_{i0}^{rn} - \hat f_{i0}^{rn}.$
It can be seen that the global errors satisfy the following expressions:
$e_{ij}^{0s} = (1-y) \cdot e_{ij}^{0,s-1} + y \cdot e_{i,j+1}^{0,s-1} + l_{ij}^{0s}, \quad e_{i0}^{rn} = (1-x) \cdot e_{i0}^{r-1,n} + x \cdot e_{i+1,0}^{r-1,n} + l_{i0}^{rn}.$  (8)
Since $F(x,y) = f_{00}^{mn}$, from the definition of the global errors it follows that
$F(x,y) = \hat f_{00}^{mn} + e_{00}^{mn}.$  (9)
Taking into account the previous discussion, Algorithm 6 shows the corresponding compensated version of the de Casteljau algorithms for tensor product Bézier surfaces.
Algorithm 6 Compensated de Casteljau algorithm for the evaluation of F in (7).
Require: $(x,y) \in [0,1] \times [0,1]$ and $(f_{ij})_{i=0,j=0}^{m,n}$
Ensure: $res \approx \sum_{i=0}^{m} \sum_{j=0}^{n} f_{ij}\, b_i^m(x)\, b_j^n(y)$
$[\hat r_y, \rho_y] = TwoSum(1, -y)$
for i = 0 to m do
  for j = 0 to n do
$\hat f_{ij}^{00}(x,y) = f_{ij}$
$\hat e_{ij}^{00}(x,y) = 0$
   end for
end for
for i = 0 to m do
  for s = 1 to n do
   for j = 0 to n s do
$[P_{1,y}, \pi_{ij}^{0s}] = TwoProduct(\hat r_y, \hat f_{ij}^{0,s-1}(x,y))$
$[P_{2,y}, \sigma_{ij}^{0s}] = TwoProduct(y, \hat f_{i,j+1}^{0,s-1}(x,y))$
$[\hat f_{ij}^{0s}(x,y), \xi_{ij}^{0s}] = TwoSum(P_{1,y}, P_{2,y})$
$\hat l_{ij}^{0s} = \pi_{ij}^{0s} \oplus \sigma_{ij}^{0s} \oplus \xi_{ij}^{0s} \oplus \rho_y \otimes \hat f_{ij}^{0,s-1}(x,y)$
$\hat e_{ij}^{0s}(x,y) = \hat l_{ij}^{0s} \oplus \hat r_y \otimes \hat e_{ij}^{0,s-1}(x,y) \oplus y \otimes \hat e_{i,j+1}^{0,s-1}(x,y)$
    end for
   end for
end for
$[\hat r_x, \rho_x] = TwoSum(1, -x)$
for r = 1 to m do
  for i = 0 to m r do
$[P_{1,x}, \pi_{i0}^{rn}] = TwoProduct(\hat r_x, \hat f_{i0}^{r-1,n}(x,y))$
$[P_{2,x}, \sigma_{i0}^{rn}] = TwoProduct(x, \hat f_{i+1,0}^{r-1,n}(x,y))$
$[\hat f_{i0}^{rn}(x,y), \xi_{i0}^{rn}] = TwoSum(P_{1,x}, P_{2,x})$
$\hat l_{i0}^{rn} = \pi_{i0}^{rn} \oplus \sigma_{i0}^{rn} \oplus \xi_{i0}^{rn} \oplus \rho_x \otimes \hat f_{i0}^{r-1,n}(x,y)$
$\hat e_{i0}^{rn}(x,y) = \hat l_{i0}^{rn} \oplus \hat r_x \otimes \hat e_{i0}^{r-1,n}(x,y) \oplus x \otimes \hat e_{i+1,0}^{r-1,n}(x,y)$
   end for
end for
$res = \hat f_{00}^{mn} \oplus \hat e_{00}^{mn}$
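Putting the pieces together, Algorithm 6 can be sketched in Python (illustrative names; the EFT helpers repeat Algorithms 1–3). The working values f̂ are updated with EFTs, the error accumulators ê are propagated with ordinary floating point arithmetic, and the two are added at the end:

```python
def two_sum(a, b):
    # EFT for the sum: x = fl(a + b) and x + y = a + b exactly.
    x = a + b
    z = x - a
    return x, (a - (x - z)) + (b - z)

def two_product(a, b):
    # EFT for the product: x = fl(a * b) and x + y = a * b exactly
    # (no underflow), using Dekker's splitting inline.
    x = a * b
    f = 2**27 + 1
    c = f * a; a1 = c - (c - a); a2 = a - a1
    d = f * b; b1 = d - (d - b); b2 = b - b1
    return x, a2 * b2 - (((x - a1 * b1) - a2 * b1) - a1 * b2)

def compensated_surface(P, x, y):
    # Compensated de Casteljau for a tensor product Bezier surface:
    # a sketch of Algorithm 6 for an (m+1) x (n+1) coefficient grid P.
    m, n = len(P) - 1, len(P[0]) - 1
    f = [list(row) for row in P]                  # working values f_ij
    e = [[0.0] * (n + 1) for _ in range(m + 1)]   # error accumulators
    r_y, rho_y = two_sum(1.0, -y)                 # r_y = fl(1 - y), error rho_y
    for i in range(m + 1):
        for s in range(1, n + 1):
            for j in range(n - s + 1):
                f_old = f[i][j]
                p1, pi = two_product(r_y, f_old)
                p2, sg = two_product(y, f[i][j + 1])
                f[i][j], xi = two_sum(p1, p2)
                loc = pi + sg + xi + rho_y * f_old        # local error l
                e[i][j] = loc + r_y * e[i][j] + y * e[i][j + 1]
    r_x, rho_x = two_sum(1.0, -x)
    for r in range(1, m + 1):
        for i in range(m - r + 1):
            f_old = f[i][0]
            p1, pi = two_product(r_x, f_old)
            p2, sg = two_product(x, f[i + 1][0])
            f[i][0], xi = two_sum(p1, p2)
            loc = pi + sg + xi + rho_x * f_old
            e[i][0] = loc + r_x * e[i][0] + x * e[i + 1][0]
    return f[0][0] + e[0][0]
```

The result should agree with an exact rational evaluation of the same surface to roughly working precision whenever the problem is not too ill-conditioned.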
Now an error analysis of the compensated de Casteljau algorithm for the evaluation of tensor product surfaces (Algorithm 6) will be carried out. First, an auxiliary result will be proved.
Lemma 2.
Let $F(x,y) = \sum_{i=0}^{m} \sum_{j=0}^{n} f_{ij}\, b_i^m(x)\, b_j^n(y)$ be a bivariate polynomial and $(x,y) \in [0,1] \times [0,1]$. Then, the de Casteljau algorithm for tensor product surfaces, i.e., Algorithm 5, verifies:
i. 
$\sum_{j=0}^{n-s} |f_{ij}^{0s}|\, b_j^{n-s}(y) \le \sum_{j=0}^{n-s+1} |f_{ij}^{0,s-1}|\, b_j^{n-s+1}(y) \le \cdots \le \sum_{j=0}^{n} |f_{ij}|\, b_j^{n}(y), \quad \text{for all } i \in \{0, 1, \dots, m\}.$
ii. 
$\sum_{i=0}^{m-r} |f_{i0}^{rn}|\, b_i^{m-r}(x) \le \sum_{i=0}^{m-r+1} |f_{i0}^{r-1,n}|\, b_i^{m-r+1}(x) \le \cdots \le \sum_{i=0}^{m} |f_{i0}^{0n}|\, b_i^{m}(x) \le \sum_{i=0}^{m} \sum_{j=0}^{n} |f_{ij}|\, b_i^m(x)\, b_j^n(y).$
Proof. 
i.
Since $f_{ij}^{0s} = (1-y)\, f_{ij}^{0,s-1} + y\, f_{i,j+1}^{0,s-1}$ and $y \in [0,1]$, we have
$|f_{ij}^{0s}| \le (1-y)\, |f_{ij}^{0,s-1}| + y\, |f_{i,j+1}^{0,s-1}|.$
By using the recurrence relation of the Bernstein polynomials, $b_i^k(t) = (1-t)\, b_i^{k-1}(t) + t\, b_{i-1}^{k-1}(t)$, we can derive
$\sum_{j=0}^{n-s} |f_{ij}^{0s}|\, b_j^{n-s}(y) \le \sum_{j=0}^{n-s} \left( (1-y)\, |f_{ij}^{0,s-1}| + y\, |f_{i,j+1}^{0,s-1}| \right) b_j^{n-s}(y) = |f_{i0}^{0,s-1}|\, b_0^{n-s+1}(y) + \sum_{j=1}^{n-s} |f_{ij}^{0,s-1}| \left( (1-y)\, b_j^{n-s}(y) + y\, b_{j-1}^{n-s}(y) \right) + |f_{i,n-s+1}^{0,s-1}|\, b_{n-s+1}^{n-s+1}(y) = \sum_{j=0}^{n-s+1} |f_{ij}^{0,s-1}|\, b_j^{n-s+1}(y).$
By iterating this procedure, we obtain all the inequalities in i.
ii.
Analogous to i.
The error analysis for $\hat f_{00}^{mn}$ was already performed in [7] (and recalled in Theorem 2). Hence, let us see how the roundoff errors affect the computation of $\hat e_{00}^{mn}$ in floating point arithmetic.
Theorem 3.
Let $\hat e_{00}^{mn}$ be the value computed by Algorithm 6 as an approximation of the exact value $e_{00}^{mn}$ in (8). If no underflow occurs, then
$|e_{00}^{mn} - \hat e_{00}^{mn}| \le \gamma_{3(m+n+1)}^2 \sum_{i=0}^{m} \sum_{j=0}^{n} |f_{ij}|\, b_i^m(x)\, b_j^n(y).$
Proof. 
By formula (8) and using i of Lemma 1, we can prove by induction on $s \in \{1, \dots, n\}$ that
$\hat e_{ij}^{0s} = e_{ij}^{0s} + \sum_{q=1}^{s} \sum_{k=0}^{n-q} l_{i,j+k}^{0q}\, \theta_{3(n+1-q)+2}\, b_k^{n-q}(y),$
for $i = 0, 1, \dots, m$ and $j = 0, 1, \dots, n-s$. Then, analogously, we can also prove by induction on $r \in \{1, \dots, m\}$ that
$\hat e_{i0}^{rn} = e_{i0}^{rn} + \sum_{s=1}^{n} \sum_{k=0}^{r} \sum_{j=0}^{n-s} l_{i+k,j}^{0s}\, \theta_{3(n+r+1-s)}\, b_k^{r}(x)\, b_j^{n-s}(y) + \sum_{q=1}^{r} \sum_{k=0}^{r-q} l_{i+k,0}^{qn}\, \theta_{3(r+1-q)+2}\, b_k^{r-q}(x),$  (10)
for $i = 0, 1, \dots, m-r$.
By formula (10) for r = m we can deduce that
$\hat e_{00}^{mn} = e_{00}^{mn} + \sum_{s=1}^{n} \sum_{i=0}^{m} \sum_{j=0}^{n-s} l_{ij}^{0s}\, \theta_{3(m+n+1-s)}\, b_i^{m}(x)\, b_j^{n-s}(y) + \sum_{r=1}^{m} \sum_{i=0}^{m-r} l_{i0}^{rn}\, \theta_{3(m+1-r)+2}\, b_i^{m-r}(x).$  (11)
By Theorem 1 we can derive
$|\rho_y| \le u\,|r_y|$ and $|\hat r_y| \le (1+u)\,|r_y|,$
$|\pi_{ij}^{0s}| \le u\,|\hat r_y \otimes \hat f_{ij}^{0,s-1}| \le (u+u^2)\, |\hat f_{ij}^{0,s-1}| \cdot |r_y|,$
$|\sigma_{ij}^{0s}| \le u\,|y \cdot \hat f_{i,j+1}^{0,s-1}|,$
$|\xi_{ij}^{0s}| \le u\,|\hat r_y \otimes \hat f_{ij}^{0,s-1} \oplus y \otimes \hat f_{i,j+1}^{0,s-1}| = u\,|\hat r_y \times \hat f_{ij}^{0,s-1} + y \times \hat f_{i,j+1}^{0,s-1} - \pi_{ij}^{0s} - \sigma_{ij}^{0s}| \le (u+2u^2+u^3)\, |r_y| \cdot |\hat f_{ij}^{0,s-1}| + (u+u^2)\, |y| \cdot |\hat f_{i,j+1}^{0,s-1}|.$  (12)
By the formulas in (12) we deduce that
$|l_{ij}^{0s}| \le |\pi_{ij}^{0s}| + |\sigma_{ij}^{0s}| + |\xi_{ij}^{0s}| + |\hat f_{ij}^{0,s-1}| \cdot |\rho_y| \le (3u+3u^2+u^3)\, |r_y| \cdot |\hat f_{ij}^{0,s-1}| + (2u+u^2)\, |y| \cdot |\hat f_{i,j+1}^{0,s-1}| \le 3u \left( |r_y| \cdot |\hat f_{ij}^{0,s-1}| + |y| \cdot |\hat f_{i,j+1}^{0,s-1}| \right) + O(u^2).$  (13)
Analogously, we can deduce that
$|l_{i0}^{rn}| \le 3u \left( |r_x| \cdot |\hat f_{i0}^{r-1,n}| + |x| \cdot |\hat f_{i+1,0}^{r-1,n}| \right) + O(u^2).$  (14)
Taking into account that $|r_y| = r_y$, $|r_x| = r_x$, $|x| = x$ and $|y| = y$ for $x, y \in [0,1]$, and using the well-known recurrence relation for the Bernstein polynomials, $b_i^k(t) = (1-t)\, b_i^{k-1}(t) + t\, b_{i-1}^{k-1}(t)$, it can be derived that
$\sum_{j=0}^{n-s} \left( |r_y| \cdot |\hat f_{ij}^{0,s-1}| + |y| \cdot |\hat f_{i,j+1}^{0,s-1}| \right) b_j^{n-s}(y) = \sum_{j=0}^{n-s+1} |\hat f_{ij}^{0,s-1}| \cdot b_j^{n-s+1}(y)$ and $\sum_{i=0}^{m-r} \left( |r_x| \cdot |\hat f_{i0}^{r-1,n}| + |x| \cdot |\hat f_{i+1,0}^{r-1,n}| \right) b_i^{m-r}(x) = \sum_{i=0}^{m-r+1} |\hat f_{i0}^{r-1,n}| \cdot b_i^{m-r+1}(x).$  (15)
By the error analysis of Theorem 2 performed in [7] we have that
$\hat f_{ij}^{0s} = f_{ij}^{0s}\, (1+\theta_{3s}) \quad \text{and} \quad \hat f_{i0}^{rn} = f_{i0}^{rn}\, (1+\theta_{3(r+n)}),$
where $\theta_k$ is a quantity usual in error analysis satisfying $|\theta_k| \le \gamma_k$ (for more details see Section 2.1 and [14]). Then we can obtain
$\sum_{j=0}^{n-s+1} |\hat f_{ij}^{0,s-1}| \cdot b_j^{n-s+1}(y) = (1+\theta_{3(n-1)}) \sum_{j=0}^{n-s+1} |f_{ij}^{0,s-1}| \cdot b_j^{n-s+1}(y)$ and $\sum_{i=0}^{m-r+1} |\hat f_{i0}^{r-1,n}| \cdot b_i^{m-r+1}(x) = (1+\theta_{3(m+n-1)}) \sum_{i=0}^{m-r+1} |f_{i0}^{r-1,n}| \cdot b_i^{m-r+1}(x).$
Then, applying Lemmas 1 and 2 and that $|\theta_k| \le \gamma_k$, we derive
$\sum_{j=0}^{n-s+1} |\hat f_{ij}^{0,s-1}| \cdot b_j^{n-s+1}(y) \le (1+\gamma_{3(n-1)}) \sum_{j=0}^{n} |f_{ij}| \cdot b_j^{n}(y)$ and $\sum_{i=0}^{m-r+1} |\hat f_{i0}^{r-1,n}| \cdot b_i^{m-r+1}(x) \le (1+\gamma_{3(m+n-1)}) \sum_{i=0}^{m} \sum_{j=0}^{n} |f_{ij}| \cdot b_i^m(x)\, b_j^n(y).$
By (13)–(16) we have
$\sum_{s=1}^{n} \sum_{j=0}^{n-s} |l_{ij}^{0s}|\, b_j^{n-s}(y) \le 3nu\, (1+\gamma_{3(n-1)}) \sum_{j=0}^{n} |f_{ij}|\, b_j^{n}(y),$
$\sum_{r=1}^{m} \sum_{i=0}^{m-r} |l_{i0}^{rn}|\, b_i^{m-r}(x) \le 3mu\, (1+\gamma_{3(m+n-1)}) \sum_{i=0}^{m} \sum_{j=0}^{n} |f_{ij}|\, b_i^m(x)\, b_j^n(y).$
By (11), and taking into account that $3nu\,(1+\gamma_{3(n-1)}) \le \gamma_{3n}$ and that $3mu\,(1+\gamma_{3(m+n-1)}) \le \gamma_{3(m+n+1)}$, we conclude
$|\hat e_{00}^{mn} - e_{00}^{mn}| \le \gamma_{3(m+n)}\, \gamma_{3n} \sum_{i=0}^{m} \sum_{j=0}^{n} |f_{ij}|\, b_i^m(x)\, b_j^{n}(y) + \gamma_{3(m+n+1)}\, \gamma_{3m+2} \sum_{i=0}^{m} \sum_{j=0}^{n} |f_{ij}|\, b_i^m(x)\, b_j^n(y).$
Taking into account that $\gamma_{3(m+n)} \le \gamma_{3(m+n+1)}$ and that, by v of Lemma 1, $\gamma_{3m+2} + \gamma_{3n} \le \gamma_{3(m+n+1)}$, the result follows. □
Finally, the following result shows the error analysis of the approximation to a tensor product Bézier surface F ( x , y ) obtained with the compensated de Casteljau algorithm (Algorithm 6).
Theorem 4.
Let $F(x,y) = \sum_{i=0}^{m} \sum_{j=0}^{n} f_{ij}\, b_i^m(x)\, b_j^n(y)$ be a tensor product Bézier polynomial with $f_{ij} \in \mathbb{R}$, and let $res$ be the approximation of $F(x,y)$ computed by Algorithm 6. Then
$|F(x,y) - res| \le u\,|F(x,y)| + \gamma_{3(m+n)+4}^2\, S_{B_{mn}}(F(x,y)).$
Proof. 
By Algorithm 6 we have that
$|res - F(x,y)| = |(\hat f_{00}^{mn} \oplus \hat e_{00}^{mn}) - F(x,y)| = |(1+\delta)(\hat f_{00}^{mn} + e_{00}^{mn} - e_{00}^{mn} + \hat e_{00}^{mn}) - F(x,y)|.$
By (9), and taking into account that $|\delta| \le u$, we deduce
$|res - F(x,y)| = |(1+\delta)(F(x,y) - e_{00}^{mn} + \hat e_{00}^{mn}) - F(x,y)| \le u\,|F(x,y)| + (1+u)\, |e_{00}^{mn} - \hat e_{00}^{mn}|.$
Then, by Theorem 3 we have
$|res - F(x,y)| \le u\,|F(x,y)| + (1+u)\, \gamma_{3(m+n+1)}^2\, S_{B_{mn}}(F(x,y)).$
Since $(1+u)\, \gamma_{3(m+n+1)}^2 \le \gamma_{3(m+n)+4}^2$, we can deduce
$|res - F(x,y)| \le u\,|F(x,y)| + \gamma_{3(m+n)+4}^2\, S_{B_{mn}}(F(x,y)),$
and the result follows. □
Remark 1.
Assuming that ( 3 ( m + n ) + 4 ) u < 1 , the error bound for the evaluation of tensor product surfaces by the compensated de Casteljau algorithm obtained in the previous theorem is much lower than the error bound corresponding to the usual de Casteljau algorithm in Theorem 2. The assumption ( 3 ( m + n ) + 4 ) u < 1 is typical when working in a CAGD framework. In fact, if
$\dfrac{\gamma_{3(m+n)+4}^2\, S_{B_{mn}}(F(x,y))}{|F(x,y)|} < u,$
the relative error of the approximation provided by the compensated de Casteljau algorithm is of the order of u.

4. Conclusions

A compensated version of the de Casteljau algorithm for tensor product functions has been presented. This new method is carried out with the usual floating point arithmetic and operations, and it uses only the same working precision as the data. With this framework, the following bound for the relative error of the new compensated method has been provided:
$u + \gamma_{3(m+n)+4}^2\, \dfrac{S_{B_{mn}}(F(x,y))}{|F(x,y)|},$
which is lower than the bound corresponding to the usual method
$\gamma_{3(m+n)}\, \dfrac{S_{B_{mn}}(F(x,y))}{|F(x,y)|}.$
Hence, the new compensated de Casteljau algorithm for tensor product functions can be quite useful for problems with ill-conditioned situations.

Funding

This work was funded by the Spanish research grant PGC2018-096321-B-I00 (MCIU/AEI), by Gobierno de Aragón (E41-17R) and FEDER 2014–2020 “Construyendo Europa desde Aragón”.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CAGD   Computer-aided geometric design
EFT    Error-free transformation

References

  1. Wei, F.; Zhou, F.; Feng, J. Survey of real root finding of univariate polynomial equation in CAGD/CG. J. Comput.-Aided Des. Comput. Graph. 2011, 23, 193–207. [Google Scholar]
  2. McNamee, J.M. Numerical Methods for Roots of Polynomials: Part 1: Volume 14, (Studies in Computational Mathematics); Elsevier: Amsterdam, The Netherlands, 2007. [Google Scholar]
  3. Delgado, J.; Peña, J.M. Running Relative Error for the Evaluation of Polynomials. SIAM J. Sci. Comput. 2009, 31, 3905–3921. [Google Scholar] [CrossRef]
  4. Farouki, R.T.; Rajan, V.T. On the numerical condition of polynomials in Bernstein form. Comput. Aided Geom. Des. 1987, 4, 191–216. [Google Scholar] [CrossRef]
  5. Farouki, R.T.; Goodman, T.N.T. On the optimal stability of the Bernstein basis. Math. Comp. 1996, 65, 1553–1566. [Google Scholar] [CrossRef] [Green Version]
  6. Mainar, E.; Peña, J.M. Error analysis of corner cutting algorithms. Numer. Algorithms 1999, 22, 41–52. [Google Scholar] [CrossRef]
  7. Delgado, J.; Peña, J.M. Error analysis of efficient evaluation algorithms for tensor product surfaces. J. Comput. Appl. Math. 2008, 219, 156–169. [Google Scholar] [CrossRef] [Green Version]
  8. Delgado, J.; Peña, J.M. Algorithm 960: POLYNOMIAL: An Object-Oriented Matlab Library of Fast and Efficient Algorithms for Polynomials. ACM Trans. Math. Softw. 2016, 42, 19. [Google Scholar] [CrossRef]
  9. Ogita, T.; Rump, S.M.; Oishi, S. Accurate sum and dot product. SIAM J. Sci. Comput. 2005, 26, 1955–1988. [Google Scholar] [CrossRef] [Green Version]
  10. Rump, S.M.; Ogita, T.; Oishi, S. Accurate Floating-Point Summation Part I: Faithful Rounding. SIAM J. Sci. Comput. 2008, 31, 189–224. [Google Scholar] [CrossRef] [Green Version]
  11. Rump, S.M.; Ogita, T.; Oishi, S. Accurate floating-point summation part II: Sign, K-fold faithful and rounding to nearest. SIAM J. Sci. Comput. 2008, 31, 1269–1302. [Google Scholar] [CrossRef] [Green Version]
  12. Graillat, S.; Langlois, P.; Louvet, N. Algorithms for accurate, validated and fast polynomial evaluation. Jpn. J. Ind. Appl. Math. 2009, 26, 191–214. [Google Scholar] [CrossRef] [Green Version]
  13. Jiang, H.; Li, S.; Cheng, L.; Su, F. Accurate evaluation of a polynomial and its derivative in Bernstein form. Comput. Math. Appl. 2010, 60, 744–755. [Google Scholar] [CrossRef] [Green Version]
  14. Higham, N.J. Accuracy and Stability of Numerical Algorithms, 2nd ed.; SIAM: Philadelphia, PA, USA, 2002. [Google Scholar]
  15. Lyche, T.; Peña, J.M. Optimally stable multivariate bases. Adv. Comput. Math. 2004, 20, 149–159. [Google Scholar] [CrossRef]
  16. Peña, J.M. On the optimal stability of bases of univariate functions. Numer. Math. 2002, 91, 305–318. [Google Scholar] [CrossRef]
  17. Peña, J.M. A note on the optimal stability of bases of univariate functions. Numer. Math. 2006, 103, 151–154. [Google Scholar] [CrossRef]
  18. Knuth, D.E. The Art of Computer Programming, Volume 2: Seminumerical Algorithms; Addison Wesley: Boston, MA, USA, 1969. [Google Scholar]
  19. Dekker, T.J. A floating-point technique for extending the available precision. Numer. Math. 1971, 18, 224–242. [Google Scholar] [CrossRef] [Green Version]
  20. Louvet, N. Algorithmes Compensés en Arithmétique Flottante: Précision, Validation, Performances. Ph.D. Thesis, University of Perpignan, Perpignan, France, 2007. [Google Scholar]
  21. Farin, G. Curves and Surfaces for Computer Aided Geometric Design, 5th ed.; Academic Press: San Diego, CA, USA, 2002. [Google Scholar]
  22. Langlois, P.H.; Louvet, N.; Graillat, S. Compensated Horner Scheme; Technical Report RR2005-04; Université de Perpignan Via Domitia: Perpignan, France, 2005. [Google Scholar]
  23. Langlois, P.H.; Louvet, N. How to Ensure a Faithful Polynomial Evaluation with the Compensated Horner Algorithm. In Proceedings of the 18th IEEE Symposium on Computer Arithmetic (ARITH’07), Montepellier, France, 25–27 June 2007; pp. 141–149. [Google Scholar]