Accurate Computations with Generalized Pascal k-Eliminated Functional Matrices

by Jorge Delgado 1,†, Héctor Orera 2,*,† and Juan Manuel Peña 2,†
1 Departamento de Matemática Aplicada, Universidad de Zaragoza, 50018 Zaragoza, Spain
2 Departamento de Matemática Aplicada, Universidad de Zaragoza, 50009 Zaragoza, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2025, 13(2), 303; https://doi.org/10.3390/math13020303
Submission received: 11 December 2024 / Revised: 7 January 2025 / Accepted: 16 January 2025 / Published: 18 January 2025

Abstract

This paper presents an accurate method to obtain the bidiagonal decomposition of some generalized Pascal matrices, including Pascal k-eliminated functional matrices and Pascal symmetric functional matrices. Sufficient conditions to assure that these matrices are either totally positive or inverses of totally positive matrices are provided. In these cases, the presented method can be used to compute their eigenvalues, singular values and inverses with high relative accuracy. Numerical examples illustrate the high accuracy of our approach.

1. Introduction

The famous Pascal’s triangle, formed by the binomial coefficients, appears in many fields of mathematics, including combinatorics and number theory. Triangular and symmetric Pascal matrices arrange the binomial coefficients into matrices that possess many special properties and connections (cf. [1]). Moreover, these matrices have been generalized in several ways (cf. [2,3,4,5,6]). These generalized classes of Pascal matrices also have applications in very different fields, such as signal processing, filter design, probability theory, electrical engineering and combinatorics.
It is also known that Pascal matrices (see [7]) and their generalizations are very ill-conditioned. However, for some generalized Pascal matrices, it has been proved in [8] that many linear algebra computations can be performed with high relative accuracy. These computations include the calculation of all singular values and eigenvalues, the computation of inverses and the solution of some associated linear systems. A value $z$ is calculated with high relative accuracy (HRA) if the relative error of the computed value $\tilde z$ satisfies $|z-\tilde z|/|z|<Ku$, where $K$ is a positive constant independent of the arithmetic precision and $u$ is the unit round-off (see [9,10]). An algorithm can be carried out with HRA if it does not use subtractions except for initial data, that is, if it only includes products, divisions, sums of numbers of the same sign and sums of numbers of different sign involving only initial data.
Here, we prove that the mentioned linear algebra computations can be performed with HRA for some generalized Pascal matrices of [2,3]. In order to prove these results, we first prove that those matrices are totally positive. Let us recall that a matrix is totally positive (TP) if all its minors are non-negative. The class of TP matrices has applications in many different fields, including combinatorics, differential equations, statistics, mechanics, computer-aided geometric design, economics, approximation theory, biology and numerical analysis (cf. [11,12,13,14,15,16]). Nonsingular TP matrices have a bidiagonal decomposition and, when this decomposition can be obtained with HRA, one can use the algorithms of [17,18] to perform the mentioned linear algebra computations with HRA. Following this framework, it has been proved that many computations can be carried out with HRA for some subclasses of TP matrices (cf. [8,19,20,21]), and this is also the approach we follow here.
We will now outline the layout of this paper. In Section 2, we present some basic tools for TP matrices, such as Neville elimination and the bidiagonal decomposition. In Section 3, we present some generalized Pascal matrices, prove that their bidiagonal decompositions can be computed with HRA and show that, under suitable conditions, they are either TP or inverses of TP matrices; in these cases, we guarantee that the mentioned linear algebra computations can be performed with HRA. Finally, Section 4 illustrates the theoretical results with numerical examples showing the high accuracy of our approach.

2. Totally Positive Matrices and Bidiagonal Decomposition

Given a diagonal matrix $D=(d_{ij})_{1\le i,j\le n}$, we denote it by $\operatorname{diag}(d_1,\ldots,d_n)$, with $d_i:=d_{ii}$ for all $i=1,\ldots,n$.
Neville elimination (NE) is an alternative procedure to Gaussian elimination. This algorithm produces zeros in a column of a matrix by adding to each row an appropriate multiple of the previous one. For a nonsingular matrix $A=(a_{ij})_{0\le i,j\le n}$, NE consists of $n$ steps and leads to the following sequence of matrices:
$$A=:A^{(0)}\rightarrow\tilde A^{(0)}\rightarrow A^{(1)}\rightarrow\tilde A^{(1)}\rightarrow\cdots\rightarrow A^{(n)}=\tilde A^{(n)}=U,\qquad(1)$$
where U is an upper triangular matrix.
The matrix $\tilde A^{(k)}=(\tilde a^{(k)}_{ij})_{0\le i,j\le n}$ is obtained from the matrix $A^{(k)}=(a^{(k)}_{ij})_{0\le i,j\le n}$ by a row permutation that moves to the bottom the rows with a zero entry in column $k$ below the main diagonal. For nonsingular TP matrices, it is always possible to perform NE without row exchanges (see [22]). If a row permutation is not necessary at the $k$-th step, we have that $\tilde A^{(k)}=A^{(k)}$. The entries of $A^{(k+1)}=(a^{(k+1)}_{ij})_{0\le i,j\le n}$ can be obtained from $\tilde A^{(k)}=(\tilde a^{(k)}_{ij})_{0\le i,j\le n}$ using the following formula:
$$a^{(k+1)}_{ij}=\begin{cases}\tilde a^{(k)}_{ij}-\dfrac{\tilde a^{(k)}_{ik}}{\tilde a^{(k)}_{i-1,k}}\,\tilde a^{(k)}_{i-1,j}, & \text{if } k\le j<i\le n \text{ and } \tilde a^{(k)}_{i-1,k}\ne 0,\\[4pt] \tilde a^{(k)}_{ij}, & \text{otherwise},\end{cases}\qquad(2)$$
for $k=0,\ldots,n-1$. Then, the $(i,j)$ pivot of the NE of $A$ is defined as
$$p_{ij}=\tilde a^{(j)}_{ij},\qquad 0\le j\le i\le n;$$
when $i=j$, we call $p_{ii}$ a diagonal pivot. We define the $(i,j)$ multiplier of the NE of $A$, with $0\le j<i\le n$, as
$$m_{ij}=\begin{cases}\dfrac{\tilde a^{(j)}_{ij}}{\tilde a^{(j)}_{i-1,j}}=\dfrac{p_{ij}}{p_{i-1,j}}, & \text{if } \tilde a^{(j)}_{i-1,j}\ne 0,\\[4pt] 0, & \text{if } \tilde a^{(j)}_{i-1,j}=0.\end{cases}$$
The multipliers satisfy
$$m_{ij}=0\;\Longrightarrow\;m_{hj}=0\quad\text{for all } h>i.$$
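The elimination described above can be sketched in a few lines of Python (an illustration of ours, assuming no row exchanges are needed, as is the case for nonsingular TP matrices):

```python
import numpy as np

def neville(A):
    """Neville elimination without row exchanges: returns the multipliers
    M (with M[i, j] = m_ij for i > j), the diagonal pivots p_ii and U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.zeros((n, n))
    for k in range(n - 1):               # produce zeros in column k
        for i in range(n - 1, k, -1):    # from the bottom row upwards
            if A[i - 1, k] != 0.0:
                M[i, k] = A[i, k] / A[i - 1, k]
                A[i, k:] -= M[i, k] * A[i - 1, k:]
            # if the entry above is zero, the multiplier is 0 by definition
    return M, np.diag(A).copy(), np.triu(A)

# For the 3x3 symmetric Pascal matrix, all multipliers and pivots equal 1,
# and U is the upper triangular Pascal matrix.
P = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 3.0], [1.0, 3.0, 6.0]])
M, piv, U = neville(P)
```

Note that each multiplier is the quotient of an entry and the entry immediately above it, exactly as in the definition of $m_{ij}$.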
NE is a very useful method to study TP matrices. In fact, NE can be used to characterize nonsingular TP matrices. In [22], the following characterization of nonsingular TP matrices was provided in terms of NE.
Theorem 1
(Theorem 5.4 of [22]). Let $A$ be a nonsingular matrix. Then, $A$ is TP if and only if there are no row exchanges in the NE of $A$ and of $U^T$, and all the pivots of both eliminations are non-negative.
Nonsingular TP matrices can be expressed as a product of non-negative bidiagonal matrices. The following theorem (see Theorem 4.2 and p. 120 of [13]) introduces this representation, which is called the bidiagonal decomposition.
Theorem 2
(cf. Theorem 4.2 of [13]). Let $A=(a_{ij})_{0\le i,j\le n}$ be a nonsingular matrix. Then, $A$ is TP if and only if it admits the following representation:
$$A=F_{n-1}F_{n-2}\cdots F_0\,D\,G_0\cdots G_{n-2}G_{n-1},\qquad(3)$$
where $D$ is the diagonal matrix $\operatorname{diag}(p_{00},\ldots,p_{nn})$ with positive diagonal entries and $F_i$, $G_i$ are the non-negative bidiagonal matrices given by
$$F_i=\begin{pmatrix}1&&&&&&\\0&1&&&&&\\&\ddots&\ddots&&&&\\&&0&1&&&\\&&&m_{i+1,0}&1&&\\&&&&\ddots&\ddots&\\&&&&&m_{n,n-i-1}&1\end{pmatrix},\qquad G_i^T=\begin{pmatrix}1&&&&&&\\0&1&&&&&\\&\ddots&\ddots&&&&\\&&0&1&&&\\&&&\tilde m_{i+1,0}&1&&\\&&&&\ddots&\ddots&\\&&&&&\tilde m_{n,n-i-1}&1\end{pmatrix},\qquad(4)$$
for all $i\in\{0,\ldots,n-1\}$. If, in addition, the entries $m_{ij}$ and $\tilde m_{ij}$ satisfy
$$m_{ij}=0\;\Rightarrow\;m_{hj}=0\quad\forall\, h>i,\qquad \tilde m_{ij}=0\;\Rightarrow\;\tilde m_{hj}=0\quad\forall\, h>i,\qquad(5)$$
then the decomposition is unique.
Let us remark that the entries $m_{ij}$ and $p_{ii}$ appearing in the bidiagonal decomposition given by (3) and (4) are the multipliers and diagonal pivots, respectively, corresponding to the NE of $A$ (see Theorem 4.2 of [13] and the comment below it). The entries $\tilde m_{ij}$ are the multipliers of the NE of $A^T$ (see p. 116 of [13]).
Bidiagonal decomposition can be used to represent more classes of matrices. The following remark shows which hypotheses of Theorem 2 are sufficient for the uniqueness of a factorization following (3).
Remark 1.
If we consider the factorization given by (3)–(5) without any requirement other than the nonsingularity of D, then, by Proposition 2.2 of [23], the uniqueness of (3) still holds.
In [17], the matrix notation $\operatorname{BD}(A)$ was introduced to represent the bidiagonal decomposition of a nonsingular TP matrix:
$$(\operatorname{BD}(A))_{ij}=\begin{cases}m_{ij}, & \text{if } i>j,\\ \tilde m_{ji}, & \text{if } i<j,\\ p_{ii}, & \text{if } i=j.\end{cases}\qquad(6)$$
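Conversely, $\operatorname{BD}(A)$ determines $A$ by multiplying the bidiagonal factors of (3) back together. The following Python sketch of this expansion is our own illustration of the idea behind TNTool's TNExpand, not the library routine itself:

```python
import numpy as np
from math import comb

def expand_bd(B):
    """Multiply out A = F_{n-1} ... F_0 D G_0 ... G_{n-1} from the
    compact representation BD(A) = B of (6)."""
    n = B.shape[0] - 1
    A = np.diag(np.diag(B)).astype(float)   # start with D
    for i in range(n):                      # left-multiply F_0, ..., F_{n-1}
        F = np.eye(n + 1)
        for k in range(i + 1, n + 1):
            F[k, k - 1] = B[k, k - i - 1]   # multiplier m_{k, k-i-1}
        A = F @ A
    for i in range(n):                      # right-multiply G_0, ..., G_{n-1}
        G = np.eye(n + 1)
        for k in range(i + 1, n + 1):
            G[k - 1, k] = B[k - i - 1, k]   # multiplier of the NE of A^T
        A = A @ G
    return A

# A classical example: an all-ones BD expands to the symmetric Pascal
# matrix with entries C(i+j, j).
P = expand_bd(np.ones((5, 5)))
```

The left factors are applied from $F_0$ outwards and the right factors from $G_0$ outwards, so the final product has exactly the ordering of (3).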
Taking into account Corollary 3.3 of [23], we can deduce the following remark.
Remark 2.
The matrix $A$ is nonsingular TP if and only if $\operatorname{BD}(A)$ has all its entries non-negative and its diagonal entries positive. The matrix $A$ is the inverse of a nonsingular TP matrix if and only if $\operatorname{BD}(A)$ has positive diagonal entries and non-positive off-diagonal entries.

3. Pascal k-Eliminated Functional Matrices

The Pascal k-eliminated functional matrix with two variables was introduced in [3] as
$$(P_{n,k})_{ij}=\begin{cases}\dbinom{i+k}{j+k}\,x^{i-j}y^{j}, & i\ge j,\\[2pt] 0, & i<j,\end{cases}\qquad(7)$$
where $i,j=0,\ldots,n$ and $k\in\mathbb{N}\cup\{0\}$. We denote the set of positive integers by $\mathbb{N}$.
In [2], an extension of this matrix depending on $2n$ variables was introduced, based on the following definition. Given real numbers $\{t_k\}$ with $t_0=1$, we define the sequence
$$t_{[n]}=t_n\,t_{[n-1]},$$
with $n\in\mathbb{N}$ and $t_{[0]}:=t_0=1$. We will also use the notation introduced in [2]:
$$t_{[i]+[j]}:=t_{[i]}\,t_{[j]}\quad\text{and}\quad t_{[i]-[j]}:=\frac{t_{[i]}}{t_{[j]}}.$$
Given two sequences $x_1,\ldots,x_n$ and $y_1,\ldots,y_n$ of real numbers, the Pascal k-eliminated functional matrix with $2n$ variables $\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]$, for $k\in\mathbb{N}\cup\{0\}$, is defined as
$$(\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n])_{ij}=\binom{i+k}{j+k}\,x_{[i]-[j]}\,y_{[i]+[j]},\qquad(8)$$
for 0 j i n and 0 otherwise. This matrix is an extension of many well-known families of Pascal matrices.
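For concreteness, the matrix (8) can be built entrywise as in the following Python sketch (the function name and sample usage are ours):

```python
import numpy as np
from math import comb, prod

def phi(k, x, y):
    """Pascal k-eliminated functional matrix Phi_{n,k}[x_1..x_n; y_1..y_n]:
    entry (i, j) is C(i+k, j+k) * x_{[i]-[j]} * y_{[i]+[j]} for j <= i."""
    n = len(x)
    X = [prod(x[:i]) for i in range(n + 1)]   # x_{[i]} = x_1 ... x_i, x_{[0]} = 1
    Y = [prod(y[:i]) for i in range(n + 1)]
    A = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(i + 1):                # lower triangular
            A[i, j] = comb(i + k, j + k) * (X[i] / X[j]) * (Y[i] * Y[j])
    return A

# With k = 0 and all parameters equal to one, this reduces to the
# classical lower triangular Pascal matrix with entries C(i, j).
L = phi(0, [1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
```

The products $x_{[i]}$ and $y_{[i]}$ are accumulated once, so the construction costs $O(n^2)$ operations for the $O(n^2)$ entries.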
Theorem 3.
Given $x_1,\ldots,x_n\in\mathbb{R}$, $y_1,\ldots,y_n\in\mathbb{R}$ and $k\in\mathbb{N}\cup\{0\}$, let $\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ be the $(n+1)\times(n+1)$ lower triangular matrix given by (8).
(i) 
If $x_i,y_i\ne 0$ for $i=1,\ldots,n$, then we have that
$$(\operatorname{BD}(\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]))_{ij}=\begin{cases}y_{[j]}^2, & i=j,\\[2pt] \dfrac{i+k}{i}\,x_iy_i, & i>j,\\[2pt] 0, & i<j.\end{cases}\qquad(9)$$
(ii) 
If $x_1y_1,\ldots,x_ny_n>0$, then $\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ is a nonsingular TP matrix.
(iii) 
If $x_1y_1,\ldots,x_ny_n<0$, then $\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ is the inverse of a nonsingular TP matrix.
Proof. 
Let $B:=\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]=(b_{ij})_{0\le i,j\le n}$ be the matrix defined by (8) and let $D:=\operatorname{diag}\bigl(y_{[k]}/x_{[k]}\bigr)_{0\le k\le n}$. Let us define the matrix $A:=(a_{ij})_{0\le i,j\le n}$ such that $B=AD$. Hence, the nonzero entries of $A$ are given by $a_{ij}=\binom{i+k}{j+k}x_{[i]}y_{[i]}$ for $0\le j\le i\le n$.
Let us now apply NE to $A$. Let us consider $A^{(t)}=(a^{(t)}_{ij})_{0\le i,j\le n}$ as the matrix obtained after performing $t$ steps of NE on $A$. Let us prove by induction that
$$a^{(t)}_{ij}=x_{[i]}y_{[i]}\binom{i+k}{j+k}\frac{\prod_{r=0}^{t-1}(j-r)}{\prod_{r=0}^{t-1}(i-r)}.\qquad(10)$$
For the first step, $t=1$, we see that the multipliers of the NE are
$$m_{i+1,0}=\frac{\binom{i+k+1}{k}x_{[i+1]}y_{[i+1]}}{\binom{i+k}{k}x_{[i]}y_{[i]}}=\frac{i+k+1}{i+1}\,x_{i+1}y_{i+1},$$
for $i=0,\ldots,n-1$. Then, we perform the first step of NE:
$$\begin{aligned}a^{(1)}_{i+1,j}&=\binom{i+k+1}{j+k}x_{[i+1]}y_{[i+1]}-\binom{i+k}{j+k}x_{[i]}y_{[i]}\,m_{i+1,0}\\&=\binom{i+k+1}{j+k}x_{[i+1]}y_{[i+1]}-\binom{i+k}{j+k}x_{[i]}y_{[i]}\,\frac{i+k+1}{i+1}\,x_{i+1}y_{i+1}\\&=x_{[i+1]}y_{[i+1]}\left(\binom{i+k+1}{j+k}-\binom{i+k+1}{j+k}\frac{i-j+1}{i+1}\right)=x_{[i+1]}y_{[i+1]}\binom{i+k+1}{j+k}\frac{j}{i+1}.\end{aligned}$$
Thus, Formula (10) holds for $t=1$. Now, let us assume that (10) is true for $t$ and let us check that the formula also holds for the index $t+1$. First, we compute the multipliers for this step of NE:
$$m_{i+1,t}=\frac{\binom{i+k+1}{t+k}x_{[i+1]}y_{[i+1]}}{\binom{i+k}{t+k}x_{[i]}y_{[i]}}\cdot\frac{\prod_{r=0}^{t-1}(i-r)}{\prod_{r=0}^{t-1}(i+1-r)}=\frac{i+k+1}{i+1}\,x_{i+1}y_{i+1},\qquad i=t,\ldots,n-1.$$
Now, let us perform step $t+1$ of the NE:
$$\begin{aligned}a^{(t+1)}_{i+1,j}&=x_{[i+1]}y_{[i+1]}\binom{i+1+k}{j+k}\frac{\prod_{r=0}^{t-1}(j-r)}{\prod_{r=0}^{t-1}(i+1-r)}-x_{[i]}y_{[i]}\binom{i+k}{j+k}\frac{\prod_{r=0}^{t-1}(j-r)}{\prod_{r=0}^{t-1}(i-r)}\,m_{i+1,t}\\&=x_{[i+1]}y_{[i+1]}\binom{i+1+k}{j+k}\frac{\prod_{r=0}^{t-1}(j-r)}{\prod_{r=0}^{t-1}(i+1-r)}-x_{[i]}y_{[i]}\binom{i+k}{j+k}\frac{\prod_{r=0}^{t-1}(j-r)}{\prod_{r=0}^{t-1}(i-r)}\,\frac{i+k+1}{i+1}\,x_{i+1}y_{i+1}\\&=x_{[i+1]}y_{[i+1]}\binom{i+1+k}{j+k}\frac{\prod_{r=0}^{t-1}(j-r)}{\prod_{r=0}^{t-1}(i+1-r)}\left(1-\frac{i-j+1}{i-t+1}\right).\end{aligned}$$
Hence, we conclude that
$$a^{(t+1)}_{i+1,j}=x_{[i+1]}y_{[i+1]}\binom{i+1+k}{j+k}\frac{\prod_{r=0}^{t-1}(j-r)}{\prod_{r=0}^{t-1}(i+1-r)}\cdot\frac{j-t}{i-t+1}=x_{[i+1]}y_{[i+1]}\binom{i+1+k}{j+k}\frac{\prod_{r=0}^{t}(j-r)}{\prod_{r=0}^{t}(i+1-r)}$$
and that (10) holds for $t+1$. Therefore, we have
$$(\operatorname{BD}(A))_{ij}=\begin{cases}x_{[j]}y_{[j]}, & i=j,\\[2pt] \dfrac{i+k}{i}\,x_iy_i, & i>j,\\[2pt] 0, & i<j.\end{cases}\qquad(11)$$
Since $B=AD$, we can deduce $\operatorname{BD}(B)$ from (11). Since we know the bidiagonal decomposition of $A$, i.e., $A=F_{n-1}\cdots F_0\hat D$ with the multipliers and diagonal pivots given by (11) for $i>j$ and $i=j$, respectively, we see that
$$B=AD=F_{n-1}\cdots F_0\,\hat D\,D=F_{n-1}\cdots F_0\,(\hat D D).$$
Hence, the off-diagonal entries of $\operatorname{BD}(B)$ are equal to the off-diagonal entries of $\operatorname{BD}(A)$ and $(\operatorname{BD}(B))_{ii}=x_{[i]}y_{[i]}\cdot\dfrac{y_{[i]}}{x_{[i]}}=y_{[i]}^2$ for $i=0,\ldots,n$. Therefore, by the uniqueness of the bidiagonal decomposition, we conclude that (9) holds.
For (ii), it is straightforward to check that all the nonzero entries of $\operatorname{BD}(\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n])$ are non-negative whenever $x_iy_i>0$ for all $i=1,\ldots,n$. Moreover, the diagonal pivots are strictly positive since $y_i\ne0$ for all $i=1,\ldots,n$. Hence, $\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ is a nonsingular TP matrix by Remark 2.
Finally, with a proof analogous to that of (ii), (iii) also holds. □
The cases described in (ii) and (iii) of the previous theorem also provide an accurate representation of $\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ that can be used to achieve accurate computations, as the following corollary shows.
Corollary 1.
Given $x_i,y_i\ne0$ for $i=1,\ldots,n$, we can compute (9) with HRA. Moreover, if either the hypotheses of (ii) or of (iii) of Theorem 3 hold, then the following computations for the matrix defined by (8) can be performed with HRA: all the eigenvalues and singular values, the inverse and the solution of the linear systems whose independent term has an alternating sign pattern.
Proof. 
The first part of the result follows from the fact that (9) can be obtained without subtractions. If the hypotheses of Theorem 3 (ii) hold, then the matrix is nonsingular TP and the construction of its bidiagonal decomposition with HRA assures that the linear algebra problems mentioned in the statement of this corollary can be solved with HRA using the algorithms from [17,18]. Finally, if the hypotheses of Theorem 3 (iii) hold, then the matrix is the inverse of a TP matrix, and Section 3.2 of [23] shows how the mentioned linear algebra problems can also be solved with HRA. □
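Formula (9) can also be checked numerically: building $\Phi$ entrywise from (8) and applying Neville elimination to it reproduces the pivots and multipliers predicted by the theorem. A self-contained Python sketch of ours, with hypothetical sample parameters satisfying $x_iy_i>0$:

```python
import numpy as np
from math import comb, prod

def phi(k, x, y):
    """Entrywise construction of Phi_{n,k}[x; y] from (8)."""
    n = len(x)
    X = [prod(x[:i]) for i in range(n + 1)]
    Y = [prod(y[:i]) for i in range(n + 1)]
    return np.array([[comb(i + k, j + k) * X[i] / X[j] * Y[i] * Y[j]
                      if j <= i else 0.0
                      for j in range(n + 1)] for i in range(n + 1)])

def neville(A):
    """Multipliers and diagonal pivots of Neville elimination."""
    A = A.copy()
    n = A.shape[0]
    M = np.zeros((n, n))
    for k in range(n - 1):
        for i in range(n - 1, k, -1):
            if A[i - 1, k] != 0.0:
                M[i, k] = A[i, k] / A[i - 1, k]
                A[i, k:] -= M[i, k] * A[i - 1, k:]
    return M, np.diag(A).copy()

n, k = 5, 2
x = [1.5, 2.0, 0.5, 3.0, 1.0]     # hypothetical data with x_i * y_i > 0
y = [2.0, 1.0, 1.5, 0.5, 2.5]
M, piv = neville(phi(k, x, y))
Y = [prod(y[:i]) for i in range(n + 1)]
assert np.allclose(piv, [Y[i] ** 2 for i in range(n + 1)])       # diagonal of (9)
assert all(np.isclose(M[i, j], (i + k) / i * x[i - 1] * y[i - 1])
           for i in range(1, n + 1) for j in range(i))           # entries i > j
```

Note that the multipliers of a given row do not depend on the column, in agreement with (9).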
Let us recall that the symmetric Pascal matrix $P_n$ is the $(n+1)\times(n+1)$ matrix such that
$$(P_n)_{ij}=\binom{i+j}{j},\quad\text{for all } 0\le i,j\le n.\qquad(12)$$
It is a well-known and interesting result that the bidiagonal decomposition of the symmetric Pascal matrix is formed by all ones (see, for example, [7]).
Proposition 1.
Let $P_n=(p_{ij})_{0\le i,j\le n}$ be the symmetric Pascal matrix whose entries are given by (12). Then, we have that
$$(\operatorname{BD}(P_n))_{ij}=1\quad\text{for all } 0\le i,j\le n.\qquad(13)$$
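This proposition is easy to confirm numerically: running Neville elimination on $P_n$ and on its transpose and assembling $\operatorname{BD}(P_n)$ as in (6) yields a matrix of ones. A short sketch of ours:

```python
import numpy as np
from math import comb

def neville(A):
    """Multipliers and diagonal pivots of Neville elimination."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.zeros((n, n))
    for k in range(n - 1):
        for i in range(n - 1, k, -1):
            if A[i - 1, k] != 0.0:
                M[i, k] = A[i, k] / A[i - 1, k]
                A[i, k:] -= M[i, k] * A[i - 1, k:]
    return M, np.diag(A).copy()

n = 5
P = [[comb(i + j, j) for j in range(n + 1)] for i in range(n + 1)]
M, piv = neville(P)
Mt, _ = neville(np.transpose(P))
# Multipliers of P_n below the diagonal, multipliers of P_n^T above,
# diagonal pivots on the diagonal, as in (6).
BD = np.tril(M, -1) + np.triu(Mt.T, 1) + np.diag(piv)
```

Every entry of the assembled BD matrix equals one, in agreement with (13).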
Let us now consider the symmetric Pascal matrix with $2n$ variables $\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]$,
$$(\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n])_{ij}=\binom{i+j}{j}\,x_{[i]-[j]}\,y_{[i]+[j]},\qquad(14)$$
for $0\le i,j\le n$. From the bidiagonal decomposition of the symmetric Pascal matrix given in Proposition 1, we can obtain the bidiagonal decomposition of this wider class of matrices, as shown in the following result.
Theorem 4.
Let $\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]$ be the matrix defined by (14). Then, we have the following:
(i) 
If $x_i,y_i\ne0$ for $i=1,\ldots,n$, then
$$(\operatorname{BD}(\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]))_{ij}=\begin{cases}y_{[j]}^2, & i=j,\\[2pt] x_iy_i, & i>j,\\[2pt] \dfrac{y_j}{x_j}, & i<j.\end{cases}\qquad(15)$$
(ii) 
If $x_1y_1,\ldots,x_ny_n>0$, then $\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]$ is a nonsingular TP matrix.
(iii) 
If $x_1y_1,\ldots,x_ny_n<0$, then $\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]$ is the inverse of a nonsingular TP matrix.
Proof. 
Let $B:=\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]$ be the matrix defined by (14). By its definition, we have the following factorization for this matrix:
$$B=\operatorname{diag}\bigl(x_{[i]}y_{[i]}\bigr)_{0\le i\le n}\,P_n\,\operatorname{diag}\bigl(y_{[i]}/x_{[i]}\bigr)_{0\le i\le n},\qquad(16)$$
where $P_n$ is the symmetric Pascal matrix with $(P_n)_{ij}=\binom{i+j}{j}$, $0\le i,j\le n$. By (13), we can write $P_n=\hat F_{n-1}\cdots\hat F_0\,\hat D\,\hat G_0\cdots\hat G_{n-1}$, where $\hat F_i$ and $\hat G_i$ are bidiagonal matrices of the form (4) whose nonzero entries are all ones, and the diagonal matrix $\hat D$ reduces to the identity matrix in this case. Hence, we can rewrite (16) as
$$B=\operatorname{diag}\bigl(x_{[i]}y_{[i]}\bigr)_{0\le i\le n}\,\hat F_{n-1}\cdots\hat F_0\,\hat G_0\cdots\hat G_{n-1}\,\operatorname{diag}\bigl(y_{[i]}/x_{[i]}\bigr)_{0\le i\le n}.\qquad(17)$$
In (17), we have a representation of $B$ that relates to its bidiagonal decomposition. In order to retrieve $\operatorname{BD}(B)$ from it, we need to move the diagonal matrices to the center of the formula, between the matrices $\hat F_i$ and the matrices $\hat G_i$. Let us first compute the bidiagonal matrices $F_{n-1},\ldots,F_0$ that satisfy the following:
$$\operatorname{diag}\bigl(x_{[i]}y_{[i]}\bigr)_{0\le i\le n}\,\hat F_{n-1}\cdots\hat F_0=F_{n-1}\cdots F_0\,\operatorname{diag}\bigl(x_{[i]}y_{[i]}\bigr)_{0\le i\le n}.$$
For that, let us pay attention to the relationship $D\hat F_i=F_iD$ for a diagonal matrix $D=\operatorname{diag}(d_i)_{0\le i\le n}$. Whenever $d_k\ne0$ for all $k$, the previous equation is equivalent to $D\hat F_iD^{-1}=F_i$. Hence, the diagonal entries of $F_i$ are equal to those of $\hat F_i$ (all ones), and a nonzero off-diagonal entry at the position $(k+1,k)$ is multiplied by $d_{k+1}/d_k$. Hence, we have that the nonzero off-diagonal entries of $F_i$ are $\dfrac{x_{[k+1]}y_{[k+1]}}{x_{[k]}y_{[k]}}=x_{k+1}y_{k+1}$ for $k=i,\ldots,n-1$.
Now, let us compute the bidiagonal matrices $G_0,\ldots,G_{n-1}$ that verify
$$\hat G_0\cdots\hat G_{n-1}\,\operatorname{diag}\bigl(y_{[i]}/x_{[i]}\bigr)_{0\le i\le n}=\operatorname{diag}\bigl(y_{[i]}/x_{[i]}\bigr)_{0\le i\le n}\,G_0\cdots G_{n-1}.$$
Let us notice that we can use the same strategy for this case. If we consider the equation $D^{-1}\hat G_iD=G_i$, we have once again that the diagonal entries of $G_i$ are ones and that the off-diagonal $(k,k+1)$ entry of $\hat G_i$ is now multiplied by $d_{k+1}/d_k$. Thus, we have that the nonzero off-diagonal entries of $G_i$ are $\dfrac{y_{[k+1]}x_{[k]}}{y_{[k]}x_{[k+1]}}=\dfrac{y_{k+1}}{x_{k+1}}$, for $k=i,\ldots,n-1$.
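Both scaling relations can be confirmed numerically: conjugating a unit bidiagonal matrix by a diagonal matrix keeps the unit diagonal and rescales each off-diagonal entry by the ratio of consecutive diagonal entries. A quick sanity check of ours in Python, with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.uniform(0.5, 2.0, size=5)      # diagonal of D, all nonzero
D, Dinv = np.diag(d), np.diag(1.0 / d)
sub = rng.uniform(0.5, 2.0, size=4)    # subdiagonal of a unit lower bidiagonal
Fhat = np.eye(5)
for k in range(4):
    Fhat[k + 1, k] = sub[k]
F = D @ Fhat @ Dinv                    # the relation D * Fhat * D^{-1} = F
assert np.allclose(np.diag(F), 1.0)    # the unit diagonal is preserved
for k in range(4):
    # the (k+1, k) entry is multiplied by d_{k+1} / d_k
    assert np.isclose(F[k + 1, k], sub[k] * d[k + 1] / d[k])
```

The same computation with $D^{-1}\hat G_iD$ and an upper bidiagonal matrix confirms the second relation.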
Now, rewriting $B$ in terms of the bidiagonal matrices $F_i$ and $G_i$, we see that
$$B=F_{n-1}\cdots F_0\,\operatorname{diag}\bigl(x_{[i]}y_{[i]}\bigr)_{0\le i\le n}\,\operatorname{diag}\bigl(y_{[i]}/x_{[i]}\bigr)_{0\le i\le n}\,G_0\cdots G_{n-1}.$$
Therefore, by the uniqueness of the bidiagonal decomposition, we conclude that $B=F_{n-1}\cdots F_0\,D\,G_0\cdots G_{n-1}$ with $D:=\operatorname{diag}\bigl(x_{[i]}y_{[i]}\bigr)_{0\le i\le n}\operatorname{diag}\bigl(y_{[i]}/x_{[i]}\bigr)_{0\le i\le n}=\operatorname{diag}\bigl(y_{[i]}^2\bigr)_{0\le i\le n}$, and (15) holds.
For (ii), it is straightforward to check that all the entries of $\operatorname{BD}(\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n])$ are non-negative whenever $x_iy_i>0$ for all $i=1,\ldots,n$. Furthermore, the diagonal pivots are all strictly positive since $y_i\ne0$ for all $i=1,\ldots,n$. Then, we conclude that $\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]$ is a nonsingular TP matrix by Remark 2. Moreover, with a proof analogous to that of (ii), (iii) also holds. □
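As with Theorem 3, formula (15) can be verified numerically by building $\Psi$ entrywise from (14) and applying Neville elimination to the matrix and to its transpose. A self-contained Python sketch of ours, with hypothetical sample parameters satisfying $x_iy_i>0$:

```python
import numpy as np
from math import comb, prod

def neville(A):
    """Multipliers and diagonal pivots of Neville elimination."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.zeros((n, n))
    for k in range(n - 1):
        for i in range(n - 1, k, -1):
            if A[i - 1, k] != 0.0:
                M[i, k] = A[i, k] / A[i - 1, k]
                A[i, k:] -= M[i, k] * A[i - 1, k:]
    return M, np.diag(A).copy()

n = 4
x = [2.0, 0.5, 1.5, 3.0]              # hypothetical data with x_i * y_i > 0
y = [1.0, 2.0, 0.5, 1.5]
X = [prod(x[:i]) for i in range(n + 1)]
Y = [prod(y[:i]) for i in range(n + 1)]
Psi = np.array([[comb(i + j, j) * X[i] / X[j] * Y[i] * Y[j]
                 for j in range(n + 1)] for i in range(n + 1)])
M, piv = neville(Psi)                 # lower multipliers and pivots
Mt, _ = neville(Psi.T)                # upper multipliers, via the transpose
assert np.allclose(piv, [Y[i] ** 2 for i in range(n + 1)])          # i = j
assert all(np.isclose(M[i, j], x[i - 1] * y[i - 1])                 # i > j
           for i in range(1, n + 1) for j in range(i))
assert all(np.isclose(Mt[i, j], y[i - 1] / x[i - 1])                # i < j
           for i in range(1, n + 1) for j in range(i))
```

All three branches of (15) are recovered from the elimination, as the theorem predicts.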
As we did previously with the Pascal k-eliminated functional matrices, in the cases (ii) and (iii) of Theorem 4 we presented values of the parameters for which an accurate representation of the matrix $\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]$ can be obtained and used to achieve computations with HRA. We state this property in the following corollary.
Corollary 2.
Given $x_i,y_i\ne0$ for $i=1,\ldots,n$, we can compute (15) with HRA. Moreover, if either the hypotheses of (ii) or of (iii) of Theorem 4 hold, then the following computations for the matrix defined by (14) can be performed with HRA: all the eigenvalues and singular values, the inverse and the solution of the linear systems whose independent term has an alternating sign pattern.
Proof. 
The first part of the result follows from the fact that (15) can be obtained without subtractions. If the hypotheses of Theorem 4 (ii) hold, then the matrix is nonsingular TP and the construction of its bidiagonal decomposition with HRA assures that the linear algebra problems mentioned in the statement of this corollary can be solved with HRA using the algorithms from [17,18]. Finally, if the hypotheses of Theorem 4 (iii) hold, then the matrix is the inverse of a TP matrix, and Section 3.2 of [23] shows how the mentioned linear algebra problems can also be solved with HRA. □

4. Numerical Experiments

As has been pointed out in the proofs of Corollaries 1 and 2, if the bidiagonal decomposition $\operatorname{BD}(A)$ of a nonsingular TP matrix $A$ can be constructed with HRA, then the following linear algebra problems can be solved with HRA using the algorithms from [17,18,24]:
• Computation of all the eigenvalues and singular values of $A$.
• Computation of the inverse $A^{-1}$.
• Computation of the solution of the linear systems $Ax=b$, where $b$ has an alternating pattern of signs.
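The solver step can be sketched as follows: with $\operatorname{BD}(A)$ in hand, $Ax=b$ is solved by sweeping through the bidiagonal factors of (3), one forward substitution per $F_i$, a diagonal scaling, and one back substitution per $G_i$. The Python sketch below is our own illustration of the idea behind TNSolve, not the library routine:

```python
import numpy as np
from math import comb

def bd_solve(B, b):
    """Solve A x = b given the compact representation BD(A) = B of (6),
    using the factorization A = F_{n-1} ... F_0 D G_0 ... G_{n-1} of (3)."""
    n = B.shape[0] - 1
    y = np.array(b, dtype=float)
    for i in range(n - 1, -1, -1):      # apply F_i^{-1}, i = n-1, ..., 0
        for k in range(i + 1, n + 1):   # forward substitution
            y[k] -= B[k, k - i - 1] * y[k - 1]
    y /= np.diag(B)                     # apply D^{-1}
    for i in range(n):                  # apply G_i^{-1}, i = 0, ..., n-1
        for k in range(n, i, -1):       # back substitution
            y[k - 1] -= B[k - i - 1, k] * y[k]
    return y

# Check on the symmetric Pascal matrix (all-ones BD) with an
# alternating-sign right-hand side.
n = 4
A = np.array([[comb(i + j, j) for j in range(n + 1)]
              for i in range(n + 1)], dtype=float)
b = np.array([1.0, -2.0, 3.0, -4.0, 5.0])
x = bd_solve(np.ones((n + 1, n + 1)), b)
```

When $b$ alternates in sign and all the multipliers are positive, every update above combines numbers of the same sign, which is why HRA is preserved precisely for such right-hand sides.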
In [25], the software library TNTool (version January 2018), containing an implementation for Matlab/Octave of the algorithms mentioned above, is available. The corresponding functions of the software library for solving those problems are TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve. By using this software library, several numerical experiments were carried out to illustrate the accuracy of the bidiagonal decompositions of both generalized Pascal matrices presented in this work. We used Matlab R2023b for the numerical experiments presented in this article.
Remark 3.
The bidiagonal decompositions of the generalized Pascal matrices considered in this paper, $\operatorname{BD}(\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n])$ and $\operatorname{BD}(\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n])$ (given by (9) and (15), respectively), can be obtained with HRA at a computational cost of $O(n)$ elementary operations. Then, the function TNSolve solves linear systems of equations with these generalized Pascal matrices at a cost of $O(n^2)$ elementary operations. Analogously, TNInverseExpand provides the inverse, also at a cost of $O(n^2)$ elementary operations. The computation of the eigenvalues and singular values of these matrices with TNEigenValues and TNSingularValues needs $O(n^3)$ elementary operations.

4.1. Example 1

For the first example, the matrices $\Phi_{n,1}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ defined by (8) of orders $n+1=5,10,\ldots,60$ were considered for the case where
$$x=(1,2,\ldots,n)\quad\text{and}\quad y=(1,2,\ldots,n).\qquad(18)$$
First, the singular values of the considered matrices, $\sigma_1^{n+1}>\sigma_2^{n+1}>\cdots>\sigma_{n+1}^{n+1}$, were computed with Mathematica by using 200-digit precision. Then, these singular values were obtained with Matlab in two different ways: the first by using the Matlab function svd, and the second by using the function TNSingularValues of the software library TNTool (see [25]). Figure 1 shows the singular values for the case where $n+1=20$. The differences between the singular values computed with TNSingularValues and the ones obtained with svd for the case $n+1=20$ can be observed: the obtained approximations are quite different, except for the largest singular values.
Then, the relative errors $|\hat\sigma_{n+1}^{n+1}-\sigma_{n+1}^{n+1}|/|\sigma_{n+1}^{n+1}|$ of the approximations $\hat\sigma_{n+1}^{n+1}$ of the minimal singular values $\sigma_{n+1}^{n+1}$ were computed, considering the singular values provided by Mathematica as exact. It was observed that the smaller the singular value, the greater the relative error of the standard method. Figure 2 shows these relative errors for the minimal singular values $\sigma_{n+1}^{n+1}$, $n+1=5,10,\ldots,60$, obtained in Matlab in these two different ways (svd and TNSingularValues). The figures are shown using a logarithmic scale for the Y-axis; hence, when a relative error is zero for a certain $n+1$, the corresponding point does not appear (all the figures in this work showing relative errors use a logarithmic scale for the Y-axis). It can be observed that the results calculated with the new HRA algorithms (the algorithm obtaining the HRA bidiagonal decomposition of the matrix together with TNSingularValues) are very accurate, in contrast to the poor results obtained with the standard algorithm.
Next, the systems of linear equations $\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]X_n=b_n$ for $n+1=5,10,\ldots,60$ were considered, where $b_n$ is a vector whose entries have an alternating pattern of signs and whose absolute values were randomly generated as integers in the interval $[1,1000]$. The systems were solved with Mathematica using 200 digits of precision, and the computed results were considered to be exact. Then, we solved the systems with Matlab in two different ways, as in the case of the singular values:
1.
Using TNSolve and the bidiagonal decomposition of $\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ computed with HRA.
2.
Using the Matlab operator \.
Then, the relative errors $\|X_n-\hat X_n\|_2/\|X_n\|_2$ of the solutions $\hat X_n$ obtained with Matlab were calculated, where $X_n$ are the solutions obtained with Mathematica. Figure 3 shows the results.
In the case of the inverses $(\Phi_{n,k}[x_1,\ldots,x_n;y_1,\ldots,y_n])^{-1}$, they were first obtained with Mathematica using 200-digit precision. Then, they were calculated with Matlab in two ways: first, using TNInverseExpand with the new HRA bidiagonal decomposition; second, using the standard Matlab command inv. Then, the component-wise relative errors corresponding to the approximations obtained using Matlab were computed, taking the results of Mathematica as exact. Figure 4 shows the mean and the maximum of these component-wise relative errors.
Taking into account the numerical results, we can see that the new methods introduced in this work outperform the standard algorithms for the three linear algebra problems that were considered.

4.2. Example 2

For this second example, the symmetric Pascal matrices $\Psi_n[x_1,\ldots,x_n;y_1,\ldots,y_n]$ defined by (14), for $n+1=5,10,\ldots,60$, in the case where $x$ and $y$ are given by (18), were considered. Since these matrices are not triangular, their eigenvalues were also computed with Matlab in two ways:
1.
With the usual Matlab command eig.
2.
With TNEigenValues of the library TNTool, using the bidiagonal decomposition of the matrices with HRA given in Theorem 4.
Figure 5 shows the eigenvalues $\lambda_1^{20}>\lambda_2^{20}>\cdots>\lambda_{20}^{20}$ computed in these two ways for the symmetric Pascal matrix $\Psi_{19}[x_1,\ldots,x_{19};y_1,\ldots,y_{19}]$. It can be observed that the approximations to the largest eigenvalues obtained with both methods are very similar, whereas the approximations to the smallest eigenvalues are quite different. In fact, the eigenvalues of a nonsingular totally positive matrix are positive real numbers, whereas the eigenvalues $\lambda_8^{20},\lambda_9^{20},\ldots,\lambda_{19}^{20},\lambda_{20}^{20}$ of $\Psi_{19}[x_1,\ldots,x_{19};y_1,\ldots,y_{19}]$ obtained with the Matlab eig function are either negative real numbers or even complex numbers with a negative real part.
Then, the relative errors for the minimal eigenvalues of the considered matrices were calculated, taking the minimal eigenvalues provided by Mathematica with a 200-digit precision as exact. Figure 6 shows these relative errors. It can be observed that the approximations of the eigenvalues obtained with TNEigenValues and the new bidiagonal decomposition are much better than those obtained with the standard Matlab eig command.
In addition, the same numerical tests of Example 1 were carried out for the symmetric Pascal matrices considered now. Figure 7 shows the approximations to the singular values $\sigma_1^{20}>\sigma_2^{20}>\cdots>\sigma_{20}^{20}$ of the matrix $\Psi_{19}[x_1,\ldots,x_{19};y_1,\ldots,y_{19}]$. The same conclusion as in the case of the eigenvalues is obtained, with the smaller singular values being more prone to higher rounding errors. Accordingly, Figure 8 shows the relative errors for the minimal singular values of the matrices.
For the case of the linear systems of equations, the 2-norm relative errors can be seen in Figure 9.
Finally, the mean and maximum component-wise relative errors for the computation of the inverses of symmetric Pascal matrices Ψ n [ x 1 , , x n ; y 1 , , y n ] can be seen in Figure 10.

5. Conclusions

Pascal k-eliminated functional matrices and Pascal symmetric functional matrices have been studied previously in the literature. In this paper, we obtained the bidiagonal decomposition of these generalized Pascal matrices. Conditions guaranteeing that these matrices are either totally positive or inverses of totally positive matrices were provided. In those cases, the bidiagonal decomposition can be computed with high relative accuracy and, consequently, many other linear algebra computations with these matrices can also be performed with high relative accuracy, for example, the calculation of their eigenvalues, singular values, inverses and the solution of some associated linear systems. The high relative accuracy of the presented method was illustrated with some numerical examples.

Author Contributions

Conceptualization, J.D., H.O. and J.M.P.; Methodology, J.D., H.O. and J.M.P.; Software, J.D., H.O. and J.M.P.; Writing—original draft, J.D., H.O. and J.M.P.; Writing—review & editing, J.D., H.O. and J.M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported through the Spanish research grants PID2022-138569NB-I00 and RED2022-134176-T (MCIU/AEI), and by Gobierno de Aragón (E41_23R).

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TP   Totally positive
HRA  High relative accuracy

References

  1. Edelman, A.; Strang, G. Pascal Matrices. Am. Math. Mon. 2004, 111, 189–197.
  2. Bayat, M. Generalized Pascal k-eliminated functional matrix with 2n variables. Electron. J. Linear Algebra 2011, 22, 419–429.
  3. Bayat, M.; Faal, H.T. Pascal k-eliminated functional matrix and its property. Linear Algebra Appl. 2000, 308, 65–75.
  4. Lv, X.-G.; Huang, T.-Z.; Ren, Z.-G. A new algorithm for linear systems of the Pascal type. J. Comput. Appl. Math. 2009, 225, 309–315.
  5. Zhang, Z. The linear algebra of the generalized Pascal matrix. Linear Algebra Appl. 1997, 250, 51–60.
  6. Zhang, Z.; Liu, M. An extension of the generalized Pascal matrix and its algebraic properties. Linear Algebra Appl. 1998, 271, 169–177.
  7. Alonso, P.; Delgado, J.; Gallego, R.; Peña, J.M. Conditioning and accurate computations with Pascal matrices. J. Comput. Appl. Math. 2013, 252, 21–26.
  8. Delgado, J.; Orera, H.; Peña, J.M. Accurate bidiagonal decomposition and computation with generalized Pascal matrices. J. Comput. Appl. Math. 2021, 391, 113443.
  9. Demmel, J.; Dumitriu, I.; Holtz, O.; Koev, P. Accurate and efficient expression evaluation and linear algebra. Acta Numer. 2008, 17, 87–145.
  10. Demmel, J.; Gu, M.; Eisenstat, S.; Slapnicar, I.; Veselic, K.; Drmac, Z. Computing the singular value decomposition with high relative accuracy. Linear Algebra Appl. 1999, 299, 21–80.
  11. Ando, T. Totally positive matrices. Linear Algebra Appl. 1987, 90, 165–219.
  12. Gantmacher, F.R.; Krein, M.G. Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems, Revised ed.; AMS Chelsea Publishing: Providence, RI, USA, 2002.
  13. Gasca, M.; Peña, J.M. On factorizations of totally positive matrices. In Total Positivity and Its Applications; Gasca, M., Micchelli, C.A., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; pp. 109–130.
  14. Gasca, M.; Micchelli, C.A. (Eds.) Total Positivity and Its Applications; Mathematics and Its Applications, Volume 359; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1996.
  15. Karlin, S. Total Positivity; Stanford University Press: Stanford, CA, USA, 1968; Volume I.
  16. Pinkus, A. Totally Positive Matrices; Cambridge Tracts in Mathematics, Volume 181; Cambridge University Press: Cambridge, UK, 2010.
  17. Koev, P. Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23.
  18. Koev, P. Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 731–751.
  19. Demmel, J.; Koev, P. The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 2005, 27, 142–152.
  20. Marco, A.; Martínez, J.J. A fast and accurate algorithm for solving Bernstein–Vandermonde linear systems. Linear Algebra Appl. 2007, 422, 616–628.
  21. Marco, A.; Martínez, J.J.; Viaña, R. Accurate bidiagonal decomposition of totally positive h-Bernstein–Vandermonde matrices and applications. Linear Algebra Appl. 2019, 579, 32.
  22. Gasca, M.; Peña, J.M. Total positivity and Neville elimination. Linear Algebra Appl. 1992, 165, 25–44.
  23. Barreras, A.; Peña, J.M. Accurate computations of matrices with bidiagonal decomposition using methods for totally positive matrices. Numer. Linear Algebra Appl. 2013, 20, 413–424.
  24. Marco, A.; Martínez, J.J. Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 2019, 350, 299–308.
  25. Koev, P. Available online: https://sites.google.com/sjsu.edu/plamenkoev/home/software/tntool (accessed on 15 January 2025).
Figure 1. Singular values σ_i^20 for i = 1, 2, …, 20.
Figure 2. Relative errors when computing the singular values σ_{n+1}^{n+1} for n + 1 = 5, 10, …, 60.
Figure 3. ‖X_n − X̂_n‖_2 / ‖X_n‖_2, n + 1 = 5, 10, …, 60, when solving Φ_{n,k}[x_1, …, x_n; y_1, …, y_n] X_n = b_n.
Figure 4. Mean and maximum component-wise relative errors when computing the inverse (Φ_{n,k}[x_1, …, x_n; y_1, …, y_n])^{−1}.
Figure 5. Eigenvalues λ_i^20 for i = 1, 2, …, 20.
Figure 6. Relative errors when computing the eigenvalues λ_{n+1}^{n+1} for n + 1 = 5, 10, …, 60.
Figure 7. Singular values σ_i^20 for i = 1, 2, …, 20.
Figure 8. Relative errors when computing the singular values σ_{n+1}^{n+1} for n + 1 = 5, 10, …, 60.
Figure 9. ‖X_n − X̂_n‖_2 / ‖X_n‖_2, n + 1 = 5, 10, …, 60, when solving Ψ_n[x_1, …, x_n; y_1, …, y_n] X_n = b_n.
Figure 10. Relative errors when computing the inverse (Ψ_n[x_1, …, x_n; y_1, …, y_n])^{−1}.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
