Article

On the Univariate Vector-Valued Rational Interpolation and Recovery Problems

1 School of Mathematics, Jilin University, Changchun 130012, China
2 College of Mathematics Science, Inner Mongolia Minzu University, Tongliao 028000, China
3 School of Mathematics and Statistics, Liaoning University, Shenyang 110000, China
4 Key Laboratory of Symbolic Computation and Knowledge Engineering (Ministry of Education), Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(18), 2896; https://doi.org/10.3390/math12182896
Submission received: 20 August 2024 / Revised: 11 September 2024 / Accepted: 14 September 2024 / Published: 17 September 2024

Abstract: In this paper, we consider a novel vector-valued rational interpolation algorithm and its applications. Compared to the classic vector-valued rational interpolation algorithm, the proposed algorithm relaxes the constraint that the denominators of the components of the interpolation function must be identical. Furthermore, the algorithm can construct the vector-valued interpolation function component-wise with the help of the common divisors among the denominators of the components. Experimental comparisons with the classic vector-valued rational interpolation algorithm show that the proposed algorithm offers a lower construction cost, interpolation functions of lower degree, and higher approximation accuracy.

1. Introduction

Vector-valued rational interpolation, as a classic topic in mathematics, has garnered significant attention due to its extensive applications in various fields, including mechanical vibration, data analysis, automatic control, and image processing [1,2,3,4,5]. In this paper, we primarily focus on the univariate vector-valued rational interpolation problem. The classical vector-valued rational interpolation problem can be stated as follows: Given a dataset
$$\{(x_i, V_i) : x_i \in \mathbb{R},\ V_i = V(x_i) = (V_1(x_i), V_2(x_i), \ldots, V_s(x_i)) = (v_{i,1}, \ldots, v_{i,s}) \in \mathbb{R}^s,\ i = 1, 2, \ldots, n\},$$
where each vector value $V_i$ is associated with a distinct point $x_i$, we construct a vector-valued rational function
$$R(x) = \frac{P(x)}{Q(x)},$$
such that
$$R(x_i) = \frac{P(x_i)}{Q(x_i)} = V_i, \quad i = 1, 2, \ldots, n,$$
where $P(x) = (p_1(x), p_2(x), \ldots, p_s(x))$ is an $s$-dimensional vector of polynomials, and $Q(x)$ is a real algebraic polynomial.
For a classical univariate vector-valued rational interpolation problem, Graves-Morris [6,7] proposed a Thiele-type algorithm based on the Samelson inverse transformation for vectors. They also proved the characteristic theorem and uniqueness of this kind of vector-valued rational interpolation. Subsequently, Graves-Morris and Jenkins [8] presented a Lagrange determinant algorithm for solving the classical vector-valued rational interpolation problems. Levrie and Bultheel [9] introduced a definition of generalized continued fractions and a Thiele n-fraction algorithm designed for the classical vector-valued rational interpolation. Zhu and Zhu [10] further contributed a recursive algorithm with inheritance properties for this type of interpolation.
Among the various methods, the Thiele-type vector-valued rational interpolation algorithm [6] stands out as the most classic and widely used. It is discussed under the assumption that $Q(x)$ divides $\lVert P(x) \rVert^2$. However, examples are known for which the rational interpolation functions constructed with this algorithm do not satisfy this assumption [11]; the assumption therefore limits the applicability of the algorithm and its uniqueness theorem. Furthermore, the algorithm may stall during the computation of inverse differences. In such cases, adjusting points or perturbing vector components becomes necessary for recalculation, which increases the computational complexity.
To address this deficiency, we propose a new vector-valued rational interpolation algorithm. Recognizing that vector-valued rational interpolation can be seen as a combination of multiple scalar rational interpolation problems, we construct a vector-valued rational function of the form
$$R(x) = \left( \frac{p_1(x)}{q_1(x)},\ \frac{p_2(x)}{q_2(x)},\ \ldots,\ \frac{p_s(x)}{q_s(x)} \right) \tag{1}$$
to satisfy the interpolation conditions
$$R(x_i) = V_i, \quad i = 1, 2, \ldots, n,$$
where a non-trivial common divisor may exist between $q_i(x)$ and $q_j(x)$. This modification can expand the applicability of univariate vector-valued rational interpolation.
Based on the Fitzpatrick algorithm for univariate scalar rational interpolation, this paper develops a corresponding vector-valued version that takes full advantage of the common divisors among the denominators. This algorithm not only removes restrictions on the denominators of the interpolation function, but also avoids complicated calculations of inverse differences. When solving problems component-wise, we obtain denominators from the components already computed; these denominators are then used to reduce the degree of subsequent components. This inheritance gives the algorithm significant advantages in solving vector-valued rational function recovery problems and certain applications.
The paper is organized as follows. In Section 2, we provide a detailed introduction to the algorithm established for the univariate vector-valued rational interpolation function in the form of (1). In Section 3, we discuss how to handle the vector-valued rational function recovery problem by introducing termination conditions. Finally, the application to solving positive-dimensional polynomial systems is considered.

2. Univariate Vector-Valued Rational Interpolation

In this section, we propose a univariate vector-valued rational interpolation algorithm based on the Fitzpatrick algorithm. The recursive property of the Fitzpatrick algorithm enables us to make full use of the common factors in the denominators of the already computed interpolation components, thereby reducing the degrees of the interpolation results for subsequent components. Before detailing our algorithm, we provide a brief overview of the Fitzpatrick algorithm; more details can be found in [12].

2.1. Fitzpatrick Algorithm for Univariate Scalar Rational Interpolation

Let $\mathbb{K}$ be a field of characteristic 0, and let $P = \mathbb{K}[x]$ be the univariate polynomial ring over $\mathbb{K}$. Given a dataset
$$\{(x_i, v_i) : x_i \in \mathbb{K},\ v_i \in \mathbb{K},\ i = 1, 2, \ldots, n\},$$
where each value $v_i$ is associated with a distinct point $x_i$, we construct a rational function
$$r(x) = \frac{a(x)}{b(x)},$$
such that
$$r(x_i) = \frac{a(x_i)}{b(x_i)} = v_i, \quad i = 1, 2, \ldots, n,$$
where $a(x), b(x) \in P$ and $b(x_i) \neq 0$ for $1 \le i \le n$. The problem is referred to as the Cauchy-type rational interpolation problem [12].
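For concreteness, the following is a minimal sketch, in Python/SymPy rather than the Maple used later in the paper, of the elementary linear-system route to this problem: the conditions $a(x_i) - v_i\, b(x_i) = 0$ are imposed on the unknown coefficients, and any nontrivial kernel vector is taken. The function name, the degree bounds $l$ and $m$, and the sample data are illustrative choices, not part of the paper; the returned pair is only a weak interpolant (its denominator may vanish at a node), which is one reason the module-based treatment below is preferred.

```python
import sympy as sp

x = sp.symbols('x')

def cauchy_weak_interpolant(nodes, values, l, m):
    """Weak interpolant a(x)/b(x) with deg a <= l, deg b <= m and a(x_i) = v_i * b(x_i)."""
    # Each data point contributes one homogeneous linear condition on the
    # coefficient vector (a_0, ..., a_l, b_0, ..., b_m).
    rows = [[p**j for j in range(l + 1)] + [-v * p**j for j in range(m + 1)]
            for p, v in zip(nodes, values)]
    kernel = sp.Matrix(rows).nullspace()
    if not kernel:
        return None                       # degree bounds too small for this data
    c = kernel[0]                         # any nontrivial kernel vector will do
    a = sum(c[j] * x**j for j in range(l + 1))
    b = sum(c[l + 1 + j] * x**j for j in range(m + 1))
    return sp.cancel(a / b)

# e.g. data of v(x) = 1/(x + 1) at x = 0, 1, 2, 3 with l = 0, m = 1
vals = [sp.Rational(1, p + 1) for p in (0, 1, 2, 3)]
print(cauchy_weak_interpolant([0, 1, 2, 3], vals, 0, 1))   # prints 1/(x + 1)
```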
This problem can be solved with the help of free modules and Gröbner basis methods. Let $P^2$ be a free module with standard basis vectors $e_1 = (1, 0)$ and $e_2 = (0, 1)$. If $m = x^{\alpha} e_i$ for $i = 1, 2$, then $m$ is called a monomial in $P^2$.
Definition 1. 
For the order $\prec_\xi$ on $P^2$, the following holds:
(1) $(x^i, 0) \prec_\xi (x^{i'}, 0)$ if and only if $i < i'$, and $(0, x^j) \prec_\xi (0, x^{j'})$ if and only if $j < j'$;
(2) $(x^i, 0) \prec_\xi (0, x^j)$ if and only if $i \le j + \xi$;
where $\xi$ is a given integer.
Fixing a monomial order $\prec_\xi$ in $P^2$, each $(f, g) \in P^2$ can be expressed as a $\mathbb{K}$-linear combination of monomials:
$$(f, g) = \sum_{i=1}^{t} c_i m_i,$$
where $c_i \in \mathbb{K}$, $c_i \neq 0$, and $m_1 \succ_\xi m_2 \succ_\xi \cdots \succ_\xi m_t$. The leading term of $(f, g)$ is defined as $LT(f, g) = c_1 m_1$. Additionally, the degree of the rational function $f/g$ is defined as $m_\xi(f, g) = \max\{\delta f, \delta g + \xi\}$, where $\delta$ denotes the degree of the polynomial.
Let $I_i = \langle x - x_i \rangle$, $i = 1, 2, \ldots, n$, be the ideals in $P$. If there exist $a(x), b(x) \in \mathbb{K}[x]$ such that
$$a(x) - b(x)\, v_i \equiv 0 \pmod{I_i}, \quad i = 1, \ldots, n,$$
then the pair $(a(x), b(x)) \in P^2$ is called a weak interpolation for the univariate Cauchy-type rational interpolation problem. It is easy to verify that for each positive integer $k$, the subset
$$M_k = \{(a(x), b(x)) \mid a(x) - b(x)\, v_i \equiv 0 \pmod{I_i},\ i = 1, \ldots, k\}$$
is a $P$-submodule. Let $M_0 = P^2$. Then we have $M_0 \supseteq M_1 \supseteq \cdots \supseteq M_n = M$. If a fixed monomial order $\prec_\xi$ in $P^2$ is given, and it is known that the Gröbner basis of $M_0$ is $\{(1, 0), (0, 1)\}$, then the Gröbner basis of $M_k$ for $k = 1, 2, \ldots, n$ can be recursively computed from that of $M_0$ with the Fitzpatrick algorithm (Algorithm 1).
Algorithm 1: Fitzpatrick Algorithm [12].
If the Gröbner basis of $M$ is $\{(f_{n,1}(x), g_{n,1}(x)), (f_{n,2}(x), g_{n,2}(x))\}$, then every pair $(a(x), b(x)) \in M$ can be expressed as
$$(a(x), b(x)) = c_1 (f_{n,1}(x), g_{n,1}(x)) + c_2 (f_{n,2}(x), g_{n,2}(x)),$$
where $c_j \in P$ for $j = 1, 2$. For appropriate $c_j$ such that $b(x_i) \neq 0$ for $i = 1, 2, \ldots, n$,
$$\frac{a(x)}{b(x)} = \frac{c_1 f_{n,1}(x) + c_2 f_{n,2}(x)}{c_1 g_{n,1}(x) + c_2 g_{n,2}(x)}$$
is a general solution to the univariate Cauchy-type rational interpolation problem.
If $g_{n,1}(x_i) \neq 0$ for all $i = 1, 2, \ldots, n$, then we can choose $c_1 = 1$ and $c_2 = 0$. If there exists an index $i$ such that $g_{n,1}(x_i) = 0$, we can select a polynomial with degree equal to $m_\xi(f_{n,2}, g_{n,2}) - m_\xi(f_{n,1}, g_{n,1})$ as $c_1$. Additionally, we can choose an arbitrary $c_2 \in \mathbb{K}$ with the restriction $(c_1 g_{n,1}(x) + c_2 g_{n,2}(x))|_{x_i} \neq 0$ for all $i = 1, 2, \ldots, n$. If $c_1$ and $c_2$ are chosen in this manner, the degree $m_\xi(a(x), b(x))$ can be minimized [12].
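The following is a small illustrative sketch, not the authors' Maple implementation of Algorithm 1, of how the Gröbner basis of $M_k$ can be carried from one interpolation condition to the next, assuming the standard discrepancy-based update used in Fitzpatrick-type algorithms: at each step, the generator of lowest order with nonzero discrepancy is multiplied by $(x - x_k)$, while the discrepancy of the other generator is cancelled against it. All helper names are ours.

```python
import sympy as sp

x = sp.symbols('x')
NEG = -10**9                              # stand-in for deg(0) = -infinity

def m_xi(pair, xi):
    """Order measure m_xi(f, g) = max{deg f, deg g + xi}."""
    f, g = pair
    df = sp.degree(f, x) if f != 0 else NEG
    dg = sp.degree(g, x) if g != 0 else NEG
    return max(df, dg + xi)

def fitzpatrick(nodes, values, xi=0):
    """Return a rational interpolant a/b with a(x_i)/b(x_i) = v_i, or None."""
    basis = [(sp.Integer(1), sp.Integer(0)), (sp.Integer(0), sp.Integer(1))]  # basis of M_0
    for xk, vk in zip(nodes, values):
        # discrepancies of both generators at the new condition a(x_k) - b(x_k)*v_k = 0
        d = [f.subs(x, xk) - g.subs(x, xk) * vk for f, g in basis]
        nz = [j for j in (0, 1) if d[j] != 0]
        if not nz:
            continue                                   # condition already satisfied
        j = min(nz, key=lambda k: m_xi(basis[k], xi))  # pivot: lowest order, nonzero discrepancy
        new_basis = [None, None]
        for k in (0, 1):
            if k == j:                                 # multiply the pivot by (x - x_k)
                new_basis[k] = (sp.expand((x - xk) * basis[k][0]),
                                sp.expand((x - xk) * basis[k][1]))
            else:                                      # cancel the other discrepancy
                q = d[k] / d[j]
                new_basis[k] = (sp.expand(basis[k][0] - q * basis[j][0]),
                                sp.expand(basis[k][1] - q * basis[j][1]))
        basis = new_basis
    # return a generator of low order whose denominator does not vanish at any node
    for f, g in sorted(basis, key=lambda p: m_xi(p, xi)):
        if g != 0 and all(g.subs(x, p) != 0 for p in nodes):
            return sp.cancel(f / g)
    return None

# first component of the data in Table 1
print(fitzpatrick([1, 4, 9, 16, 25], [1, 2, 3, 4, 5]))
```

Running the last line returns one valid rational interpolant of that data; the particular representative depends on the order parameter $\xi$ and the pivoting rule.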

2.2. Fitzpatrick Algorithm for Univariate Vector-Valued Rational Interpolation

In the following, we discuss the univariate vector-valued rational interpolation problem where the denominators of different components share a common divisor, specifically, the vector-valued rational interpolation function in the form of (1).
Firstly, we use classical algorithms, such as the Fitzpatrick algorithm [12], the continued fraction method [13], or methods based on solving linear systems of equations [14,15,16], to compute $\frac{p_1(x)}{q_1(x)}$. Then, the computation of $\frac{p_k(x)}{q_k(x)}$ for $2 \le k \le s$ can be conducted as follows. We first select $q_l(x)$ such that
$$\delta q_l(x) = \max_{1 \le j \le k-1} \{\delta q_j(x)\}. \tag{2}$$
If there are multiple polynomials satisfying (2), we choose the one with the largest index. Assuming that the greatest common divisor of $q_l(x)$ and $q_k(x)$ is $\bar{q}_k(x)$, there exist $s_{k,1}(x)$ and $s_{k,2}(x)$ such that $q_l(x) = \bar{q}_k(x) \cdot s_{k,1}(x)$ and $q_k(x) = \bar{q}_k(x) \cdot s_{k,2}(x)$. Therefore, $\frac{p_k(x)}{q_k(x)}$ can be expressed as
$$\frac{p_k(x)}{q_k(x)} = \frac{s_{k,1}(x)\, p_k(x)}{q_l(x)\, s_{k,2}(x)}.$$
Let $\frac{a_k(x)}{b_k(x)} = \frac{s_{k,1}(x)\, p_k(x)}{s_{k,2}(x)} = \frac{p_k(x)}{q_k(x)} \cdot q_l(x)$. Then, we can establish a new rational interpolation problem:
$$\left.\frac{a_k(x)}{b_k(x)}\right|_{x = x_i} = v_{i,k}\, q_l(x_i), \quad i = 1, 2, \ldots, n. \tag{3}$$
For the interpolation problem (3), we denote its corresponding weak interpolation module by $\bar{M}_k = \{(\tilde{a}(x), \tilde{b}(x)) \mid \tilde{a}(x) - \tilde{b}(x)\, v_{i,k}\, q_l(x_i) \equiv 0 \pmod{I_i},\ i = 1, 2, \ldots, n\}$, where $I_i = \langle x - x_i \rangle$, $i = 1, 2, \ldots, n$. Further, set
$$n_k = \delta q_l(x) + 1 \le n, \qquad h_k(x) = \sum_{i=1}^{n_k} v_{i,k}\, q_l(x_i) \prod_{\substack{t=1 \\ t \neq i}}^{n_k} \frac{x - x_t}{x_i - x_t}, \qquad \omega_k(x) = \prod_{t=1}^{n_k} (x - x_t). \tag{4}$$
It can be easily verified that $\{(h_k(x), 1), (\omega_k(x), 0)\}$ is a Gröbner basis for the module
$$\bar{M}_{k, n_k} = \{(\bar{a}(x), \bar{b}(x)) \mid \bar{a}(x) - \bar{b}(x)\, v_{i,k}\, q_l(x_i) \equiv 0 \pmod{I_i},\ i = 1, \ldots, n_k\},$$
w.r.t. $\prec_{\xi_k}$, where $\xi_k = \delta h_k(x)$. Clearly, $\bar{M}_k \subseteq \bar{M}_{k, n_k}$, so for each $(\tilde{a}_k(x), \tilde{b}_k(x)) \in \bar{M}_k \subseteq \bar{M}_{k, n_k}$, there exist $c_{k,1}(x), c_{k,2}(x) \in \mathbb{K}[x]$ such that
$$(\tilde{a}_k(x), \tilde{b}_k(x)) = c_{k,1}(x) \cdot (h_k(x), 1) + c_{k,2}(x) \cdot (\omega_k(x), 0). \tag{5}$$
If $c_{k,1}(x) \neq 0$, the following expression is meaningful:
$$\frac{\tilde{a}_k(x)}{\tilde{b}_k(x)} = \frac{c_{k,1}(x)\, h_k(x) + c_{k,2}(x)\, \omega_k(x)}{c_{k,1}(x) \cdot 1}.$$
Noting that $(\tilde{a}_k(x), \tilde{b}_k(x)) \in \bar{M}_k$, we also have
$$\frac{c_{k,1}(x_i)\, h_k(x_i) + c_{k,2}(x_i)\, \omega_k(x_i)}{c_{k,1}(x_i)} = \frac{\tilde{a}_k(x_i)}{\tilde{b}_k(x_i)} = v_{i,k}\, q_l(x_i), \quad i = n_k + 1, \ldots, n.$$
Hence, we have
$$\left.\frac{c_{k,2}(x)}{c_{k,1}(x)}\right|_{x = x_i} = u_{i,k} = \frac{v_{i,k}\, q_l(x_i) - h_k(x_i)}{\omega_k(x_i)}, \quad i = n_k + 1, \ldots, n. \tag{6}$$
This represents a simpler rational interpolation problem. We can use the Fitzpatrick algorithm to solve this interpolation problem (6). Assuming that the solution is $\frac{\tilde{c}_{k,2}(x)}{\tilde{c}_{k,1}(x)}$, we substitute $\tilde{c}_{k,2}(x)$ and $\tilde{c}_{k,1}(x)$ into (5) and obtain $\frac{\tilde{a}_k(x)}{\tilde{b}_k(x)}$. Therefore,
$$\frac{p_k(x_i)}{q_k(x_i)} = \frac{\tilde{a}_k(x_i)}{\tilde{b}_k(x_i)} \cdot \frac{1}{q_l(x_i)} = \frac{\tilde{c}_{k,1}(x_i)\, h_k(x_i) + \tilde{c}_{k,2}(x_i)\, \omega_k(x_i)}{\tilde{c}_{k,1}(x_i)\, q_l(x_i)} = v_{i,k}, \quad i = 1, 2, \ldots, n.$$
Thus,
$$\frac{p_k(x)}{q_k(x)} = \frac{\tilde{c}_{k,1}(x)\, h_k(x) + \tilde{c}_{k,2}(x)\, \omega_k(x)}{\tilde{c}_{k,1}(x)\, q_l(x)} \tag{7}$$
is the $k$-th component of the vector-valued rational interpolation function. Further observation shows that, if $\delta q_l(x) > 0$, the degree of the rational function $\frac{\tilde{c}_{k,2}(x)}{\tilde{c}_{k,1}(x)}$ is lower than that of $\frac{p_k(x)}{q_k(x)}$. This property enables us to obtain a higher-degree rational function $\frac{p_k(x)}{q_k(x)}$ by computing the lower-degree rational interpolation function $\frac{\tilde{c}_{k,2}(x)}{\tilde{c}_{k,1}(x)}$. For specific implementation details, see Algorithm 2.
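To make the component-wise step concrete, here is a rough sketch of (3)-(7) in the same illustrative Python/SymPy style, reusing the `fitzpatrick` helper above. The names `scaled_lagrange_data` and `next_component` are our own, and the sketch assumes $\tilde{c}_{k,1}(x_i) \neq 0$ at the nodes, as in the discussion above; it is not the authors' Algorithm 2.

```python
import sympy as sp

x = sp.symbols('x')

def scaled_lagrange_data(nodes, values_k, q_l):
    """The quantities of (4): n_k, the Lagrange interpolant h_k of the scaled
    data v_{i,k}*q_l(x_i) at the first n_k nodes, and the node polynomial omega_k."""
    n_k = int(sp.degree(q_l, x)) + 1                 # n_k = deg q_l + 1 (assumed <= n)
    pts = list(nodes[:n_k])
    scaled = [v * q_l.subs(x, p) for p, v in zip(pts, values_k[:n_k])]
    h_k = sp.expand(sp.interpolate(list(zip(pts, scaled)), x))
    omega_k = sp.expand(sp.prod([x - p for p in pts]))
    return n_k, h_k, omega_k

def next_component(nodes, values_k, q_l):
    """Compute p_k/q_k via (3)-(7), reusing the denominator q_l of an earlier component."""
    n_k, h_k, omega_k = scaled_lagrange_data(nodes, values_k, q_l)
    rest, rest_vals = nodes[n_k:], values_k[n_k:]
    # residual data u_{i,k} of the lower-degree problem (6)
    u = [(v * q_l.subs(x, p) - h_k.subs(x, p)) / omega_k.subs(x, p)
         for p, v in zip(rest, rest_vals)]
    c = fitzpatrick(rest, u)                         # c = c_2/c_1 solves (6)
    if c is None:
        return None
    c2, c1 = sp.fraction(sp.cancel(c))               # numerator c_2, denominator c_1
    # assemble the k-th component by (7); valid when c_1 does not vanish at the nodes
    return sp.cancel((c1 * h_k + c2 * omega_k) / (c1 * q_l))

# second component of Table 1, reusing q_l = x**2 + 85*x + 274 from the first component
print(next_component([1, 4, 9, 16, 25], [1, 8, 27, 64, 125], x**2 + 85*x + 274))
```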
Remark 1. 
Due to the uncertainty of $\frac{p_k(x)}{q_k(x)}$, selecting $q_l(x)$ for maximum degree reduction is challenging. However, once the highest-degree $q_l$ is chosen, the degree of the greatest common divisor between $q_k$ and $q_l$ may still be higher than that between $q_k$ and the other denominators. From this perspective, this selection may not be optimal, but it remains reasonable.
Next, we compare our algorithm with the classical vector-valued rational interpolation algorithm in two respects: the degree of the interpolation result and the approximation quality (see Examples 1 and 2). Since the majority of vector-valued rational interpolation algorithms are built upon Thiele's algorithm, i.e., the univariate Thiele vector-valued rational interpolation algorithm [6], we compare our approach with Thiele's algorithm. To evaluate the accuracy of the interpolation, we introduce the following error metric. Let the interpolation function be $R(x) = (r_1(x), r_2(x), \ldots, r_s(x))$ and the original function be $V(x) = (v_1(x), v_2(x), \ldots, v_s(x))$. Then the error for the $k$-th component is defined as $e_k(x) = r_k(x) - v_k(x)$, and the interpolation error vector can be expressed as
$$E(x) = (e_1(x), e_2(x), \ldots, e_s(x)).$$
In order to quantify the error further, we define the error bound as follows:
$$\mathrm{ERR} = \max_{1 \le k \le s} E_k, \tag{8}$$
where $E_k = \max_{a \le x \le b} |e_k(x)|$, for $k = 1, 2, \ldots, s$, represents the error bound for the $k$-th component.
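The paper evaluates the maxima in (8) with Maple built-in functions (see Remark 2). As a rough numerical stand-in, one can estimate the bound by dense sampling on $[a, b]$, assuming each $r_k$ and $v_k$ is available as a vectorized callable (e.g., produced by `sympy.lambdify`); the function below is an illustrative sketch, not the paper's procedure.

```python
import numpy as np

def err_bound(r_funcs, v_funcs, a, b, m=2001):
    """Estimate ERR of (8): the largest component-wise max |r_k(x) - v_k(x)| on [a, b]."""
    xs = np.linspace(a, b, m)
    return max(float(np.max(np.abs(r(xs) - v(xs)))) for r, v in zip(r_funcs, v_funcs))
```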
Algorithm 2: Fitzpatrick Algorithm for univariate vector-valued rational interpolation.
Example 1. 
Given the vector function $\left(\sqrt{x},\ \sqrt{x^3}\right)$ and its data at the points $x_i$ as shown in Table 1, we seek a rational interpolation function that satisfies the data in the table.
Applying Algorithm 2, we then obtain the interpolation result as follows:
$$\left( \frac{15(x^2 + 15x + 8)}{x^2 + 85x + 274},\ \frac{15x(x^2 + 15x + 8)}{x^2 + 85x + 274} \right).$$
In contrast, the Thiele’s algorithm gives the following interpolation result:
$$\left( \frac{-15(352873x^4 - 16905050x^3 + 34618761x^2 - 9605868280x - 8404972384)}{40877x^4 - 10728710x^3 + 514391421x^2 + 25851333460x + 243536574152},\ \frac{-15(2502965x^4 - 326370318x^3 - 6908154195x^2 - 15842123732x + 5081371200)}{40877x^4 - 10728710x^3 + 514391421x^2 + 25851333460x + 243536574152} \right).$$
By (8), the error bound for Algorithm 2 is calculated as $\mathrm{ERR}_1 = 0.3044778370 \times 10^{-1}$, and the error bound for Thiele's algorithm is $\mathrm{ERR}_2 = 0.7349562562 \times 10^{-1}$. Furthermore, we present a graphical analysis contrasting the error functions of the results produced by both algorithms for each individual component, with detailed visualizations provided in Figure 1. Obviously, for this problem, Algorithm 2 outperforms the univariate Thiele's algorithm in both solution degree and computational accuracy.
Table 1. Vector-valued rational interpolation data.

| $x_i$ | $x_1 = 1$ | $x_2 = 4$ | $x_3 = 9$ | $x_4 = 16$ | $x_5 = 25$ |
|---|---|---|---|---|---|
| $(v_{i,1}, v_{i,2})$ | $(1, 1)$ | $(2, 8)$ | $(3, 27)$ | $(4, 64)$ | $(5, 125)$ |
Figure 1. Comparison of interpolation result errors between the two algorithms.
Example 2. 
For the vector functions $V_i(x)$ listed in Table 2, we sample the points at the provided knot sequences (denoted as $a{:}h{:}b$, which means a row vector starting from $a$ and ending at $b$ with step size $h$). Subsequently, we apply both Algorithm 2 and the univariate Thiele's algorithm to calculate the interpolation functions and compare their approximation performance.
Using (8), we determine the error bounds with respect to the results of this problem for both Algorithm 2 and the univariate Thiele’s algorithm. These error bounds are presented in the 2nd and 3rd columns of Table 3, while the degrees of the interpolation functions are listed in the 4th and 5th columns. The data indicate that, in terms of both interpolation errors and function degrees, Algorithm 2 typically exhibits superior performance compared to the univariate Thiele’s algorithm for the current problem.
Table 2. Original functions and points.

| $i$ | $V_i(x)$ | Knots |
|---|---|---|
| 1 | $\left(\dfrac{\ln(x)}{x},\ \dfrac{e^x}{x}\right)$ | 0.4:0.4:2.8 |
| 2 | $\left(\dfrac{\sin(x)}{(x+1)^2},\ \dfrac{\cos(x)}{x(x+1)}\right)$ | 0.2:0.2:1.4 |
| 3 | $\left(\dfrac{x-1}{(x+1)^3},\ \dfrac{1}{x(x+1)^2},\ \dfrac{x-1}{x(x+1)}\right)$ | 2.0:1.0:8.0 |
| 4 | $\left(\dfrac{x}{\sin(x)},\ \dfrac{\cos(x)}{x},\ \tan(x)\right)$ | 0.2:0.2:1.4 |
| 5 | $\left(\ln(x),\ \tan(x),\ e^x,\ \cos(x)\right)$ | 0.2:0.2:1.4 |
| 6 | $\left(\dfrac{x}{(x+1)^2},\ \dfrac{x-1}{x^2(x+1)},\ \dfrac{1}{x^2+1},\ \dfrac{x-1}{x(x+1)}\right)$ | 3.0:1.0:9.0 |
| 7 | $\left(\dfrac{\ln(x)}{x+1},\ \dfrac{x}{x+1},\ \sin(x),\ x^2+x^3,\ \cos(x)\right)$ | 0.2:0.2:1.4 |
| 8 | $\left(\dfrac{\ln(x-1)}{x^2+1},\ e^x,\ x^2+x^3,\ \ln(x),\ \dfrac{x-1}{x^2+1}\right)$ | 2.0:1.0:8.0 |
| 9 | $\left(x^3+x^2+x,\ \sin(x),\ \dfrac{x}{x+1},\ \cos(x),\ x^2+x^3,\ \dfrac{e^x}{x+1}\right)$ | 0.2:0.2:1.4 |
| 10 | $\left(\dfrac{x}{x^2+x},\ \dfrac{x}{x^2-1},\ x^3+1,\ \dfrac{x}{(x+1)^2},\ \dfrac{x-1}{x^2+x},\ x^2+x-2\right)$ | 2.0:1.0:8.0 |
Remark 2. 
The computation of $\max_{a \le x \le b} |r_k(x) - v_k(x)|$ is performed with built-in functions in Maple.
Example 3. 
Given
$$v_k(x) = \frac{x^{10(k-1)} + 1}{x^{10(k-1)}(x^{10} + 1)}, \quad k = 1, 2, \ldots, 5,$$
let $V_s(x) = (v_1(x), \ldots, v_s(x))$, for $s = 2, 3, 4$, and $5$, be a set of vector-valued rational functions. We take $n = 10, 30, 50$, and $70$, respectively. Subsequently, we apply Algorithm 2 and the univariate Thiele's algorithm to perform the calculations and record the respective runtimes (in seconds) in detail.
For various values of n, the runtime of Algorithm 2 is presented in the 2nd, 4th, 6th, and 8th columns of Table 4, while the runtime of Thiele’s algorithm is shown in the 3rd, 5th, 7th, and 9th columns. It is evident from the data that, for this problem, for the same values of s and n, Algorithm 2 exhibits significantly less runtime than Thiele’s algorithm. Notably, as the number of sampling points, n, increases, the disparity in runtime between the two algorithms becomes increasingly pronounced.
Table 3. Comparison of degrees and errors between two algorithm results.

| Function | ERR (Algorithm 2) | ERR (Thiele) | Degree (Algorithm 2) | Degree (Thiele) |
|---|---|---|---|---|
| $V_1(x)$ | $4.583114840 \times 10^{-4}$ | $9.696897239 \times 10^{-3}$ | $[5/4]$ | $[6/6]$ |
| $V_2(x)$ | $9.327249980 \times 10^{-5}$ | $1.280059824 \times 10^{-4}$ | $[5/4]$ | $[6/6]$ |
| $V_3(x)$ | $5.492107736 \times 10^{-6}$ | $1.238846243 \times 10^{-5}$ | $[6/4]$ | $[6/6]$ |
| $V_4(x)$ | $1.804812124 \times 10^{-4}$ | $3.450842263 \times 10^{-3}$ | $[6/4]$ | $[6/6]$ |
| $V_5(x)$ | $5.785323337 \times 10^{-4}$ | $6.187137002 \times 10^{-3}$ | $[6/4]$ | $[6/6]$ |
| $V_6(x)$ | $5.620278566 \times 10^{-7}$ | $4.105675605 \times 10^{-6}$ | $[6/4]$ | $[6/6]$ |
| $V_7(x)$ | $1.487813369 \times 10^{-5}$ | $2.560689242 \times 10^{-4}$ | $[6/4]$ | $[6/6]$ |
| $V_8(x)$ | $2.229500738 \times 10^{-4}$ | $1.057572358 \times 10^{-3}$ | $[6/4]$ | $[6/6]$ |
| $V_9(x)$ | $6.480739393 \times 10^{-5}$ | $1.951852363 \times 10^{-4}$ | $[6/4]$ | $[6/6]$ |
| $V_{10}(x)$ | $4.813018968 \times 10^{-5}$ | $3.718845805 \times 10^{-4}$ | $[6/4]$ | $[6/6]$ |
Table 4. Comparison of runtime between two algorithms.

| $s$ | $n=10$, Alg. 2 | $n=10$, Thiele | $n=30$, Alg. 2 | $n=30$, Thiele | $n=50$, Alg. 2 | $n=50$, Thiele | $n=70$, Alg. 2 | $n=70$, Thiele |
|---|---|---|---|---|---|---|---|---|
| 2 | 0.015000 | 0.016000 | 0.062000 | 1.265000 | 0.328000 | 13.750000 | 1.578000 | 95.891000 |
| 3 | 0.016000 | 0.032000 | 0.141000 | 2.422000 | 1.391000 | 28.281000 | 10.640000 | 196.828000 |
| 4 | 0.031000 | 0.047000 | 0.250000 | 4.078000 | 4.109000 | 51.172000 | 40.469000 | 318.484000 |
| 5 | 0.032000 | 0.047000 | 0.563000 | 4.968000 | 7.938000 | 70.453000 | 108.813000 | 432.188000 |
Remark 3. 
All computations in this paper are performed on a machine with a 64-bit 1.8 GHz Intel Core i7 processor, 8 GB of RAM, the Windows 10 Enterprise operating system, and Maple 2016.

3. Univariate Vector-Valued Rational Recovery

In this section, we discuss an application of univariate vector-valued rational interpolation, specifically focusing on the problem of recovering univariate vector-valued rational functions. Let $V(x) = (v_1(x), v_2(x), \ldots, v_s(x))$ be a univariate vector-valued rational function provided by a black box $B: \mathbb{R} \to \mathbb{R}^s$ with an unknown specific form. Once some $x_i \in \mathbb{R}$ is input, the black box outputs the function value $V(x_i) \in \mathbb{R}^s$. Suppose the black box function gives sufficient data
$$\{(x_i, V_i) : x_i \in \mathbb{R};\ V_i = V(x_i) = (v_1(x_i), v_2(x_i), \ldots, v_s(x_i)) = (v_{i,1}, \ldots, v_{i,s}) \in \mathbb{R}^s;\ i = 1, 2, \ldots, N\},$$
we aim to reconstruct a vector-valued rational function
$$R(x) = \left( \frac{p_1(x)}{q_1(x)},\ \frac{p_2(x)}{q_2(x)},\ \ldots,\ \frac{p_s(x)}{q_s(x)} \right),$$
such that
$$R(x_i) = V(x_i), \quad i = 1, 2, \ldots, N,$$
where for all $1 \le i \le s$, $\delta p_i \le l$, $\delta q_i \le m$, and $l + m < N$. In this case, we consider $R(x)$ to be the function obtained by recovering $V(x)$.
A common approach to solving this problem involves two procedures: applying a recursive vector-valued rational interpolation algorithm to compute $R_k(x)$ for $k = 1, 2, \ldots, N$, and verifying whether it equals $V(x)$ at all data points. However, this approach suffers from computational redundancy.
To reduce computational redundancy, we propose a novel strategy to deal with the recovery of vector-valued rational functions. Specifically, we sample the black-box function $V(x)$ at two distinct sets of points: a computation set $J_1 = \{x_i : i = 1, 2, \ldots, t\}$, where $t$ is adjustable, and a validation set $J_2 = \{\tilde{x}_1, \tilde{x}_2\}$, where $\tilde{x}_1$ and $\tilde{x}_2$ are large prime numbers. On $J_1$, we apply a recursive vector-valued rational interpolation algorithm to compute the $k$-th component $\frac{p_{t,k}(x)}{q_{t,k}(x)}$ of the interpolation function. Subsequently, on $J_2$, we verify whether $\frac{p_{t,k}(x)}{q_{t,k}(x)}$ equals $v_k(x)$. If $\frac{p_{t,k}(\tilde{x}_i)}{q_{t,k}(\tilde{x}_i)} = v_k(\tilde{x}_i)$ holds for $i = 1, 2$, we consider the $k$-th component successfully recovered and proceed to compute the $(k+1)$-th component with the same method. Otherwise, we increase the number of points and continue to compute $\frac{p_{t+1,k}(x)}{q_{t+1,k}(x)}$ based on $\frac{p_{t,k}(x)}{q_{t,k}(x)}$. As our algorithm processes each component individually, we can modify Algorithm 2 accordingly to implement this procedure, which yields Algorithm 3, as follows.
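Before stating Algorithm 3, here is a rough, non-incremental sketch of this recover-and-validate loop for a single component; unlike Algorithm 3, it re-interpolates from scratch at each step rather than extending the previous result. The names `black_box_k`, `J1`, `J2`, and the starting size `t0` are illustrative assumptions, and `fitzpatrick` is the sketch from Section 2.1.

```python
import sympy as sp

x = sp.symbols('x')

def recover_component(black_box_k, J1, J2, t0=3):
    """Recover one component: interpolate on a growing prefix of the computation
    set J1 and accept once the result matches the black box on the validation set J2."""
    probes = [(p, black_box_k(p)) for p in J2]
    vals = [black_box_k(p) for p in J1]
    for t in range(t0, len(J1) + 1):
        r = fitzpatrick(J1[:t], vals[:t])
        if r is not None and all(sp.simplify(r.subs(x, p) - v) == 0 for p, v in probes):
            return r                     # validated at both points of J2: recovered
    return None                          # J1 exhausted; more sample points are needed

# e.g. the first component of Example 4 below: v_1(x) = (x + 1)/(2*x**2 + 2)
v1 = lambda p: sp.Rational(p + 1, 2 * p**2 + 2)
print(recover_component(v1, list(range(1, 21)), [7717, 7907]))
```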
Algorithm 3: Fitzpatrick Algorithm for univariate vector-valued rational recovery.
Next, we will select several sets of black box functions to test Algorithm 3. Additionally, we will compare its performance with a vector-valued rational recovery algorithm, which is derived from Thiele’s algorithm together with the termination conditions (referred to as the Thiele-type algorithm).
Example 4. 
Let the black box rational function be $V(x) = \left( \dfrac{x+1}{2x^2+2},\ \dfrac{x-1}{(x+1)(x^2+1)} \right)$. We perform rational recovery with Algorithm 3.
(1) Choose two large prime numbers at random. Let $\tilde{x}_1 = 7717$, $\tilde{x}_2 = 7907$, and obtain the function values
$$V(\tilde{x}_1) = \left( \frac{3859}{59552090},\ \frac{1929}{114905757655} \right), \qquad V(\tilde{x}_2) = \left( \frac{1977}{31260325},\ \frac{3953}{247206650100} \right).$$
(2) Select points $x_t = t$, $t = 1, 2, \ldots, 20$, and input the points $x_t$ sequentially into Algorithm 2 for interpolation. For $t = 5$, we have $\frac{p_{5,1}(x)}{q_{5,1}(x)} = \frac{1}{2} \cdot \frac{x+1}{x^2+1}$. One can easily check that $\frac{p_{5,1}(\tilde{x}_1)}{q_{5,1}(\tilde{x}_1)} = \frac{3859}{59552090} = \tilde{v}_{1,1}$ and $\frac{p_{5,1}(\tilde{x}_2)}{q_{5,1}(\tilde{x}_2)} = \frac{1977}{31260325} = \tilde{v}_{2,1}$; then $\frac{p_1(x)}{q_1(x)} = \frac{p_{5,1}(x)}{q_{5,1}(x)} = \frac{1}{2} \cdot \frac{x+1}{x^2+1}$ (the data used are shown in Table 5).
(3) Let $q(x) = 2x^2 + 2$; then we obtain $n_2 = 3$, $\omega_2(x) = (x-1)(x-2)(x-3)$, and $h_2(x) = -\frac{1}{6}x^2 + \frac{7}{6}x - 1$.
For $t = 4, 5, \ldots, 20$, input the points $x_t$ sequentially, calculate $u_{t,2}$, and interpolate $\frac{\tilde{c}_{t,2,2}(x)}{\tilde{c}_{t,2,1}(x)}$ with Algorithm 2; then compute $\frac{p_{t,2}(x)}{q_{t,2}(x)}$. For $t = 6$, we have $\frac{\tilde{c}_{6,2,2}(x)}{\tilde{c}_{6,2,1}(x)} = \frac{1}{6(x+1)}$, and then compute $\frac{p_{6,2}(x)}{q_{6,2}(x)} = \frac{x-1}{(x+1)(x^2+1)}$, which satisfies $\frac{p_{6,2}(\tilde{x}_1)}{q_{6,2}(\tilde{x}_1)} = \frac{1929}{114905757655} = \tilde{v}_{1,2}$ and $\frac{p_{6,2}(\tilde{x}_2)}{q_{6,2}(\tilde{x}_2)} = \frac{3953}{247206650100} = \tilde{v}_{2,2}$; thus $\frac{p_2(x)}{q_2(x)} = \frac{p_{6,2}(x)}{q_{6,2}(x)} = \frac{x-1}{(x+1)(x^2+1)}$ (the data used are shown in Table 5).
Finally, the rational recovery result is
$$R(x) = \left( \frac{x+1}{2x^2+2},\ \frac{x-1}{(x+1)(x^2+1)} \right).$$
Table 5. Vector-valued rational restoration data.

| $x_i$ | $x_1 = 1$ | $x_2 = 2$ | $x_3 = 3$ | $x_4 = 4$ | $x_5 = 5$ | $x_6 = 6$ |
|---|---|---|---|---|---|---|
| $V(x_i)$ | $\left(\frac{1}{2},\ 0\right)$ | $\left(\frac{3}{10},\ \frac{1}{15}\right)$ | $\left(\frac{1}{5},\ \frac{1}{20}\right)$ | $\left(\frac{5}{34},\ \frac{3}{85}\right)$ | $\left(\frac{3}{26},\ \frac{1}{39}\right)$ | $\left(\frac{7}{74},\ \frac{5}{259}\right)$ |
Example 5. 
Given
$$v_k(x) = \frac{x^{k+1} + 1}{(x^k + 1)(x^5 + x + 1)}, \quad k = 0, 1, 2, \ldots, 6,$$
let $V_s(x) = (v_0(x), \ldots, v_s(x))$, for $s = 1, 2, \ldots, 6$, be a set of black box vector-valued rational functions.
We compute the recovery of the rational functions using both Algorithm 3 and the Thiele-type algorithm. First, two large prime numbers, $\tilde{x}_1 = 3167$ and $\tilde{x}_2 = 3319$, are selected and their corresponding function values are obtained. Additionally, points $x_i = i$ for $i = 1, 2, \ldots, 50$ are selected and input into the algorithms. Table 6 lists the number of points and the runtime consumed by Algorithm 3 in the 2nd and 4th columns, respectively; the corresponding information for the Thiele-type algorithm is presented in the 3rd and 5th columns. As the vector dimension increases, the experimental results show that, for this problem, Algorithm 3 exhibits significant advantages over the Thiele-type algorithm in terms of both runtime and the number of points required.
If there are no non-trivial common divisors among the denominators of the components, such as
$$\left( \frac{x+1}{x^2+1},\ \frac{x^3+x}{x^4+3},\ x^2+1,\ \frac{2}{x^6+2x} \right),$$
Algorithm 3 can still successfully recover these functions. This fully demonstrates the versatility and effectiveness of Algorithm 3, indicating that it is not only applicable to specific cases where the denominators of vector-valued interpolation functions have non-trivial common divisors, but also widely applicable to general vector-valued rational function recovery problems.
Next, we consider the application of the vector-valued rational recovery algorithm in solving positive-dimensional polynomial systems. Let $I$ be a given positive-dimensional ideal, and suppose that a maximally independent set modulo $I$ is $U = (x_1, \ldots, x_d)$. Let $V = X \setminus U = (x_{d+1}, \ldots, x_n)$. Denote the extension of $I$ over $\mathbb{K}(U)$ by $I^e$; then $I^e \subseteq \mathbb{K}(U)[V]$ is a zero-dimensional ideal in $\mathbb{K}(U)[V]$. According to the reference [17], solving $I^e$ is the most critical step in solving positive-dimensional polynomial systems. Therefore, we focus on the solution process of $I^e$ here. Let $t$ be the separating element of the extension ideal $I^e \subseteq \mathbb{K}(U)[V]$. According to the RUR (Rational Univariate Representation) method for polynomial systems [18], the solutions of $I^e$ can be represented by a set of univariate polynomials over $\mathbb{K}(U)$ in terms of $T$:
$$\{\chi_t(T),\ G_t(1, T),\ G_t(x_{d+1}, T),\ \ldots,\ G_t(x_n, T)\}.$$
That is, the solutions of $I$ can be expressed as
$$\chi_t(T) = 0,\ G_t(1, T) \neq 0,\quad x_1 = x_1,\ \ldots,\ x_d = x_d,\quad x_{d+1} = \frac{G_t(x_{d+1}, T)}{G_t(1, T)},\ \ldots,\ x_n = \frac{G_t(x_n, T)}{G_t(1, T)}.$$
The coefficients of these univariate polynomials are rational functions over $\mathbb{K}(U)$, which need to be computed. To this end, let $R(U)$ be the vector composed of all coefficients of these polynomials:
$$R(U) = \left( \frac{p_1(U)}{q_1(U)},\ \frac{p_2(U)}{q_2(U)},\ \ldots,\ \frac{p_s(U)}{q_s(U)} \right).$$
Direct symbolic computation of these rational functions is computationally expensive. To compute these rational functions, we select appropriate points $U_i \in \mathbb{K}^d$ and evaluate the generators of $I$ at these points to obtain zero-dimensional ideals $I_{U_i}$. Under certain conditions, $t$ remains the separating element of the zero-dimensional ideal $I_{U_i}$. Then, the RUR of $I_{U_i}$ is calculated as follows:
$$\{\chi_t(T)|_{U_i},\ g_{t,1}^{U_i}(T),\ g_{t,x_{d+1}}^{U_i}(T),\ \ldots,\ g_{t,x_n}^{U_i}(T)\}.$$
These are univariate polynomials over $\mathbb{K}$ in terms of $T$. According to reference [17], under appropriate conditions, $g_{t,h}^{U_i}(T) = G_t(h, T)|_{U = U_i}$, and if the vector of their coefficients is
$$V_i = (v_{i,1}, \ldots, v_{i,s}),$$
then
$$v_{i,j} = \frac{p_j(U_i)}{q_j(U_i)}, \quad j = 1, \ldots, s.$$
This is equivalent to obtaining the values of the rational functions $\frac{p_j(U)}{q_j(U)}$, $j = 1, 2, \ldots, s$, at the points $U_i$. By applying the previous rational function recovery method, we can obtain $\frac{p_j(U)}{q_j(U)}$, $j = 1, 2, \ldots, s$.
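Schematically, for the univariate-parameter case $d = 1$ (as in the examples below), the coefficient vector $R(U)$ can then be recovered component by component from point evaluations. In the sketch below, `rur_coefficients` stands for an assumed black-box routine (e.g., a Gröbner-basis/RUR computation in a computer algebra system) that returns the vector $V_i$ at a given point $U_i$, and `recover_component` is the illustrative helper from earlier in this section; this is not the authors' implementation.

```python
def recover_rur_vector(rur_coefficients, s, J1, J2):
    """Recover all s coefficient functions p_j(U)/q_j(U) from point evaluations.
    `rur_coefficients(U_i)` is assumed to return the coefficient vector V_i of the
    RUR of I_{U_i}; `recover_component` is the sketch defined above."""
    return [recover_component(lambda p, j=j: rur_coefficients(p)[j], J1, J2)
            for j in range(s)]
```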
In the following examples, we incorporate the vector-valued rational function recovery method into the algorithm described in [17] in order to obtain solutions to polynomial systems.
Example 6. 
Given
$$I = \langle x_1 + x_2 + x_3 + x_4,\ x_1x_2 + x_2x_3 + x_1x_4 + x_3x_4,\ x_1x_2x_3 + x_1x_2x_4 + x_1x_3x_4 + x_2x_3x_4,\ x_1x_2x_3x_4 - 1 \rangle,$$
where $I$ is a positive-dimensional ideal, and the maximally independent set modulo $I$ is $U = \{x_1\}$; thus $V = \{x_2, x_3, x_4\}$. We first arbitrarily select two large prime numbers, $\tilde{U}_1 = 7433$ and $\tilde{U}_2 = 8011$, to obtain function values. We then choose points $U_i = i$ for $i = 1, 2, \ldots, 10$, and substitute $x_1 = U_i$ into the ideal $I$ to obtain $I_{U_i}$. Taking $t = x_2$ as the separating element, we obtain the RUR of $I_{U_i}$ as follows:
$$\chi_t^{U_i}(T) = T^2 + v_{i,1}, \quad g_{t,1}^{U_i}(T) = 2T, \quad g_{t,x_2}^{U_i}(T) = v_{i,2}, \quad g_{t,x_3}^{U_i}(T) = v_{i,3}\,T, \quad g_{t,x_4}^{U_i}(T) = v_{i,4},$$
and then extract the coefficients to form the vector
$$V_i = (v_{i,1},\ v_{i,2},\ v_{i,3},\ v_{i,4}),$$
which represents the function values at the point $U_i$. The relevant data at the points $U_i$ are then input into the interpolation algorithm, and $\tilde{U}_1$ and $\tilde{U}_2$ are used for validation; the data used for recovery are presented in Table 7.
The recovered result is
$$V(x_1) = \left( -\frac{1}{x_1^2},\ \frac{2}{x_1^2},\ -2x_1,\ -\frac{2}{x_1^2} \right).$$
This solution agrees with the one obtained using a different method in reference [19].
Table 7. Coefficient data of the RUR of $I_{U_i}$.

| $i$ | $U_i$ | $V_i$ |
|---|---|---|
| 1 | 3 | $\left(-\frac{1}{9},\ \frac{2}{9},\ -6,\ -\frac{2}{9}\right)$ |
| 2 | 5 | $\left(-\frac{1}{25},\ \frac{2}{25},\ -10,\ -\frac{2}{25}\right)$ |
| 3 | 7 | $\left(-\frac{1}{49},\ \frac{2}{49},\ -14,\ -\frac{2}{49}\right)$ |
| 4 | 9 | $\left(-\frac{1}{81},\ \frac{2}{81},\ -18,\ -\frac{2}{81}\right)$ |
| 5 | 11 | $\left(-\frac{1}{121},\ \frac{2}{121},\ -22,\ -\frac{2}{121}\right)$ |
SHEPWM (Selective Harmonic Elimination Pulse Width Modulation) stands as a vital technology in medium- and high-voltage, high-power inverters, characterized not only by low switching losses but also by its ability to achieve linear control of the fundamental voltage to optimize the quality of the output voltage. Harmonic elimination theory allows us to derive a set of trigonometric algebraic equations with N switching angles, known as the SHEPWM equations. In 2004, Chiasson et al. [20] were the first to attempt transforming such trigonometric algebraic equations into a polynomial system for solution. In 2017, Shang et al. [21] further simplified these polynomial systems and proposed a method for solving them with the rational representation theory of positive-dimensional ideals.
Example 7. 
The SHEPWM problem with N = 3 can be solved by finding the zeros of the ideal
I : = 1 5 s 1 5 s 1 3 s 2 s 1 3 + s 1 2 s 3 + s 1 s 2 2 + 3 s 1 s 2 s 2 s 3 + s 1 3 s 3 , 1 7 s 1 7 s 1 5 s 2 s 1 5 + s 1 4 s 3 + 2 s 1 3 s 2 2 + 5 s 1 3 s 2 3 s 1 2 s 2 s 3 s 1 s 2 3 + 2 s 1 3 5 s 1 2 s 3 5 s 1 s 2 2 + s 1 s 3 2 + s 2 2 s 3 6 s 1 s 2 + 5 s 2 s 3 s 1 + 6 s 3 ,
where $I$ is a positive-dimensional ideal, the maximally independent set modulo $I$ is $U = \{s_1\}$, and $V = \{s_2, s_3\}$. We first select two large prime numbers, $\tilde{U}_1 = 7001$ and $\tilde{U}_2 = 8009$, to obtain function values. Choosing $U_i = i$ for $i = 1, \ldots, 30$, and taking $t = s_2$ as the separating element, we obtain the RUR of $I_{U_i}$ as follows:
$$\{\chi_t^{U_i}(T) = T^3 + v_{i,1}T^2 + v_{i,2}T + v_{i,3},\ g_{t,1}^{U_i}(T) = 3T^2 + v_{i,4}T + v_{i,5},\ g_{t,s_2}^{U_i}(T) = v_{i,6}T^2 + v_{i,7}T + v_{i,8},\ g_{t,s_3}^{U_i}(T) = v_{i,9}T^2 + v_{i,10}T + v_{i,11}\};$$
all the coefficients form the vector
$$V_i = (v_{i,1},\ v_{i,2},\ v_{i,3},\ v_{i,4},\ v_{i,5},\ v_{i,6},\ v_{i,7},\ v_{i,8},\ v_{i,9},\ v_{i,10},\ v_{i,11}),$$
which represents the function values at the point $U_i$. The relevant data at the points $U_i$ are then input into the interpolation algorithm, and $\tilde{U}_1$ and $\tilde{U}_2$ are used for validation. The data used for recovery are provided in Appendix A (Table A1). The recovered result is
V ( U ) = ( 9 s 1 6 91 s 1 4 + 280 s 1 2 245 7 ( s 1 4 5 s 1 2 + 5 ) , ( s 1 3 3 ) ( 4 s 1 6 49 s 1 4 + 175 s 1 2 175 ) 7 ( s 1 4 5 s 1 2 + 5 ) , ( 3 s 1 10 60 s 1 8 + 440 s 1 6 1505 s 1 4 + 2450 s 1 2 1575 ) 35 ( s 1 4 5 s 1 2 + 5 ) , 2 ( 9 s 1 6 91 s 1 4 + 280 s 1 2 245 ) 7 ( s 1 4 5 s 1 2 + 5 ) , 4 s 1 8 61 s 1 6 + 322 s 1 4 700 s 1 2 + 525 7 ( s 1 4 5 s 1 2 + 5 ) , 9 s 1 6 91 s 1 4 + 280 s 1 2 245 7 ( s 1 4 5 s 1 2 + 5 ) , 2 ( 4 s 1 8 61 s 1 6 + 322 s 1 4 700 s 1 2 + 525 ) 7 ( s 1 4 5 s 1 2 + 5 ) , 9 s 1 10 180 s 1 8 + 1320 s 1 6 4515 s 1 4 + 7350 s 1 2 4725 35 ( s 1 4 5 s 1 2 + 5 ) , s 1 ( 2 s 1 6 35 s 1 4 + 140 s 1 2 140 ) 7 ( s 1 4 5 s 1 2 + 5 ) , 3 s 1 ( 3 s 1 8 65 s 1 6 + 420 s 1 4 1050 s 1 2 + 875 ) 35 ( s 1 4 5 s 1 2 + 5 ) , s 1 ( 2 s 1 10 51 s 1 8 + 440 s 1 6 1715 s 1 4 + 3150 s 1 2 2275 ) 35 ( s 1 4 5 s 1 2 + 5 ) ) .
The solution obtained from this result agrees with the one found using a different method in reference [21].

4. Conclusions

This paper deals with the problems of univariate vector-valued rational interpolation and recovery with common divisors in the denominators. We propose a univariate vector-valued rational interpolation algorithm that takes advantage of the common-divisor property. Specifically, it leverages the denominators obtained from previously interpolated components to reduce the degree of the interpolation for subsequent ones. By incorporating termination conditions into this algorithm, a vector-valued rational recovery algorithm can be derived. Numerical experiments demonstrate the reliable performance of the algorithm in interpolation and recovery, offering advantages over traditional univariate Thiele-type algorithms. Additionally, we explore its application in solving positive-dimensional polynomial systems, confirming its feasibility in practical scenarios. However, for bivariate or multivariate vector-valued rational interpolation problems, the construction of the algorithm encounters greater challenges due to the complexity of the interpolation points and function definitions. Future work aims to further investigate solutions for bivariate vector-valued rational interpolation and recovery, specifically targeting these more complex scenarios. In our research, we also note from [22] that interpolation calculations can be applied to the solution of differential equations [23]; consequently, we plan to use our method to solve differential equations in the future. We further observe that [24] utilizes the idea of mapping in its solution process, and the method we propose in this paper for solving positive-dimensional ideal problems actually establishes a homomorphic image mapping between solutions in zero-dimensional space and positive-dimensional space. This sparks our interest in using mappings to solve problems; therefore, our next step is to explore the application of this mapping concept to other types of problems.

Author Contributions

Conceptualization, P.X. and L.X.; methodology, P.X. and L.X.; software, L.X.; validation, L.X.; formal analysis, L.X.; writing—original draft preparation, L.X.; writing—review and editing, S.Z. and P.X.; funding acquisition, P.X. and L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Fund of Liaoning Provincial Education Department (No. LZD202003), the Inner Mongolia Natural Science Foundation (No. 2022MS01015), and the Doctoral Research Launch Fund of Inner Mongolia Minzu University of China (No. BS650).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Coefficient Data of RUR of I U i

The appendix provides supplementary coefficient data related to the RUR of $I_{U_i}$ for Example 7.
Table A1. Coefficient data of the RUR of $I_{U_i}$.
i U i V i
11 47 7 , 18 7 , 247 35 , 94 7 , 90 7 , 47 7 , 180 7 , 741 35 , 33 7 , 549 35 , 449 35
22 5 7 , 3 7 , 17 35 , 10 7 , 3 7 , 5 7 , 6 7 , 51 35 , 24 7 , 18 35 , 74 35
33 1465 287 , 8328 1435 , 2817 1435 , 2930 287 , 2082 287 , 1465 287 , 4164 287 , 8451 1435 , 771 287 , 20313 1435 , 25779 1435
44 17803 1267 , 78873 1267 , 668153 6335 , 35606 1267 , 84045 1267 , 17803 1267 , 168090 1267 , 2004459 6335 , 5328 1267 , 263556 6335 , 664564 6335
55 18101 707 , 176046 707 , 474137 707 , 36202 707 158730 707 , 18101 707 , 317460 707 , 1422411 707 , 12735 707 , 236025 707 , 1097795 707
66 311803 7847 , 5505837 7847 , 99286353 39235 , 623606 7847 , 4265085 7847 , 311803 7847 , 8530170 7847 297859059 39235 , 317112 7847 , 45244854 39235 , 322134366 39235
77 121975 2161 , 3510228 2161 , 78544031 10805 , 243950 2161 , 2374566 2161 , 121975 2161 , 4749132 2161 , 235632093 19805 , 157983 2161 , 31815189 10805 , 318745567 10805
88 2004235 26476 , 437178573 132335 , 2323926617 132335 , 4008470 26467 , 52392717 26467 , 2004235 26467 , 104785434 26467 , 6971779851 132335 , 3117984 26467 , 838710792 132335 , 11202909416 132335
99 4208353 43127 , 264013530 43127 , 8101706553 215635 , 8416706 43127 , 141825450 43127 , 4208353 43127 283650900 43127 , 24305119659 215635 , 7600023 43127 , 2626234461 215635 , 45009187479 215635
1010 1623551 13307 , 140669721 13307 , 977007737 13307 , 3247102 13307 , 68430105 13307 , 1623551 13307 136860210 13307 , 2931023211 13307 , 3327720 13307 , 286915050 13307 , 6129265090 13307
1111 ( 14645353 98287 , 1697140176 98287 , 65708487953 491435 , 29290706 98287 , 754000530 98287 , 14645353 98287 , 1508001060 98287 , 197125463859 491435 , 33522357 98287 , 17620291359 491435 ,
458670329381 491435 )
1212 ( 25026955 14017 , 3778749465 140147 , 161236167417 700735 , 50053910 140147 , 1544358477 140147 , 25026955 140147 , 3088716954 140147 , 483708502251 700735 , 63194736 140147 , 39758937228 700735 ,
1238212607964 700735 )
1313 ( 40889305 194047 , 39354100518 970235 , 366712856417 970235 , 81778610 194047 , 2977566402 194047 , 49889305 194047 , 5955132804 194047 , 1100138569251 970235 , 112807539 194047 , 83665476297 970235 ,
3070470791011 970235 )
1414 ( 9189229 37441 , 2213935851 37441 , 111781754279 187205 , 18378458 37441 , 779441115 37441 , 9189229 37441 1558882230 37441 , 335345262837 187205 , 27483624 37441 , 23723262738 187205 ,
1012983591242 187205 )
1515 ( 19594301 69307 , 5816170596 69307 , 63244558737 69307 , 39188602 69307 , 1914575730 69307 , 19594301 69307 , 3829151460 69307 , 189733676211 69307 , 63122205 69307 , 12544755075 69307 ,
616511471385 69307 )
1616 ( 145102603 449827 , 52343105385 449827 , 3048120814553 2249135 , 290205206 449827 , 16177382925 449827 , 145102603 449827 , 32354765850 449827 , 9144362443659 2249135 , 500741952 449827 ,
567438722064 2249135 , 31796004934096 2249135 )
1717 ( 209718385 574567 , 90843000354 574567 , 5639931792617 2872835 , 419436770 574567 , 26457330042 574567 , 209718385 574567 , 52914660084 574567 , 16919795377851 2872835 , 771667791 574567
989048435733 2872835 , 62673940312559 2872835 )
1818 ( 296647675 723527 , 763375148913 3617635 , 10065012227217 3617635 , 593295350 723527 , 42038672637 723527 , 296647675 723527 , 84077345274 723527 , 30195036681651 3617635 , 1159119144 723527 ,
1668213239202 3617635 , 118686650884506 3617635 )

References

  1. Graves-Morris, P.R.; Beckermann, B. The compass (star) identity for vector-valued rational interpolants. Adv. Comput. Math. 1997, 7, 279–294. [Google Scholar] [CrossRef]
  2. Wu, B.; Li, Z.; Li, S. The implementation of a vector-valued rational approximate method in structural reanalysis problems. Comput. Methods Appl. Mech. Eng. 2003, 192, 1773–1784. [Google Scholar] [CrossRef]
  3. Tsekeridou, S.; Cheikh, F.A.; Gabbouj, M.; Pitas, I. Vector rational interpolation schemes for erroneous motion field estimation applied to MPEG-2 error concealment. IEEE Trans. Multimed. 2004, 6, 876–885. [Google Scholar] [CrossRef]
  4. Hu, G.; Qin, X.; Ji, X.; Wei, G.; Zhang, S. The construction of λμ-B-spline curves and its application to rotational surfaces. Appl. Math. Comput. 2015, 266, 194–211. [Google Scholar] [CrossRef]
  5. He, L.; Tan, J.; Huo, X.; Xie, C. A novel super-resolution image and video reconstruction approach based on Newton-Thiele’s rational kernel in sparse principal component analysis. Multimed. Tools Appl. 2017, 76, 9463–9483. [Google Scholar] [CrossRef]
  6. Graves-Morris, P.R. Vector valued rational interpolants I. Numer. Math. 1983, 42, 331–348. [Google Scholar] [CrossRef]
  7. Graves-Morris, P.R. Vector-valued rational interpolants II. IMA J. Numer. Anal. 1984, 4, 209–224. [Google Scholar] [CrossRef]
  8. Graves-Morris, P.R.; Jenkins, C.D. Vector-valued rational interpolants III. Constr. Approx. 1986, 2, 263–289. [Google Scholar] [CrossRef]
  9. Levrie, P.; Bultheel, A. A note on Thiele n-fractions. Numer. Algor. 1993, 4, 225–239. [Google Scholar] [CrossRef]
  10. Zhu, X.; Zhu, G. A recurrence algorithm for vector valued rational interpolation. J. Univ. Sci. Technol. China 2003, 33, 15–25. [Google Scholar] [CrossRef]
  11. Wang, R.; Zhu, G. Rational Function Approximation and Its Applications; Science Press: Beijing, China, 2004; pp. 117–146. [Google Scholar]
  12. Fitzpatrick, P. On the scalar rational interpolation problem. Math. Control Signal. Systems 1996, 9, 352–369. [Google Scholar] [CrossRef]
  13. Jones, W.B.; Thron, W.J. Continued Fractions: Analytic Theory and Applications; Addison-Wesley Pub. Co.: Glenview, IL, USA, 1980. [Google Scholar]
  14. Kailath, T.; Kung, S.Y.; Morf, M. Displacement ranks of matrices and linear equations. J. Math. Anal. Appl. 1979, 68, 395–407. [Google Scholar] [CrossRef]
  15. Löwner, K. Über monotone Matrixfunktionen. Math. Z. 1934, 38, 177–216. [Google Scholar] [CrossRef]
  16. Gohberg, I.; Kailath, T.; Olshevsky, V. Fast Gaussian elimination with partial pivoting for matrices with displacement structure. Math. Comput. 1995, 64, 1557–1576. [Google Scholar] [CrossRef]
  17. Tan, C.; Zhang, S. Computation of the rational representation for solutions of high-dimensional systems. Commun. Math. Res. 2010, 26, 119–130. [Google Scholar] [CrossRef]
  18. Rouillier, F. Solving zero-dimensional systems through the rational univariate representation. Appl. Algebra Eng. Commun. Comput. 1999, 9, 433–461. [Google Scholar] [CrossRef]
  19. Faugère, J.C. A new efficient algorithm for computing Gröbner bases (F4). J. Pure Appl. Algebra 1999, 139, 61–88. [Google Scholar] [CrossRef]
  20. Chiasson, J.N.; Tolbert, L.M.; Mckenzie, K.J.; Du, Z. A complete solution to the harmonic elimination problem. IEEE Trans. Power Electron. 2004, 19, 491–499. [Google Scholar] [CrossRef]
  21. Shang, B.; Zhang, S.; Tan, C.; Xia, P. A simplified rational representation for positive-dimensional polynomial systems and SHEPWM equations solving. J. Syst. Sci. Complex. 2017, 30, 1470–1482. [Google Scholar] [CrossRef]
  22. Djellab, N.; Boureghda, A. A moving boundary model for oxygen diffusion in a sick cell. Comput. Methods Biomech. Biomed. Eng. 2022, 25, 1402–1408. [Google Scholar] [CrossRef]
  23. Abbaszadeh, M.; Dehghan, M. A meshless numerical procedure for solving fractional reaction subdiffusion model via a new combination of alternating direction implicit (ADI) approach and interpolating element free Galerkin (EFG) method. Comput. Math. Appl. 2015, 70, 2493–2512. [Google Scholar] [CrossRef]
  24. Boureghda, A. Solution of an ice melting problem using a fixed domain method with a moving boundary. Bull. Math. Soc. Sci. Math. Roumanie 2019, 62, 341–353. [Google Scholar]
Table 6. Comparison of points and runtime between the two algorithms.

| Function | Number of Points (Algorithm 3) | Number of Points (Thiele) | Runtime (s, Algorithm 3) | Runtime (s, Thiele) |
|---|---|---|---|---|
| $V_1(x)$ | 11 | 13 | 0.015 | 0.063 |
| $V_2(x)$ | 12 | 17 | 0.016 | 0.219 |
| $V_3(x)$ | 15 | 21 | 0.016 | 0.797 |
| $V_4(x)$ | 18 | 29 | 0.016 | 5.453 |
| $V_5(x)$ | 21 | 37 | 0.032 | 28.125 |
| $V_6(x)$ | 24 | 45 | 0.062 | 115.907 |
