Article

Reconstruction of Piecewise Smooth Multivariate Functions from Fourier Data

by David Levin
School of Mathematical Sciences, Tel-Aviv University, Tel Aviv 6997801, Israel
Axioms 2020, 9(3), 88; https://doi.org/10.3390/axioms9030088
Submission received: 24 June 2020 / Revised: 21 July 2020 / Accepted: 22 July 2020 / Published: 24 July 2020
(This article belongs to the Special Issue Nonlinear Analysis and Optimization with Applications)

Abstract

In some applications, one is interested in reconstructing a function f from its Fourier series coefficients. The problem is that the Fourier series converges slowly if the function is non-periodic or non-smooth. In this paper, we suggest a method for deriving a high order approximation to f using a Padé-like method. Namely, we do this by fitting some Fourier coefficients of the approximant to the given Fourier coefficients of f. Given the Fourier series coefficients of a function on a rectangular domain in $\mathbb{R}^d$, assuming the function is piecewise smooth, we approximate the function by piecewise high order spline functions. First, the singularity structure of the function is identified; for example, in the 2D case, we find a high accuracy approximation to the curves separating the smooth segments of f. Second, we simultaneously find approximations to all the different segments of f. We start by developing and demonstrating a high accuracy algorithm for the 1D case, and we use this algorithm to step up to the multidimensional case.

1. Introduction

Fourier series expansion is a useful tool for representing and approximating functions, with applications in many areas of applied mathematics. The quality of the approximation depends on the smoothness of the approximated function and on whether or not it is periodic. For functions that are not periodic, the convergence rate is slow near the boundaries and the approximation by partial sums exhibits the Gibbs phenomenon. Several approaches have been used to improve the convergence rate, mostly for the one-dimensional case. One approach is to filter out the oscillations, as discussed in several papers [1,2]. Another useful approach is to transform the Fourier series into an expansion in a different basis. For the univariate case this approach has proved very efficient, as shown in [1] using Gegenbauer polynomials with suitably chosen parameters. Further improvement of this approach is presented in [3] using Freud polynomials, achieving very good results for univariate functions with singularities.
An algebraic approach for reconstructing a piecewise smooth univariate function from its first N Fourier coefficients has been realized by Eckhoff in a series of papers [4,5,6]. There, the “jumps” are determined by a corresponding system of linear equations. A full analysis of this approach is presented by Batenkov [7]. Nersessian and Poghosyan [8] have used a rational Padé-type approximation strategy for approximating univariate non-periodic smooth functions. For multiple Fourier series of smooth non-periodic functions, a convergence acceleration approach was suggested by Levin and Sidi [9]. More challenging is the case of multivariate functions with discontinuities, i.e., functions that are piecewise smooth. Here again, the convergence rate is slow, and near the discontinuities, the approximation exhibits the Gibbs phenomenon. In this paper, we present a Padé-like approach consisting of finding a piecewise-defined spline whose Fourier coefficients match the given Fourier coefficients.
The main contribution of this paper is demonstrating that this approach can be successfully applied to the multivariate case. Namely, we present a strategy for approximating both non-periodic and non-smooth multivariate functions. We derive the numerical procedures involved and provide some interesting numerical results. We start by developing and demonstrating a high accuracy algorithm for the 1D case, and use this algorithm to step up to the multidimensional case.

2. The 1D Case

In this section, we present the main tools for approximating a function from its Fourier series coefficients. We define the basis functions, describe the fitting strategy, and develop the computational algorithm. After dealing with the smooth case, we move on to approximating a piecewise smooth function with a jump singularity.

2.1. Reconstructing Smooth Non-Periodic Functions

Let $f \in C^m[0,1]$, and assume we know the Fourier series expansion of f,
$f(x) = \sum_{n \in \mathbb{Z}} \hat{f}_n\, e^{2\pi i n x}.$  (1)
The series converges pointwise for any $x \in [0,1]$; however, if f is not periodic, the convergence may be slow, and if $f(1) \neq f(0)$ the convergence is not uniform and the Gibbs phenomenon occurs near 0 and near 1. As discussed in [9,10], one can apply convergence acceleration techniques to improve the convergence rate of the series. Another convergence acceleration approach was suggested by Gottlieb and Shu [1] using Gegenbauer polynomials. Yet, in both approaches, the convergence rate is not much improved near 0 and near 1. We suggest an approach in the spirit of Padé approximation. A Padé approximant is a rational function whose power series agrees as much as possible with the given power series of f. Here we look for approximations to f whose Fourier coefficients agree with a subset of the given Fourier coefficients of f. The approximation space can be any favorable linear approximation space, such as polynomials or trigonometric functions.
We choose to build the approximation using $k$th order spline functions, represented in the B-spline basis:
$S_d^{[k]}(x) = \sum_{j=1}^{N_d} a_j\, B_d^{[k]}(x - jd).$  (2)
$B_d^{[k]}(x)$ is the B-spline of order k with equidistant knots $\{-kd, \ldots, -2d, -d, 0\}$, and $N_d = 1/d + k - 1$ is the number of B-splines whose shifts do not vanish in $[0,1]$. The advantage of using spline functions is threefold:
  • The locality of the B-spline basis functions.
  • A closed-form formula for their Fourier coefficients (one standard form is recalled below).
  • Their approximation power, i.e., if $f \in C^k[0,1]$, there exists a spline $S_d^{[k]}$ such that $\|f - S_d^{[k]}\|_{\infty,[0,1]} \le C\, d^k$.
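For reference, here is the standard closed-form expression, stated for the cardinal-B-spline normalization with $\int B = d$ and for basis functions whose support $[x_0, x_0+kd]$ lies inside $[0,1]$; the paper's own normalization of $B_d^{[k]}$ may differ by a constant factor.

```latex
% Fourier coefficient of a uniform B-spline of order k with knot spacing d,
% supported on [x_0, x_0 + kd] \subset [0,1]:
\widehat{B}_{n} \;=\; \int_0^1 B(x)\, e^{-2\pi i n x}\, dx
\;=\; d\, e^{-2\pi i n x_0}
\left(\frac{1 - e^{-2\pi i n d}}{2\pi i n d}\right)^{\!k},
\qquad n \neq 0, \qquad \widehat{B}_{0} = d .
```

Basis functions whose support is cut by the endpoints of $[0,1]$ require a separate, still elementary, piecewise-polynomial integration.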
The B-spline basis functions used in the 1D case are shown in Figure 1. We denote by $S \equiv S_d^{[k]}|_{[0,1]}$ the restriction of $S_d^{[k]}$ to the interval $[0,1]$. We find the coefficients $\{a_i\}_{i=1}^{N_d}$ by least-squares fitting, matching the first $M+1$ Fourier coefficients of S to the corresponding $M+1$ Fourier coefficients of f. That is,
$\{a_i\}_{i=1}^{N_d} = \arg\min \sum_{n=0}^{M} |\hat{f}_n - \hat{S}_n|^2.$  (3)
Notice that it is enough to consider the Fourier coefficients with non-negative indices, since for a real-valued f the coefficients with negative indices are the complex conjugates of those with positive indices.
We denote by $B_i \equiv B_d^{[k]}(\cdot - id)|_{[0,1]}$ the restriction of $B_d^{[k]}(\cdot - id)$ to the interval $[0,1]$, and by $\{\hat{B}_{i,n}\}$ its Fourier coefficients. The normal equations for the least squares problem (3) induce the linear system $A\mathbf{a} = \mathbf{b}$ for $\mathbf{a} = \{a_i\}_{i=1}^{N_d}$, where
$A_{i,j} = \sum_{n=0}^{M} \left[ \mathrm{Re}(\hat{B}_{i,n})\,\mathrm{Re}(\hat{B}_{j,n}) + \mathrm{Im}(\hat{B}_{i,n})\,\mathrm{Im}(\hat{B}_{j,n}) \right], \qquad 1 \le i, j \le N_d,$  (4)
and
$b_i = \sum_{n=0}^{M} \left[ \mathrm{Re}(\hat{B}_{i,n})\,\mathrm{Re}(\hat{f}_n) + \mathrm{Im}(\hat{B}_{i,n})\,\mathrm{Im}(\hat{f}_n) \right], \qquad 1 \le i \le N_d.$  (5)
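A minimal sketch of this fit, assuming a helper that supplies the matrix G with entries $G[n,i] = \hat{B}_{i,n}$ (for instance from the closed-form expression above, with direct integration for the basis functions cut by the endpoints); the normal equations (4) and (5) then take the compact form:

```python
import numpy as np

def fit_spline_coeffs(f_hat, G):
    """Solve the normal equations (4)-(5) for the B-spline coefficients.

    f_hat : complex array of length M+1, the given coefficients f_hat_0..f_hat_M
    G     : complex array of shape (M+1, N_d), G[n, i] = B_hat_{i,n}
    """
    # A_{ij} = sum_n Re(B_i,n)Re(B_j,n) + Im(B_i,n)Im(B_j,n) = Re(G^H G)
    A = np.real(G.conj().T @ G)
    # b_i = sum_n Re(B_i,n)Re(f_n) + Im(B_i,n)Im(f_n) = Re(G^H f_hat)
    b = np.real(G.conj().T @ f_hat)
    # A is typically very ill-conditioned; use the pseudo-inverse (cf. Remark 1)
    return np.linalg.pinv(A) @ b
```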

Numerical Example—The Smooth 1D Case

We consider the test function $f(x) = x\exp(x) + \sin(8x)$, assuming only its Fourier coefficients are given. We have used only the 20 Fourier coefficients $\{\hat{f}_n\}_{n=0}^{19}$, and computed an approximation using 12th degree splines with equidistant knots' distance d = 0.1. For this case, the matrix A is of size $19 \times 19$, and $\mathrm{cond}(A) = 5.75 \times 10^{20}$. We have employed an iterative refinement algorithm, described below, to obtain a high precision solution. The results are shown in the following two figures. In Figure 2 we see the test function on the left and the approximation error on the right. Figure 3 presents the graph of $\log_{10}|\hat{f}_n|$ in blue and the graph of $\log_{10}|\hat{f}_n - \hat{S}_n|$ in red, showing an eight orders of magnitude reduction in the Fourier coefficients. Notice the matching of the first Fourier coefficients, reflected in the beginning of the red graph.
Remark 1.
The powerful iterative refinement method described in [11,12] is as follows:
For solving a system $Ax = b$, we use some solver, e.g., the Matlab pinv function. We obtain the solution $x^{(0)} = \mathrm{pinv}(A)\, b$. Next we compute the residual $r^{(0)} = b - A x^{(0)}$. In case $\mathrm{cond}(A)$ is very large, the residual will be large. Now we solve the system again with $r^{(0)}$ as the right-hand side, and use the solution to correct $x^{(0)}$, to obtain
$x^{(1)} = x^{(0)} + \mathrm{pinv}(A)\, r^{(0)}.$
We repeat this correction step a few times, i.e., $r^{(k)} = b - A x^{(k)}$, and
$x^{(k+1)} = x^{(k)} + \mathrm{pinv}(A)\, r^{(k)},$
until the resulting residual $r^{(k)}$ is small enough.
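A sketch of this refinement loop, with NumPy's `pinv` standing in for Matlab's:

```python
import numpy as np

def refine(A, b, n_iter=5):
    """Iterative refinement, as in Remark 1: repeatedly correct x by the
    pseudo-inverse applied to the current residual."""
    A_pinv = np.linalg.pinv(A)      # computed once and reused
    x = A_pinv @ b                  # x^(0)
    for _ in range(n_iter):
        r = b - A @ x               # r^(k) = b - A x^(k); large when cond(A) is huge
        x = x + A_pinv @ r          # x^(k+1) = x^(k) + pinv(A) r^(k)
        if np.linalg.norm(r) < 1e-14 * np.linalg.norm(b):
            break
    return x
```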

2.2. Reconstructing Non-Smooth Univariate Functions

Let f be a piecewise smooth function on $[0,1]$, composed of two pieces $f_1 \in C^m[0, s^*]$ and $f_2 \in C^m(s^*, 1]$, and assume that $f_2$ can be continuously extended to $[s^*, 1]$:
$f(x) = \begin{cases} f_1(x), & x \le s^*, \\ f_2(x), & x > s^*. \end{cases}$  (6)
Here again, we assume that all we know about f is its Fourier series expansion. In particular, we do not know the position $s^* \in [0,1]$ of the singularity of f. As in the case of a non-periodic function, the existence of a singularity in $[0,1]$ significantly influences the Fourier series coefficients and implies their slow decay. As we demonstrate below, good matching of the Fourier coefficients requires a good approximation of the singularity location. The approach we suggest here involves finding approximations to $f_1$ and $f_2$ simultaneously with a high precision identification of $s^*$.
Let s be an approximation of the singularity location $s^*$, and let us follow the algorithm suggested above for the smooth case. The difference here is that now we look for two separate spline approximations:
$S_1 \equiv S_d^{[k]}\big|_{[0,s]}(x) = \sum_{i=1}^{N_d} a_{1i}\, B_d^{[k]}(x - id)\big|_{[0,s]} \approx f_1,$  (7)
and
$S_2 \equiv S_d^{[k]}\big|_{(s,1]}(x) = \sum_{i=1}^{N_d} a_{2i}\, B_d^{[k]}(x - id)\big|_{(s,1]} \approx f_2.$  (8)
The combination S of $S_1$ and $S_2$ constitutes the approximation to f. Here again we aim at matching the first $M+1$ Fourier coefficients of f and of S. Here S depends on the $N_d$ coefficients $\{a_{1i}\}$ of $S_1$, the $N_d$ coefficients $\{a_{2i}\}$ of $S_2$ and on s. Therefore, the minimization process solves for all these unknowns:
$\left(\{a_{1i}\}_{i=1}^{N_d}, \{a_{2i}\}_{i=1}^{N_d}, s\right) = \arg\min \sum_{n=0}^{M} |\hat{f}_n - \hat{S}_n|^2.$  (9)
The minimization is non-linear with respect to s, and linear with respect to the other unknowns. Therefore, the minimization problem is actually a one-parameter non-linear minimization problem, in the parameter s. Using the approximation power of $k$th order splines ($k \le m$), and considering the value of the objective cost function for $s = s^*$, we can deduce that the minimal value of $\sum_{n=0}^{M} |\hat{f}_n - \hat{S}_n|^2$ is $O(d^{2k})$. We also observe that an $\epsilon$ deviation from $s^*$ implies a bounded deviation of the minimizing Fourier coefficients,
$\max_{n \in \mathbb{Z}} |\hat{f}_n - \hat{S}_n| \le c_1 \epsilon + c_2 d^k.$  (10)
As shown below, these observations can be used for finding a good approximation to s * .
We denote by $B_{1i} \equiv B_d^{[k]}(\cdot - id)|_{[0,s]}$ the restriction of $B_d^{[k]}(\cdot - id)$ to the interval $[0,s]$, and by $B_{2i} \equiv B_d^{[k]}(\cdot - id)|_{(s,1]}$ the restriction of $B_d^{[k]}(\cdot - id)$ to the interval $(s,1]$. We concatenate these two sequences of basis functions, $\{B_{1i}\}$ and $\{B_{2i}\}$, into one sequence $\{B_i\}_{i=1}^{2N_d}$, and denote their Fourier coefficients by $\{\hat{B}_{i,n}\}_{n \in \mathbb{Z}}$. For a given s, the normal equations for the least squares problem (9) induce the linear system $A\mathbf{a} = \mathbf{b}$ for the splines' coefficients $\mathbf{a} = (\{a_{1i}\}_{i=1}^{N_d}, \{a_{2i}\}_{i=1}^{N_d})$, where:
$A_{i,j} = \sum_{n=0}^{M} \left[ \mathrm{Re}(\hat{B}_{i,n})\,\mathrm{Re}(\hat{B}_{j,n}) + \mathrm{Im}(\hat{B}_{i,n})\,\mathrm{Im}(\hat{B}_{j,n}) \right], \qquad 1 \le i, j \le 2N_d,$  (11)
and
$b_i = \sum_{n=0}^{M} \left[ \mathrm{Re}(\hat{B}_{i,n})\,\mathrm{Re}(\hat{f}_n) + \mathrm{Im}(\hat{B}_{i,n})\,\mathrm{Im}(\hat{f}_n) \right], \qquad 1 \le i \le 2N_d.$  (12)
Remark 2.
Due to the locality of the B-splines, some of the basis functions $\{B_{1i}\}$ and $\{B_{2i}\}$ may be identically zero. It thus seems better to use only the non-zero basis functions. However, from our experience, since we use the generalized inverse approach for solving the system of equations, using all the basis functions gives the same solution.
The generalized inverse approach computes the minimal-norm least-squares solution to a system of linear equations that lacks a unique solution. It is also called the Moore–Penrose inverse, and is computed by the Matlab pinv function.
The above construction can be carried over to the case of several singular points.
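To make the dependence on s concrete, here is a sketch of the inner linear solve for a fixed break point s. The helper `truncated_bspline_fourier(s, M, d, k)` is hypothetical: it should return the $(M+1) \times 2N_d$ matrix of Fourier coefficients of the truncated basis functions $B_i$, computed, e.g., by numerical quadrature on $[0,s]$ and $(s,1]$.

```python
import numpy as np

def solve_for_fixed_s(s, f_hat, truncated_bspline_fourier, M, d, k):
    """Fit the two-piece spline for a given break point s (Equations (11)-(12))
    and return the coefficients together with the Fourier mismatch."""
    G = truncated_bspline_fourier(s, M, d, k)   # shape (M+1, 2*N_d), complex
    A = np.real(G.conj().T @ G)
    b = np.real(G.conj().T @ f_hat)
    a = np.linalg.pinv(A) @ b                   # generalized inverse, cf. Remark 2
    mismatch = np.max(np.abs(f_hat - G @ a))    # max_n |f_hat_n - S_hat_n| over n=0..M
    return a, mismatch
```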

2.2.1. Finding $s^*$

We present the strategy for finding $s^*$ together with a specific numerical example. We consider a test function on $[0,1]$ with a jump discontinuity at $s^* = 0.5$:
$f(x) = \begin{cases} f_1(x) = \sin(5x), & x \le s^*, \\ f_2(x) = \dfrac{1}{(x-0.5)^2 + 0.5}, & x > s^*. \end{cases}$  (13)
As expected, the Fourier series of f is slowly convergent, and it exhibits the Gibbs phenomenon near the ends of $[0,1]$ and near $s^*$. In Figure 4, on the left, we present the sum of the first 200 terms of the Fourier series, computed at 20,000 points in $[0,1]$. This sum is not acceptable as an approximation to f, and yet we can use it to obtain a good initial approximation $s_0$ to $s^*$. On the right graph, we plot the first differences of the values in the left graph. The maximal difference is achieved at a distance of order $10^{-4}$ from $s^*$.
Having a good approximation $s_0 \approx s^*$ is not enough for achieving a good approximation to f. However, $s_0$ can be used as a starting point for an iterative method leading to a high precision approximation to $s^*$. To support this assertion we present the graph in Figure 5, depicting the maximum norm of the difference between 1000 of the given Fourier coefficients and the corresponding Fourier coefficients of the approximation S, as a function of s, near $s^* = 0.5$. This function is almost linear on each side of $s^*$, and simple quasi-Newton iterations converge very fast to $s^*$. After obtaining a high accuracy approximation to $s^*$, we use it for deriving the piecewise spline approximation to f.
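A sketch of the two-stage search for $s^*$, assuming f is real-valued so that the symmetric partial sum can be formed from the non-negative coefficients alone. The bounded scalar minimization is used here as a simple stand-in for the quasi-Newton iterations described above; `objective(s)` could be, e.g., the mismatch returned by `solve_for_fixed_s` from the previous sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def initial_break_estimate(f_hat, n_terms=200, n_points=20000):
    """Initial guess s_0: location of the largest first difference of a
    truncated Fourier sum (away from the Gibbs spikes at 0 and 1)."""
    x = np.linspace(0.0, 1.0, n_points, endpoint=False)
    n = np.arange(1, n_terms)
    partial = np.real(f_hat[0]) + 2.0 * np.real(
        f_hat[1:n_terms] @ np.exp(2j * np.pi * np.outer(n, x)))
    jumps = np.abs(np.diff(partial))
    margin = n_points // 100            # ignore the boundary Gibbs spikes
    i = margin + int(np.argmax(jumps[margin:-margin]))
    return 0.5 * (x[i] + x[i + 1])

def refine_break(s0, objective, radius=5e-3):
    """Minimize the (almost piecewise-linear) Fourier mismatch near s0."""
    res = minimize_scalar(objective, bounds=(s0 - radius, s0 + radius),
                          method="bounded", options={"xatol": 1e-12})
    return res.x
```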
In the following, we present the numerical results obtained for the test function defined in (13). We have used only 20 Fourier coefficients of f, and the two approximating functions $S_1$ and $S_2$ are splines of order eight, with knots' distance d = 0.1. Figure 6 depicts the approximation error, showing that $\|f - S\|_\infty = 5.3 \times 10^{-8}$, and that the Gibbs phenomenon is completely removed. Figure 7 shows $\log_{10}$ of the absolute values of the given Fourier coefficients of f (in blue), and the corresponding values for the Fourier coefficients of f − S (in red). The graph shows a reduction of ∼7 orders of magnitude. These results clearly demonstrate the high effectiveness of the proposed approach.

2.2.2. The 1D Approximation Procedure

Let us sum up the suggested approximation procedure:
(1)
Choose the approximation space $\Pi$ for approximating $f_1$ and $f_2$.
(2)
Define the number of Fourier coefficients to be used for building the approximation such that
$M + 1 \ge 2\dim(\Pi).$
(3)
Find a first approximation to $s^*$: compute a partial Fourier sum and locate the maximal first-order difference.
(4)
Calculate the first $M+1$ Fourier coefficients of the basis functions of $\Pi$, truncated at the current approximation of $s^*$.
(5)
Use the above Fourier coefficients to compute the approximations to $f_1$ and $f_2$ by solving the system of linear equations defined by (11) and (12).
(6)
Update the approximation to $s^*$ by performing quasi-Newton iterations to reduce the objective function in (9).
(7)
Go back to step (4) to update the approximation.

3. The 2D Case—Non-Periodic and Non-Smooth

3.1. The Smooth 2D Case

Let $f \in C^m([0,1]^2)$, and assume we know its Fourier series expansion
$f(x,y) = \sum_{m \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} \hat{f}_{mn}\, e^{2\pi i m x}\, e^{2\pi i n y}.$
Such series are obtained when solving PDEs using spectral methods. However, if the function is not periodic, or, as in the case of hyperbolic equations, the function has a jump discontinuity along some curve in $[0,1]^2$, the convergence of the Fourier series is slow. Furthermore, the approximation of f by its partial sums suffers from the Gibbs phenomenon near the boundaries and near the singularity curve.
We deal with the case of smooth non-periodic 2D functions in the same manner as we did for the univariate case. We look for a bivariate spline function S whose Fourier coefficients match the Fourier coefficients of f. As in the univariate case, it is enough to match the coefficients of low frequency terms in the Fourier series. The technical difference in the 2D case is that we look for a tensor product spline approximation, using tensor product kth order B-spline basis functions.
$S_d^{[k]}(x,y) = \sum_{i=1}^{N_d}\sum_{j=1}^{N_d} a_{ij}\, B_d^{[k]}(x - id)\, B_d^{[k]}(y - jd).$
The system of equations for the B-spline coefficients is the same as the system defined by (4) and (5) in the univariate case, only here we reshape the unknowns as a vector of $N_d^2$ unknowns.
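The tensor-product structure also carries over to the linear algebra: the Fourier coefficients of the 2D basis functions factor as $\widehat{B_iB_j}_{mn} = \hat{B}_{i,m}\hat{B}_{j,n}$, so the 2D design matrix is a Kronecker product of the 1D one. In the sketch below, a complex least-squares solve (via `numpy.linalg.lstsq`) is used as a simplification of the real normal equations (4) and (5), and `G1` is, as before, an assumed matrix of 1D B-spline Fourier coefficients:

```python
import numpy as np

def fit_tensor_spline(f_hat_2d, G1):
    """Fit the tensor-product spline coefficients a_{ij} from 2D Fourier data.

    f_hat_2d : complex array, shape (F, F), entry (m, n) = f_hat_{mn}
    G1       : complex array, shape (F, N_d), entry (m, i) = B_hat_{i,m}
    """
    F, Nd = G1.shape
    # S_hat_{mn} = sum_{ij} a_{ij} B_hat_{i,m} B_hat_{j,n}
    # => the design matrix is the Kronecker product of G1 with itself
    G2 = np.kron(G1, G1)                 # shape (F*F, Nd*Nd)
    rhs = f_hat_2d.reshape(-1)           # row-major: frequency index m*F + n
    a_vec, *_ = np.linalg.lstsq(G2, rhs, rcond=None)
    # for real-valued f over a symmetric frequency range the imaginary part is negligible
    return a_vec.real.reshape(Nd, Nd)    # a[i, j]
```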

Numerical Example—The Smooth 2D Case

We consider the test function
$f(x,y) = \dfrac{10}{1 + 10\left(x^2 + (y-1)^2\right)} + \sin(10(x - y)),$
assuming only its Fourier coefficients are given. We have used only 160 Fourier coefficients, and constructed an approximation using 10th degree tensor product splines with equidistant knots' distance d = 0.1 in each direction. For this case, the matrix A is of size $361 \times 361$, and $\mathrm{cond}(A) = 6.2 \times 10^{30}$. Again, we have employed the iterative refinement algorithm to obtain a high precision solution (relative error $10^{-15}$). Computation time ∼18 s.
In Figure 8 we plot the test function on $[0,1]^2$. Note that it has high derivatives near $(0, 1)$.
The approximation error $f - S_{0.1}^{[10]}$ is shown in Figure 9. To demonstrate the convergence acceleration of the Fourier series achieved by subtracting the approximation from f, we present in Figure 10 $\log_{10}$ of the absolute values of the Fourier coefficients of f (in green) and of the Fourier coefficients of $f - S_{0.1}^{[10]}$ (in blue), for frequencies $0 \le m, n \le 200$. The magnitude of the Fourier coefficients is reduced by a factor of $10^5$, and even more so for the low frequencies, due to the matching strategy used to derive the spline approximation.

3.2. The Non-Smooth 2D Case

Let $\Omega_1, \Omega_2 \subset [0,1]^2$ be open, simply connected domains with the properties
$\Omega_1 \cap \Omega_2 = \emptyset, \qquad \bar{\Omega}_1 \cup \bar{\Omega}_2 = [0,1]^2.$
Let $\Gamma^*$ be the curve separating the two domains,
$\Gamma^* = \bar{\Omega}_1 \cap \bar{\Omega}_2,$
and assume $\Gamma^*$ is a $C^m$-smooth curve.
Let f be a piecewise smooth function on $[0,1]^2$, composed of two pieces $f_1 \in C^m(\Omega_1)$ and $f_2 \in C^m(\Omega_2)$, and assume that each $f_j$ can be continuously extended to a function in $C^m(\bar{\Omega}_j)$, $j = 1, 2$. Here again, we assume that all we know about f is its Fourier expansion. In particular, we do not know the position of the dividing curve $\Gamma^*$ separating $\Omega_1$ and $\Omega_2$. As in the case of a non-periodic function, the existence of a singularity curve in $[0,1]^2$ significantly influences the Fourier series coefficients and implies their slow decay. In case of a discontinuity of f across $\Gamma^*$, partial sums of the Fourier series exhibit the Gibbs phenomenon near $\Gamma^*$. As demonstrated below, good matching of the Fourier coefficients requires a good approximation of the singularity location. As in the univariate non-smooth case, the computation algorithm involves finding approximations to $f_1$ and $f_2$ simultaneously with a high precision identification of $\Gamma^*$.
Evidently, finding a high precision approximation of the singularity curve $\Gamma^*$ is more involved than finding a high precision approximation to the singularity point $s^*$ in the univariate case. Let $D_{\Gamma^*}(x,y)$ be the signed-distance function corresponding to the curve $\Gamma^*$:
$D_{\Gamma^*}(x,y) = \begin{cases} \phantom{-}\mathrm{dist}((x,y), \Gamma^*), & (x,y) \in \Omega_1, \\ -\mathrm{dist}((x,y), \Gamma^*), & (x,y) \in \Omega_2. \end{cases}$
In looking for an approximation to $\Gamma^*$, we look for an approximation to $D_{\Gamma^*}$. Here again we use tensor-product spline approximants, from the same family of spline spaces described in the previous section. Since the curve is $C^m$, it can be shown that one can construct a spline function $\tilde{D}$ of order $k \le m$, with knots' distance h, which approximates $D_{\Gamma^*}$ near $\Gamma^*$ so that the Hausdorff distance between the zero level set of $\tilde{D}$ and $\Gamma^*$ is $O(h^k)$.
Let $D_{\bar{b}}$ be a spline approximation to $D_{\Gamma^*}$, with spline coefficients $\bar{b} = \{b_{ij}\}_{i,j=1}^{N_h}$:
$D_{\bar{b}}(x,y) = \sum_{i=1}^{N_h}\sum_{j=1}^{N_h} b_{ij}\, B_h^{[k]}(x - ih)\, B_h^{[k]}(y - jh).$  (18)
For a given $D_{\bar{b}}$ we define the approximation to f similarly to the univariate construction in Equations (7)–(9). We look here for an approximation S to f which is a combination of two bivariate spline components:
$S(x,y) = \sum_{i=1}^{N_d}\sum_{j=1}^{N_d} a_{1ij}\, B_d^{[k]}(x - id)\, B_d^{[k]}(y - jd), \qquad D_{\bar{b}}(x,y) \ge 0,$  (19)
$S(x,y) = \sum_{i=1}^{N_d}\sum_{j=1}^{N_d} a_{2ij}\, B_d^{[k]}(x - id)\, B_d^{[k]}(y - jd), \qquad D_{\bar{b}}(x,y) < 0,$  (20)
such that $(2M+1)^2$ Fourier coefficients of f and S are matched in the least-squares sense:
$\left(\{a_{1ij}\}_{i,j=1}^{N_d}, \{a_{2ij}\}_{i,j=1}^{N_d}, \{b_{ij}\}_{i,j=1}^{N_h}\right) = \arg\min \sum_{m,n=-M}^{M} |\hat{f}_{mn} - \hat{S}_{mn}|^2.$  (21)
We denote by $B_{1ij}(x,y)$ the restriction of $B_d^{[k]}(x - id)\, B_d^{[k]}(y - jd)$ to the domain defined by $D_{\bar{b}}(x,y) \ge 0$, and by $B_{2ij}(x,y)$ the restriction of $B_d^{[k]}(x - id)\, B_d^{[k]}(y - jd)$ to the domain defined by $D_{\bar{b}}(x,y) < 0$. We concatenate these two families of basis functions, $\{B_{1ij}\}$ and $\{B_{2ij}\}$, into one sequence and, for each frequency pair $(m,n)$, arrange their Fourier coefficients into a vector of length $2N_d^2$, $\{\hat{B}_{i,mn}\}_{i=1}^{2N_d^2}$. For a given $D_{\bar{b}}$, the normal equations for the least squares problem (21) induce the linear system $A\mathbf{a} = \mathbf{b}$ for the splines' coefficients $\mathbf{a} = (\{a_{1ij}\}_{i,j=1}^{N_d}, \{a_{2ij}\}_{i,j=1}^{N_d})$, where:
$A_{i,j} = \sum_{m,n=-M}^{M} \left[ \mathrm{Re}(\hat{B}_{i,mn})\,\mathrm{Re}(\hat{B}_{j,mn}) + \mathrm{Im}(\hat{B}_{i,mn})\,\mathrm{Im}(\hat{B}_{j,mn}) \right], \qquad 1 \le i, j \le 2N_d^2,$  (22)
and
$b_i = \sum_{m,n=-M}^{M} \left[ \mathrm{Re}(\hat{B}_{i,mn})\,\mathrm{Re}(\hat{f}_{mn}) + \mathrm{Im}(\hat{B}_{i,mn})\,\mathrm{Im}(\hat{f}_{mn}) \right], \qquad 1 \le i \le 2N_d^2.$  (23)
For a given choice of $\bar{b} = \{b_{ij}\}$, the coefficients $\{a_{1ij}\}_{i,j=1}^{N_d}, \{a_{2ij}\}_{i,j=1}^{N_d}$ are obtained by solving a linear system of equations and properly rearranging the solution. However, finding the optimal $\bar{b}$ is a non-linear problem that requires an iterative process and is much more expensive.
Remark 3.
Representing the singularity curve of the approximation S as the zero level set of the bivariate spline function $D_{\bar{b}}$ is what gives smooth control over the approximation. As a result, the objective function in (21) varies smoothly with respect to the spline coefficients $\{b_{ij}\}$.
Remark 4.
In principle, the above framework is applicable to cases where f is composed of k functions defined on k disjoint subdomains of $[0,1]^2$. The implementation, however, is more involved. The main challenge is to find a good first approximation to the curves separating the subdomains. In this context, for our case of two subdomains, we further assume for simplicity that the separating curve $\Gamma^*$ is bijective, i.e., it meets each horizontal and each vertical line at most once.
Here again we choose to demonstrate the whole approximation procedure alongside a specific numerical example.

3.2.1. The Approximation Procedure—A Numerical Example

Consider a piecewise smooth function on $[0,1]^2$ with a jump singularity across the curve $\Gamma^*$, which is the quarter circle defined by $x^2 + y^2 = 0.5$. The test function is shown in Figure 11 and is defined as
$f(x,y) = \begin{cases} (x^2 + y^2 - 0.5)\,\sin(10(x+y)), & x^2 + y^2 \ge 0.5, \\ (x^2 + y^2 - 0.5)\,\sin(10(x+y)) + 2x, & x^2 + y^2 < 0.5. \end{cases}$
In the univariate case, in Section 2.2.1, we used the Gibbs phenomenon in order to find an initial approximation $s_0$ to the singularity location $s^*$. The same idea, with some modifications for the 2D case, is applied here. The truncated Fourier sum
$f_{50}(x,y) = \sum_{m,n=-50}^{50} \hat{f}_{mn}\, e^{2\pi i m x}\, e^{2\pi i n y}$
gives an approximation to f, but this approximation suffers from the Gibbs phenomenon near the boundaries of the domain and near the singularity curve $\Gamma^*$. We evaluated $f_{50}$ on a $400 \times 400$ mesh on $[0,1]^2$, and enhanced the Gibbs effect by applying first order differences along the x-direction. The results are depicted in Figure 12. The locations of large x-direction differences and of large y-direction differences within $[0,1]^2$ indicate the location of $\Gamma^*$.
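A sketch of this line-by-line edge detection; the grid indexing convention, the margins that skip the boundary Gibbs spikes, and the threshold that discards lines missing the curve are illustrative choices, not values taken from the paper.

```python
import numpy as np

def detect_edge_points(fN, n_lines=50, margin=5, thresh=0.1):
    """fN[i, j] ~ partial Fourier sum at (x_i, y_j) on a uniform grid of [0,1]^2.
    Returns candidate points near the singularity curve (the set P0)."""
    nx, ny = fN.shape
    x = np.linspace(0.0, 1.0, nx)
    y = np.linspace(0.0, 1.0, ny)
    pts = []
    # along lines of constant y: locate the largest x-direction difference
    for j in np.linspace(margin, ny - 1 - margin, n_lines, dtype=int):
        d = np.abs(np.diff(fN[margin:nx - margin, j]))
        i = margin + int(np.argmax(d))
        if d.max() > thresh:                       # skip lines that miss the curve
            pts.append((0.5 * (x[i] + x[i + 1]), y[j]))
    # along lines of constant x: locate the largest y-direction difference
    for i in np.linspace(margin, nx - 1 - margin, n_lines, dtype=int):
        d = np.abs(np.diff(fN[i, margin:ny - margin]))
        j = margin + int(np.argmax(d))
        if d.max() > thresh:
            pts.append((x[i], 0.5 * (y[j] + y[j + 1])))
    return np.array(pts)
```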
Building the initial approximation $D_{\bar{b}_0}$
Searching along 50 horizontal lines (x-direction) for maximal x-direction differences, and along 50 vertical lines (y-direction) for maximal y-direction differences, we have found 72 such maximum points, which we denote by $P_0$. We display these points (in red) in Figure 13, on top of the curve $\Gamma^*$ (in blue). Now we use these points to construct the spline $D_{\bar{b}_0}$, whose zero level curve is taken as the initial approximation to $\Gamma^*$. To construct $D_{\bar{b}_0}$ we first overlay on $[0,1]^2$ a net of $11 \times 11$ points, $Q_0$. These are the green points displayed in Figure 14.
To each point in $Q_0$ we assign the value of its distance from the set $P_0$, with a plus sign for points which are on the right of or above $P_0$, and a minus sign for the other points. To each point in $P_0$ we assign the value zero. The spline function $D_{\bar{b}_0}$ is now defined by the least-squares approximation to the values at all the points of $P_0 \cup Q_0$. We have used here tensor product splines of order 10, on a uniform mesh with knots' distance h = 0.1. We denote the zero level curve of the resulting $D_{\bar{b}_0}$ by $\Gamma_0$; this curve is depicted in yellow in Figure 14. It seems that $\Gamma_0$ is already a good approximation to $\Gamma^*$ (in blue), and thus it is a good starting point for achieving the minimization target (21).
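A sketch of the least-squares construction of the initial level-set spline $D_{\bar{b}_0}$; `tensor_bspline_design` is a hypothetical helper that evaluates the tensor-product B-spline basis of $\Pi_2$ at the given points, and the signs for the net points are assumed to be supplied according to the rule described above (plus on one side of $P_0$, minus on the other).

```python
import numpy as np

def fit_level_set_spline(P0, Q0, Q0_signs, tensor_bspline_design):
    """Least-squares fit of the level-set spline D_{b_0}: value 0 at the edge
    points P0, signed distance to P0 at the coarse net Q0 (cf. Figure 14)."""
    # distance of each net point to the detected edge points, with supplied signs
    dist = np.min(np.linalg.norm(Q0[:, None, :] - P0[None, :, :], axis=2), axis=1)
    pts = np.vstack([P0, Q0])
    vals = np.concatenate([np.zeros(len(P0)), Q0_signs * dist])
    B = tensor_bspline_design(pts)       # (len(pts), N_h**2) collocation matrix
    b_coeffs, *_ = np.linalg.lstsq(B, vals, rcond=None)
    return b_coeffs                      # coefficients b_ij of D_{b_0}, flattened
```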
Improving the approximation to $\Gamma^*$ and building the two approximants
Starting from $D_{\bar{b}_0}$ we use a quasi-Newton method for iterative improvement of the approximation to $\Gamma^*$. The expensive ingredient in the computation procedure is the need to recompute the Fourier coefficients of the B-splines for any new set of coefficients $\bar{b}$ of $D_{\bar{b}}$. We recall that we need $(2M+1)^2$ of these coefficients for each B-spline, and we have $2N_d^2$ B-splines. In the numerical example we have used M = 40 and $N_d = 19$. To illustrate the issue, we present in Figure 15 one of those B-splines whose support intersects the singularity curve. When the singularity curve is updated, the Fourier coefficients of this B-spline must be recalculated.
Remark 5.
Calculating the Fourier coefficients of the B-splines is the most costly step in the approximation procedure. For the univariate case the Fourier coefficients of the B-splines can be computed analytically. For a smooth d-variate function $f : [0,1]^d \to \mathbb{R}$, with no singularity within the unit cube $[0,1]^d$, piecewise Gauss quadrature may be used to compute the Fourier coefficients with high precision. The non-smooth multivariate case is more difficult, and more expensive. However, we noticed that using low precision approximations for the Fourier coefficients of the B-splines is acceptable. For example, in the above example, we have employed a simple numerical quadrature combined with the fast Fourier transform, and we obtained the Fourier coefficients with a relative error of ∼$10^{-5}$. Yet the resulting approximation error is small, $\|f - S\|_\infty < 5 \times 10^{-6}$, as seen in Figure 18.
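A sketch of the quadrature-plus-FFT evaluation mentioned in Remark 5: sample the chopped B-spline on a fine uniform grid, apply a 2D FFT, and keep the low frequencies; the samples act as a simple rectangle-rule quadrature. The helpers `bspline_2d` (the tensor-product B-spline) and `level_set` (the current $D_{\bar{b}}$) are assumed callables, and the grid size is an arbitrary choice.

```python
import numpy as np

def chopped_bspline_fourier(bspline_2d, level_set, positive_side, M, grid=512):
    """Rectangle-rule + FFT approximation of the Fourier coefficients of a
    tensor-product B-spline chopped off by the zero level set of `level_set`."""
    t = np.arange(grid) / grid                 # uniform samples of [0, 1)
    X, Y = np.meshgrid(t, t, indexing="ij")
    vals = bspline_2d(X, Y)
    mask = level_set(X, Y) >= 0 if positive_side else level_set(X, Y) < 0
    vals = np.where(mask, vals, 0.0)
    # DFT of the samples ~ integral of v(x,y) e^{-2 pi i (m x + n y)} dx dy
    coeffs = np.fft.fft2(vals) / grid**2
    idx = np.r_[0:M + 1, grid - M:grid]        # frequencies 0..M, then -M..-1
    return coeffs[np.ix_(idx, idx)]
```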
Using one quasi-Newton step we obtained new spline coefficients $\bar{b}_1$ and an improved approximation $\Gamma_1$ to $\Gamma^*$ as the zero level set of $D_{\bar{b}_1}$. Stopping the procedure at this point yields the approximation results shown in the figures below. Figure 16 shows the approximation error f − S on $[0,1]^2 \setminus U$, where U is a small neighborhood of $\Gamma^*$. Figure 17 shows, in green, $\log_{10}$ of the magnitude of the given Fourier coefficients $\hat{f}_{mn}$ and, in blue, $\log_{10}$ of the Fourier coefficients of the difference f − S. We observe a reduction of three orders of magnitude between the two.
Applying four quasi-Newton iterations took ∼24 min of execution time. The approximation of $\Gamma^*$ by the zero level set of $D_{\bar{b}_4}$ now has an error of $10^{-9}$. The resulting approximation error to f is reduced as shown in Figure 18, and the Fourier coefficients of the error are reduced by 5 orders of magnitude, as shown in Figure 19.

3.2.2. The 2D Approximation Procedure

Let us sum up the suggested approximation procedure:
(1)
Choose the approximation space $\Pi_1$ for approximating $f_1$ and $f_2$, and the approximation space $\Pi_2$ for approximating $\Gamma^*$.
(2)
Define the number of Fourier coefficients to be used for building the approximation such that
$(2M+1)^2 \ge 2\dim(\Pi_1) + \dim(\Pi_2).$
(3)
Find a first approximation to $\Gamma^*$:
(a)
Compute a partial Fourier sum and locate the maximal first-order differences along horizontal and vertical lines to find points $P_0$ near $\Gamma^*$, with assigned values 0.
(b)
Overlay a net of points $Q_0$ as in Figure 14, with assigned signed-distance values.
(c)
Compute the least-squares approximation from $\Pi_2$ to the values at $P_0 \cup Q_0$; denote it $D_{\bar{b}_0}$.
(4)
Calculate the first $(2M+1)^2$ Fourier coefficients of the basis functions of $\Pi_1$, truncated with respect to the zero level curve of $D_{\bar{b}_0}$.
(5)
Use the above Fourier coefficients to compute the approximations to $f_1$ and $f_2$ by solving the system of linear equations defined by (22) and (23).
(6)
Update $D_{\bar{b}}$ to improve the approximation to $\Gamma^*$, by performing quasi-Newton iterations to reduce the objective function in (21).
(7)
Go back to step (4) to update the approximation.

3.2.3. Lower Order Singularities

Let us assume that $f(x,y)$ is a continuous function, and that $f_x(x,y)$ is discontinuous across the singularity curve $\Gamma^*$. In this case we cannot use the Gibbs phenomenon to approximate the singularity curve. However, the Fourier coefficients
$\hat{g}_{mn} = i\, m\, \hat{f}_{mn}$
represent a function g that has a discontinuity across $\Gamma^*$, and the above procedure for approximating $\Gamma^*$ can be applied.
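In code, this preprocessing is a one-line rescaling of the given coefficient array (stored here with the frequency index m along the first axis); any constant factor, e.g., a $2\pi$ coming from a different Fourier convention, does not affect the location of the discontinuity:

```python
import numpy as np

def x_derivative_coefficients(f_hat, freqs_m):
    """g_hat_{mn} = i * m * f_hat_{mn}: Fourier data of a function with a jump
    across the same curve as f_x, usable for the edge-detection step above."""
    return 1j * freqs_m[:, None] * f_hat
```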

3.3. Error Analysis

We consider the non-smooth bivariate case, where f is a combination of two smooth parts, $f_1$ on $\Omega_1$ and $f_2$ on $\Omega_2$, separated by a smooth curve $\Gamma^*$. Throughout the paper we approximate f using spline functions; in this section we consider approximations by general approximation spaces. Let $\Pi_1$ be the approximation space for approximating the smooth pieces constituting f, and let $\Pi_2$ be the approximation space used for approximating the singularity curve. The following assumptions characterize and quantify the properties of the function f and its singularity curve $\Gamma^*$ in terms of the ability to approximate them using the approximation spaces $\Pi_1, \Pi_2$.
Assumption 1.
We assume that $\Pi_1$ and $\Pi_2$ are finite dimensional spaces of dimensions $N_1$ and $N_2$, respectively.
Assumption 2.
We assume that $f_1$ and $f_2$ are smoothly extendable to $[0,1]^2$ and that $\mathrm{dist}_{[0,1]^2}(f_1, \Pi_1) \le \epsilon$ and $\mathrm{dist}_{[0,1]^2}(f_2, \Pi_1) \le \epsilon$.
Assumption 3.
For $p \in \Pi_2$, let us denote by $\Gamma_0(p)$ the zero level curve of p within $[0,1]^2$. We assume there exists $p \in \Pi_2$ such that
$d_H(\Gamma^*, \Gamma_0(p)) \le \delta,$
where $d_H$ denotes the Hausdorff distance.
We look for an approximation S to f which is a combination of two components, $p_1 \in \Pi_1$ in $\tilde{\Omega}_1$ and $p_2 \in \Pi_1$ in $\tilde{\Omega}_2$, separated by $\Gamma_0(p)$, $p \in \Pi_2$, such that $(2M+1)^2$ Fourier coefficients of f and S are matched in the least-squares sense:
$F(p_1, p_2, p) = \sum_{m,n=-M}^{M} |\hat{f}_{mn} - \hat{S}_{mn}|^2 \;\longrightarrow\; \mathrm{minimum}.$  (27)
Assumption 4.
Consider the above function S constructed from a triple $(p_1, p_2, \Gamma_0(p))$, $p_1, p_2 \in \Pi_1$, $p \in \Pi_2$. We assume that there is a Lipschitz continuous inverse mapping from the $(2M+1)^2$ Fourier coefficients of S to the triple $(p_1, p_2, \Gamma_0(p))$:
$\{\hat{S}_{mn}\}_{m,n=-M}^{M} \;\longrightarrow\; (p_1, p_2, \Gamma_0(p)).$  (28)
Remark 6.
To enable the above property we choose M so that
$(2M+1)^2 > 2N_1 + N_2.$  (29)
The topology in the space of triples $(p_1, p_2, \Gamma_0(p))$ is defined in terms of the maximum norm for the first two components and the Hausdorff distance for the third component.
Proposition 1.
Let $f_1, f_2, \Gamma^*, \Pi_1$ and $\Pi_2$ satisfy Assumptions 1, 2, 3 and 4. Then the triple $(p_1^*, p_2^*, p^*)$ minimizing (27) provides the following approximation bounds:
$\|f_1 - p_1^*\|_{\infty, \Omega_1^*} \le C_1 M \epsilon + C_2 M \delta,$  (30)
$\|f_2 - p_2^*\|_{\infty, \Omega_2^*} \le C_1 M \epsilon + C_2 M \delta,$  (31)
and
$d_H(\Gamma^*, \Gamma_0(p^*)) \le C_3 M \epsilon + C_4 M \delta,$  (32)
where $\Omega_1^*$ and $\Omega_2^*$ are the subdomains separated by $\Gamma_0(p^*)$.
Proof. 
By Assumptions 2 and 3 it follows that there exists an approximation S, defined as above by a triple $(\bar{p}_1, \bar{p}_2, \bar{p})$, such that
$\|f_1 - \bar{p}_1\|_{\infty, [0,1]^2} \le \epsilon,$  (33)
$\|f_2 - \bar{p}_2\|_{\infty, [0,1]^2} \le \epsilon,$  (34)
and
$d_H(\Gamma^*, \Gamma_0(\bar{p})) \le \delta.$  (35)
Building the approximation $\bar{S}$ to f from the triple $(\bar{p}_1, \bar{p}_2, \bar{p})$ as above, we can estimate its Fourier coefficients using the above bounds, and it follows that
$|\hat{f}_{mn} - \hat{\bar{S}}_{mn}| \le \epsilon + L\delta, \qquad -M \le m, n \le M,$  (36)
where the term $L\delta$ accounts for the strip of width $O(\delta)$ around $\Gamma^*$ in which f and $\bar{S}$ may be taken from different smooth pieces, L depending on the bounds of $f_1, f_2, \bar{p}_1, \bar{p}_2$. Therefore,
$\min\{F(p_1, p_2, p)\} \le (2M+1)^2 (\epsilon + L\delta)^2.$  (37)
Let
$(p_1^*, p_2^*, p^*) = \arg\min \sum_{m,n=-M}^{M} |\hat{f}_{mn} - \hat{S}_{mn}|^2.$  (38)
The approximation $S^*$ to f is the combination of the two components, $p_1^*$ in $\Omega_1^*$ and $p_2^*$ in $\Omega_2^*$, where $\Omega_1^*$ and $\Omega_2^*$ are separated by $\Gamma_0(p^*)$.
Using the bound in (37) it follows that
$|\hat{f}_{mn} - \hat{S}^*_{mn}| \le (2M+1)(\epsilon + L\delta), \qquad -M \le m, n \le M.$  (39)
In view of (36) and (39) it follows that
$|\hat{\bar{S}}_{mn} - \hat{S}^*_{mn}| \le (2M+2)(\epsilon + L\delta), \qquad -M \le m, n \le M.$  (40)
Using Assumption 4, the bound (40) implies
$\|p_1^* - \bar{p}_1\|_{\infty, \Omega_1^*} \le C (2M+2)(\epsilon + L\delta),$  (41)
$\|p_2^* - \bar{p}_2\|_{\infty, \Omega_2^*} \le C (2M+2)(\epsilon + L\delta),$  (42)
and
$d_H(\Gamma_0(p^*), \Gamma_0(\bar{p})) \le C (2M+2)(\epsilon + L\delta).$  (43)
The approximation result now follows by combining the inequalities (41)–(43) with the inequalities (33)–(35) and applying the triangle inequality. □

Validity of the Approximation Assumptions

Let us check the validity of Assumptions 1, 2, 3 and 4 for the approximation tools suggested in Section 3.2 and used in the above numerical tests.
We assume that $f_1, f_2 \in C^m([0,1]^2)$, and that $\Gamma^*$ is a $C^m$ curve. To construct the approximations to $f_1$ and $f_2$ we use the space $\Pi_1$ of $k$th degree tensor-product splines with equidistant knots' distance d in each direction, $k \le m$. The approximation to $\Gamma^*$ is obtained using the approximation space $\Pi_2$ of $\ell$th degree tensor-product splines with equidistant knots' distance h in each direction, $\ell \le m$. Here $\dim(\Pi_1) = (1/d + k - 1)^2 \equiv N_d^2$ and $\dim(\Pi_2) = (1/h + \ell - 1)^2 \equiv N_h^2$, and for both spaces we use the B-spline basis functions. Assumptions 2 and 3 are fulfilled with $\epsilon = C_1 d^k$ and $\delta = C_2 h^{\ell}$.
Assumption 4 is more challenging. To define the mapping
$\{\hat{S}_{mn}\}_{m,n=-M}^{M} \;\longrightarrow\; (p_1, p_2, \Gamma_0(p)),$
we use the same procedure as in Section 3.2.2 for defining the approximation to f:
We represent p and $p_1, p_2$ using the B-spline basis functions, as in (18) and in (19) and (20), respectively. Each triple $(p_1, p_2, p)$ defines a piecewise spline approximation $T(x,y)$, and we look for the approximation $T(x,y)$ such that $(2M+1)^2$ Fourier coefficients of T match the Fourier coefficients $\{\hat{S}_{mn}\}_{m,n=-M}^{M}$ in the least-squares sense:
$\left(\{a_{1ij}\}_{i,j=1}^{N_d}, \{a_{2ij}\}_{i,j=1}^{N_d}, \{b_{ij}\}_{i,j=1}^{N_h}\right) = \arg\min \sum_{m,n=-M}^{M} |\hat{S}_{mn} - \hat{T}_{mn}|^2.$
Out of all the possible solutions of the above problem we look for the one with minimal coefficients' norm, i.e., the one minimizing
$\sum_{i,j=1}^{N_d} a_{1ij}^2 + \sum_{i,j=1}^{N_d} a_{2ij}^2.$
Following the procedure of Section 3.2.2, we observe that every step in the procedure is smooth with respect to its input. Possible non-uniqueness in solving the linear system of equations in step (5) is resolved by using the generalized inverse. Therefore, the composition of all the steps is also a smooth function of the input, which implies the validity of Assumption 4.

4. The 3D Case

Numerical Example—The Smooth 3D Case

We consider the test function
$f(x,y,z) = (x^2 + y^2 + z^2 - 0.5)\,\sin(4(x + y - z)),$
assuming only its Fourier coefficients are given. We have used only $10^3$ Fourier coefficients and constructed an approximation using 5th-degree tensor product splines with equidistant knots' distance d = 0.1 in each direction. For this case, the matrix A is of size $15^3 \times 15^3$, and $\mathrm{cond}(A) = 1.2 \times 10^{22}$. Again, we have employed the iterative refinement algorithm to obtain a high precision solution. The test function is shown in Figure 20. The error in the resulting approximation is displayed in Figure 21.

5. Concluding Remarks

The crucial assumption behind the presented Fourier acceleration strategy is that the underlying function is piecewise ‘nice’; that is, on each piece the function can be well approximated by a suitable finite set of basis functions. The Fourier series of the function may be given to us as a result of the computational method dictated by the structure of the mathematical problem at hand. In itself, the Fourier series may not be the best tool for approximating the desired solution, and yet it contains all the information about the requested function. Utilizing this information, we can derive high accuracy piecewise approximations to that function. The simple idea is to make the approximation match the coefficients of the given Fourier series. The suggested method is simple to implement for the approximation of smooth non-periodic functions in any dimension. The case of non-smooth functions is more challenging, and a special strategy is suggested and demonstrated for the univariate and bivariate cases. The paper contains a descriptive graphical presentation of the approximation procedure, together with a fundamental error analysis.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Gottlieb, D.; Shu, C.W. On the Gibbs phenomenon and its resolution. SIAM Rev. 1997, 39, 644–668.
  2. Tadmor, E. Filters, mollifiers and the computation of the Gibbs phenomenon. Acta Numer. 2007, 16, 305–378.
  3. Gelb, A.; Tanner, J. Robust reprojection methods for the resolution of the Gibbs phenomenon. Appl. Comput. Harmon. Anal. 2006, 20, 3–25.
  4. Eckhoff, K.S. Accurate and efficient reconstruction of discontinuous functions from truncated series expansions. Math. Comput. 1993, 61, 745–763.
  5. Eckhoff, K.S. Accurate reconstructions of functions of finite regularity from truncated Fourier series expansions. Math. Comput. 1995, 64, 671–690.
  6. Eckhoff, K.S. On a high order numerical method for functions with singularities. Math. Comput. 1998, 67, 1063–1087.
  7. Batenkov, D. Complete algebraic reconstruction of piecewise-smooth functions from Fourier data. Math. Comput. 2015, 84, 2329–2350.
  8. Nersessian, A.; Poghosyan, A. On a rational linear approximation of Fourier series for smooth functions. J. Sci. Comput. 2006, 26, 111–125.
  9. Levin, D.; Sidi, A. Extrapolation methods for infinite multiple series and integrals. J. Comput. Methods Sci. Eng. 2001, 1, 167–184.
  10. Sidi, A. Acceleration of convergence of (generalized) Fourier series by the d-transformation. Ann. Numer. Math. 1995, 2, 381–406.
  11. Wilkinson, J.H. Rounding Errors in Algebraic Processes; Prentice-Hall: Englewood Cliffs, NJ, USA, 1963.
  12. Moler, C.B. Iterative refinement in floating point. J. ACM 1967, 14, 316–321.
Figure 1. The B-splines used in Example 1.
Figure 2. The test function (left) and the spline approximation error (right).
Figure 3. $\log_{10}$ of the given Fourier coefficients (blue), and of the Fourier coefficients of the approximation error (red).
Figure 4. A partial Fourier sum (left) and its first differences (right).
Figure 5. The graph of the error $\|\hat{f} - \hat{S}\|_\infty$ as a function of s near $s^* = 0.5$.
Figure 6. The approximation error for the 1D non-smooth case.
Figure 7. $\log_{10}$ of the given Fourier coefficients (blue), and of the Fourier coefficients of the approximation error (red).
Figure 8. The test function for the smooth 2D case.
Figure 9. The approximation error $f - S_{0.1}^{[10]}$.
Figure 10. $\log_{10}$ of the Fourier coefficients before (green) and after (blue).
Figure 11. The test function for the 2D non-smooth case.
Figure 12. First order x-direction differences of a truncated Fourier sum; notice the relatively high values at the boundary and near the singularity curve.
Figure 13. The singularity curve $\Gamma^*$ (blue) and points of maximal first differences of $f_{50}$.
Figure 14. The net of points $Q_0$ (green), the initial approximation $\Gamma_0$ (yellow), and the singularity curve $\Gamma^*$ (blue).
Figure 15. One of the tensor product B-splines used for the approximation of f, chopped off by the singularity curve.
Figure 16. The approximation error with $D_{\bar{b}_1}$.
Figure 17. The magnitude reduction of the Fourier coefficients with $D_{\bar{b}_1}$.
Figure 18. The approximation error with $D_{\bar{b}_4}$.
Figure 19. The magnitude reduction of the Fourier coefficients with $D_{\bar{b}_4}$.
Figure 20. The 3D test function reshaped into 2D.
Figure 21. The approximation error graph, reshaped into 2D.
