Article

Constructing Approximations to Bivariate Piecewise-Smooth Functions

School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv 6997801, Israel
Axioms 2024, 13(7), 428; https://doi.org/10.3390/axioms13070428
Submission received: 27 May 2024 / Revised: 21 June 2024 / Accepted: 22 June 2024 / Published: 26 June 2024

Abstract
This paper demonstrates that the space of piecewise-smooth bivariate functions can be well approximated by the space of functions defined by a set of simple (non-linear) operations on smooth uniform tensor-product splines. The examples include bivariate functions with jump discontinuities or normals' discontinuities across curves, and even across more involved geometries, such as a three-corner discontinuity. The provided data may be uniform or non-uniform and may be noisy, and the approximation procedure involves non-linear least-squares minimization. Also included is a basic approximation theorem for functions with a jump discontinuity across a smooth curve.

1. Introduction

High-quality approximations of piecewise-smooth functions from a discrete set of function values present a challenging problem, with applications in image processing and geometric modeling. The univariate problem has been studied by several research groups, and satisfactory solutions can be found in the works of Harten [1], Arandiga et al. [2], Archibald et al. [3,4], and Lipman et al. [5]. However, the 2D problem is still far from being solved, and the 1D methods are not easily adapted to the real 2D case. Furthermore, even the 1D problem is not easily solved in the presence of noisy data. In the 1D problem, we are provided values of a piecewise-smooth function, with or without noise, and the challenge is to approximate the location of the ’singular points’ that separate one smooth part of the function from the other and also to reconstruct the smooth parts. In the 2D case, a piecewise-smooth function on a domain D is defined by a partition of the domain into segments separated by boundary curves (smooth or non-smooth), and the function is smooth in the interior of each segment. By the term smooth, we mean that the derivatives (up to a certain order) of the function are bounded. Of course, the function and/or its derivatives may be discontinuous across a boundary curve between segments. Given the data acquired from such an underlying piecewise-smooth function, the challenge here is to approximate the separating curves (the singularity curves) and to reconstruct the smooth parts. Note that, apart from noise in the function values, there may also be ’noise’ in the location of the separating curves (as demonstrated in Section 3.2). The problem of approximating piecewise-smooth functions is a model problem for image processing algorithms, and some sophisticated classes of wavelets and frames have been designed to approximate such functions. For example, see Candes and Donoho [6]. A method for the approximation of piecewise-smooth functions would also be useful for the reconstruction of surfaces in CAGD or in 3D computer graphics, e.g., via the moving least-squares framework.
It is well-established now that only non-linear methods may achieve optimal approximation results in ’non-smooth’ spaces, e.g., see Binev et al. [7]. In this paper, we are going back to using the ’good old’ splines with uniform knots as our basis functions for the approximation, but we add to the game some (simple) non-linear operations on the space of splines. All the non-linearity used here can be expressed by the sign operation. We remark that the choice of the spline basis is not essential here, and other basis functions may be utilized within the same framework.
We present the idea and the proposed approximation algorithms through a series of illustrative examples. Building from the derivative discontinuity in the univariate case, we move on to normal discontinuities and jump discontinuities across curves in the bivariate case, with some non-trivial topologies of the singularity curves. We shall also present a basic approximation result for the case of a jump discontinuity across a smooth curve. Altogether, we present a simple yet powerful approach to piecewise-smooth approximation. The suggested method seems to be quite robust to noisy data, and even the univariate version is interesting in this respect. Open issues, such as the development of efficient algorithms and further approximation analysis, are left for future research.

2. Non-Smooth Univariate Approximations

To demonstrate the main idea, we start with the univariate problem: assume we know that our underlying univariate function f is continuous, $f \in C[a,b]$, that it has one point of discontinuity of its first derivative at $s \in [a,b]$, and that $f'(s^-) > f'(s^+)$. Then, it makes sense to look for two smooth functions, $g^{[\ell]}$ and $g^{[r]}$, where $g^{[\ell]}$ approximates f on the left segment $[a,s]$ and $g^{[r]}$ approximates f on the right segment $[s,b]$, such that
$$f(x) = \min\{ g^{[\ell]}(x),\, g^{[r]}(x) \}, \qquad x \in [a,b]. \tag{1}$$
The function $g^{[\ell]}$ may be viewed as a smooth extension of $f|_{[a,s]}$ to the whole interval $[a,b]$, and $g^{[r]}$ as a smooth extension of $f|_{[s,b]}$ to $[a,b]$. There are many pairs of smooth functions $g^{[\ell]}$ and $g^{[r]}$ that satisfy the above relation. Therefore, one may suspect that the problem of finding such a pair is ill-conditioned. Let us check this by trying a specific algorithm for solving this problem and testing it on a few examples. It becomes clear from these examples that the approximations by $g^{[r]}$ and $g^{[\ell]}$ are well-defined in the relevant intervals, i.e., $g^{[r]}$ in $(s,b]$ and $g^{[\ell]}$ in $[a,s)$. To approximate the functions $g^{[r]}$ and $g^{[\ell]}$, we use cubic spline basis functions with equidistant knots $t_j = a + (j-1)\delta$, $j = 1, \ldots, k$, $\delta = (b-a)/(k-1)$.
Assuming we are provided data $\{f(x_i)\}_{x_i \in X \subset [a,b]}$, we look for $g^{[r]}$ and $g^{[\ell]}$ such that
$$F_1(p) = \sum_{x_i \in X} \Big( f(x_i) - \min\{ g^{[\ell]}(x_i),\, g^{[r]}(x_i) \} \Big)^2 \longrightarrow \text{minimum}. \tag{2}$$
Here, $p$ stands for the set of parameters used in the representation of the unknown functions $g^{[r]}$ and $g^{[\ell]}$. We use the convenient representation
$$g^{[r]}(x) = \sum_{j=1}^{k} \alpha_j B_j(x), \qquad g^{[\ell]}(x) = \sum_{j=1}^{k} \beta_j B_j(x). \tag{3}$$
Here, $\{B_j\}_{j=1}^k$ are, for example, the basis functions for cubic spline interpolation with the not-a-knot end conditions, satisfying $B_j(t_i) = \delta_{i,j}$. Hence, in (2), $p$ stands for the unknown spline coefficients, $p = \big(\{\alpha_j\}_{j=1}^k, \{\beta_j\}_{j=1}^k\big)$.
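For readers who want to reproduce this construction, the cardinal basis satisfying $B_j(t_i) = \delta_{i,j}$ can be generated numerically by interpolating the unit vectors. Below is a minimal Python sketch using SciPy's CubicSpline, whose default end condition is not-a-knot; this is an illustrative stand-in, since the paper does not specify the code used for the basis.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def cardinal_cubic_basis(a, b, k):
    """Equidistant knots t_1..t_k on [a, b] and a function evaluating the
    k cardinal cubic splines B_j (not-a-knot ends): B_j interpolates the
    j-th unit vector, so that B_j(t_i) = delta_ij."""
    t = np.linspace(a, b, k)
    splines = [CubicSpline(t, np.eye(k)[j]) for j in range(k)]  # not-a-knot is the default
    def B(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return np.stack([s(x) for s in splines], axis=-1)       # shape (len(x), k)
    return t, B

# Example: k = 5 knots on [-3, 3] gives the knot spacing 1.5 used below.
t, B = cardinal_cubic_basis(-3.0, 3.0, 5)
assert np.allclose(B(t), np.eye(5), atol=1e-12)
```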
In Figure 1 and Figure 2, we see the results of reconstructing piecewise-smooth functions from exact data and noisy data. In both cases, $X = \{-3 : 0.02 : 3\}$ and $t_j = \{-3 : 1.5 : 3\}$. The solution of the optimization problem (2) is depicted by a bold line. The underlying function f is generated as $f(x) = \min\{ f^{[\ell]}(x), f^{[r]}(x) \}$, and the graphs of these two generating functions are depicted by dashed lines. The fine continuous lines in the figures represent the functions $g^{[\ell]}$ and $g^{[r]}$, which, as we see in those graphs, approximate $f^{[\ell]}$ and $f^{[r]}$, respectively, and the approximation is good only in the appropriate regions. Here, $k = 5$, and thus we have 10 unknown parameters to solve for. The optimization has been performed using a differential evolution procedure, using the data values at the points $t_j$ as the starting points of the iterations for both $\alpha_j$ and $\beta_j$.
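The following Python sketch mirrors this univariate experiment, minimizing (2) with SciPy's differential_evolution in place of the Matlab differential evolution code used in the paper. The generating pair $f^{[\ell]}, f^{[r]}$ and the noise level are hypothetical, chosen only so that $\min\{f^{[\ell]}, f^{[r]}\}$ has a single kink with $f'(s^-) > f'(s^+)$ in $[-3,3]$; box bounds centred at the knot data values play the role of the paper's starting points.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import differential_evolution

a, b, k = -3.0, 3.0, 5
t = np.linspace(a, b, k)                      # knots {-3 : 1.5 : 3}
X = np.arange(a, b + 1e-9, 0.02)              # data sites {-3 : 0.02 : 3}

# Hypothetical generating pair (the paper's test functions are not given).
f_left = lambda x: 1.0 - 0.3 * x**2
f_right = lambda x: 0.5 * x - 0.5
rng = np.random.default_rng(1)
fX = np.minimum(f_left(X), f_right(X)) + 0.05 * rng.standard_normal(X.size)

def F1(p):
    """Cost functional (2): fit min(g^[l], g^[r]) to the data."""
    gl = CubicSpline(t, p[:k])(X)             # g^[l] from its values at the knots
    gr = CubicSpline(t, p[k:])(X)             # g^[r] from its values at the knots
    return np.sum((fX - np.minimum(gl, gr)) ** 2)

v = np.interp(t, X, fX)                       # data values at the knots
bounds = [(vi - 3.0, vi + 3.0) for vi in np.tile(v, 2)]
result = differential_evolution(F1, bounds, seed=1, maxiter=300)
alpha, beta = result.x[:k], result.x[k:]      # coefficients of the two splines
```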
Remark 1.
An alternative representation of f in (1) is
$$f(x) = g^{[\ell]}(x) - \big( g^{[\ell]}(x) - g^{[r]}(x) \big)_+, \qquad x \in [a,b], \tag{4}$$
where
$$(t)_+ = \begin{cases} t, & t \ge 0, \\ 0, & t < 0. \end{cases} \tag{5}$$
Hence, we can replace the cost functional (2) by
$$F_2(p) = \sum_{x_i \in X} \Big( f(x_i) - g_1(x_i) + \big( g_2(x_i) \big)_+ \Big)^2 \longrightarrow \text{minimum}. \tag{6}$$
Here, $p$ stands for the set of parameters in the representation of the unknown spline functions $g_1$ and $g_2$, with the advantage that, here, only one unknown spline function, $g_2$, influences the functional in a non-linear manner. We shall further discuss such semi-linear cases in the bivariate setting.
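A variable-projection reduction of (6), not spelled out in the paper but implied by this remark, eliminates $g_1$: for a fixed $g_2$, the optimal $g_1$ solves a linear least-squares problem, so the outer non-linear search runs over the coefficients of $g_2$ only. A sketch, with $B_X$ the collocation matrix $B_j(x_i)$:

```python
import numpy as np

def F2_reduced(beta, B_X, fX):
    """Semi-linear cost (6) with g_1 eliminated: since the model is
    g_1 - (g_2)_+, the optimal g_1 for fixed g_2 = B @ beta is the
    linear least-squares fit of f + (g_2)_+ in the spline space."""
    g2_plus = np.maximum(B_X @ beta, 0.0)          # (g_2(x_i))_+
    target = fX + g2_plus
    alpha, *_ = np.linalg.lstsq(B_X, target, rcond=None)
    r = target - B_X @ alpha
    return float(r @ r)
```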

The Case $f'(s^-) < f'(s^+)$ and More

Obviously, in this case, we should replace the min operation in (2) by a max operation. If there are two break points $s_1$ and $s_2$ in $[a,b]$, e.g., with $f'(s_1^-) < f'(s_1^+)$ and $f'(s_2^-) > f'(s_2^+)$, then we may look for three unknown spline functions, $g_1, g_2, g_3$, such that $\min\{ g_1, \max\{ g_2, g_3 \} \}$ approximates the data in the least-squares sense, and so on (see the short sketch below). To avoid high complexity, we suggest subdividing $[a,b]$ into partially overlapping intervals, each containing at most one break point, and blending the individual local approximations into a global one over $[a,b]$. We shall further discuss and demonstrate this strategy in the 2D case.
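As a small illustration (under the stated sign pattern, which is an assumption of this sketch), the composite model can be evaluated as follows:

```python
import numpy as np

def two_break_model(x, g1, g2, g3):
    """Composite model min(g1, max(g2, g3)) for two break points, with
    g1, g2, g3 callables (e.g., the splines from the sketches above)."""
    return np.minimum(g1(x), np.maximum(g2(x), g3(x)))
```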
The problem of approximating piecewise-smooth univariate data has been investigated by many authors. A prominent approach involves the essentially non-oscillatory (ENO) and subcell resolution (SR) schemes introduced by Harten [1]. The ENO scheme constructs a piecewise-polynomial interpolant on a uniform grid that, loosely speaking, uses the smoothest consecutive data points in the vicinity of each data cell. The SR technique approximates the singularity location by intersecting two polynomials, each coming from a different side of the suspected singularity cell. In the spirit of ENO-SR, many interesting works have built on this simple but powerful idea. Recently, Arandiga et al. [2] provided a rigorous treatment of a variation of the technique, proving the expected approximation power on piecewise-smooth data. Archibald et al. [3,4] further improved the ENO idea by introducing polynomial annihilation techniques for locating the cell that contains the singularity. A recent paper by Lipman et al. [5] used quasi-interpolation operators for this problem. Yet, the extension of the univariate methods to the 2D case is neither obvious nor simple. In [2], after locating an interval of possible singularity using ENO [1], two polynomial approximations were defined, each approximating the data on one side of the singularity, and their intersection was used to approximate the singularity location. The method suggested here is similar, since we also look for two different approximations related to the two sides of a singularity. However, the least-squares optimization approach enables a natural extension to interesting bivariate problems, and since the singularity localization is integrated within the approximation procedure, it is less sensitive to noise. In the next section, we demonstrate that the simple idea presented in Section 2 has the potential to solve some non-trivial bivariate approximation problems.

3. Non-Smooth Bivariate Approximations

As demonstrated in the 1D case, the non-linear space of functions defined by uniform splines, together with the simple operations min and max, may be used to approximate univariate piecewise-smooth continuous functions. In the bivariate case, we consider functions with derivative discontinuities or jump discontinuities across curves. The objectives of this section are fourfold:
  • To exhibit a range of piecewise-smooth bivariate functions that can be represented by simple non-linear operations (such as min and max) on smooth functions.
  • To suggest some non-linear least-squares approximation procedures for the approximation of piecewise-smooth bivariate functions.
  • To present interesting examples of approximating piecewise-smooth bivariate functions given noisy data.
  • To provide a basic approximation result.

3.1. Normals’ Discontinuity across Curves—Problem A

We start with a numerical demonstration of a direct extension of the univariate approach to the approximation of continuous piecewise-smooth bivariate functions. Recalling the 1D discussion, the choice of a min or a max operation depends on the sign of $f'(s^+) - f'(s^-)$. In the 2D case, we refer to an analogous condition involving the slopes of the graph along the singularity curves. A discontinuity (singularity) of the normals of a bivariate function f is said to be convex along a curve $\gamma$ if the exterior angle of the graph of f at every point along the curve is $< \pi$ (e.g., see Figure 3), and it is said to be concave if the exterior angles are $> \pi$. In a neighborhood of a concave singularity (discontinuity) curve, the function may be described as the minimum of two (or more) smooth functions, and near a convex singularity curve the function may be defined as the maximum of two or more smooth functions. Let us consider noisy data $\{f(x_i)\}_{x_i \in X}$ taken from a function with convex singularities. For the numerical experiment, we take X as the set of data points on a square grid of mesh size $h = 0.125$ in $D = [-2,2] \times [-2,5]$, and the provided noisy data are shown in Figure 4. In this case, the function has a '3-corner'-type singularity, where f has a convex singularity along three curves meeting at a point. Therefore, we look for three spline functions, $g_1, g_2, g_3$, so that
$$f(x) \approx \max\{ g_1(x), g_2(x), g_3(x) \}, \tag{7}$$
where g 1 , g 2 , g 3 solve the non-linear least-squares problem:
$$F_A(p) = \sum_{x_i \in X} \Big( f(x_i) - \max\{ g_1(x_i), g_2(x_i), g_3(x_i) \} \Big)^2 \longrightarrow \text{minimum}. \tag{8}$$
Within this example, we would also like to show how to blend two non-smooth approximations. Therefore, we consider the approximation problem on two partially overlapping sub-domains of D, $D_1 = [-2,2] \times [-2,2] \subset D$ and $D_2 = [-2,2] \times [-1,5] \subset D$. After solving the approximation problem separately on each sub-domain, the two approximations are blended into a global one. On each sub-domain, the unknown functions $\{g_i\}_{i=1}^3$ are chosen to be cubic spline functions with a square grid of knots of grid size $\delta = 2$. Here again, the triplet of functions $g_1, g_2, g_3$ that solves the minimization problem (8) is not unique. However, it turns out that the approximation to f is well-defined by (8); that is, the parts of $\{g_i\}_{i=1}^3$ that are relevant to $\max\{ g_1, g_2, g_3 \}$ are well-defined.
Let us first consider the approximation on the sub-domain $D_1 = [-2,2] \times [-2,2]$. For the particular data shown in the left plot of Figure 5, the solution of (8) yields the piecewise-smooth approximation depicted in the right plot. In this plot, we see the full graphs of the three functions $\{g_i\}_{i=1}^3$ (for this sub-domain), while the approximation is only the upper part (the maximal values) of these graphs. The solution to the optimization problem (8) has been found using a differential evolution procedure [8]. As an initial guess for the three unknown functions, we take, as in the univariate case, the spline function that approximates the data over the whole domain $D_1$. Next, we look for the approximation on $D_2 = [-2,2] \times [-1,5]$, which partially overlaps $D_1$. The relevant data and the resulting approximation are shown in Figure 6.
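A Python sketch of the Problem A fit (8) on $D_1$ follows. It stands in for the paper's setup with two simplifications, both assumptions of the sketch: an interpolatory bi-cubic (SciPy's RectBivariateSpline) needs at least four knots per axis, so a 4 x 4 knot grid replaces the paper's 3 x 3 grid, and the data grid and test function (the max of three noisy planes) are hypothetical.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import differential_evolution

k = 4
t = np.linspace(-2.0, 2.0, k)                 # knots per axis on D1
xs = ys = np.arange(-2.0, 2.0 + 1e-9, 0.25)   # coarsened data grid

def g_eval(c):
    # Bi-cubic tensor-product spline through the k*k knot values c.
    return RectBivariateSpline(t, t, c.reshape(k, k))(xs, ys)

def FA(p, fdata):
    """Cost (8): residual of max(g1, g2, g3) on the data grid."""
    g = [g_eval(p[i * k * k:(i + 1) * k * k]) for i in range(3)]
    return float(np.sum((fdata - np.maximum(g[0], np.maximum(g[1], g[2]))) ** 2))

# Hypothetical 3-corner data: max of three planes meeting at the origin.
Xg, Yg = np.meshgrid(xs, ys, indexing='ij')
rng = np.random.default_rng(2)
fdata = np.maximum(Xg, np.maximum(-0.5 * Xg + Yg, -0.5 * Xg - Yg)) \
        + 0.05 * rng.standard_normal(Xg.shape)

bounds = [(fdata.min() - 1.0, fdata.max() + 1.0)] * (3 * k * k)  # 48 unknowns
res = differential_evolution(FA, bounds, args=(fdata,), seed=2, maxiter=200)
```

As in the paper, seeding the search near a single global spline fit of the data should speed up convergence considerably.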
To achieve an approximation over the whole domain $D = [-2,2] \times [-2,5]$, we now explain how to blend the two approximations defined on $D_1$ and $D_2$. The singularity curves of the two approximations do not necessarily coincide on $D_1 \cap D_2$. Therefore, directly blending the two approximations would not provide a smooth transition of the singularity curve. The appropriate blending should instead be performed between the corresponding spline functions generating these singularity curves. On each sub-domain, the approximation is defined by a different triplet of splines $\{g_i\}_{i=1}^3$. For the approximation over $D_2$, only two of the splines are active in the final max operation, and the graph of the third spline lies below the maximum of the other two. To prepare for the blending step, we have to match appropriate pairs from the two triplets, which can easily be done by proximity over the blending zone $D_1 \cap D_2$. The final approximation over D is defined by $\max\{ \tilde{g}_1, \tilde{g}_2, \tilde{g}_3 \}$, where $\{\tilde{g}_i\}_{i=1}^3$ are defined by blending the appropriate pairs using the simplest $C^1$ blending function. The resulting blended approximation over D, to the data provided in Figure 4, is displayed in Figure 3.
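A minimal sketch of the blending step, assuming the "simplest $C^1$ blending function" is the cubic smoothstep weight (the paper does not write it out); here the overlap strip is $D_1 \cap D_2 = [-2,2] \times [-1,2]$, so the blend runs in the y-direction:

```python
import numpy as np

def blend_pair(g1_vals, g2_vals, y, y0=-1.0, y1=2.0):
    """C^1 blend of one matched pair of splines across the overlap strip
    y in [y0, y1].  w = 3t^2 - 2t^3 satisfies w(0)=0, w(1)=1 and
    w'(0)=w'(1)=0, so the blend is C^1."""
    tt = np.clip((y - y0) / (y1 - y0), 0.0, 1.0)
    w = 3.0 * tt**2 - 2.0 * tt**3
    return (1.0 - w) * g1_vals + w * g2_vals   # g1 rules on D1, g2 on D2

# The global approximation is then the max over the three blended splines.
```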

3.2. Jump Discontinuity across a Curve—Problem B

Another interesting problem in bivariate approximation is the approximation of a function with a discontinuity across a curve. Consider the case of a function defined over a domain D, with a discontinuity across a (simple) curve $\gamma$ separating D into two sub-domains, $D^+$ and $D^-$. We assume that $f|_{D^+}$ and $f|_{D^-}$ are smooth on $D^+$ and $D^-$, respectively. Such problems, and especially the problem of approximating $\gamma$, appear in image segmentation. Efficient algorithms for constructing $\gamma$ that are useful even for more involved data are the method of snakes, or active contours, and the level-set method. The method of snakes, introduced in [9], iteratively finds contours that approach the contour $\gamma$ separating two distinctive regions in an image, with applications to shape modeling [10]. The level-set method, first suggested in [11], is also an iterative method for approximating $\gamma$, using a variational formulation for minimizing appropriate energy functionals. More recent algorithms for the approximation of piecewise-smooth functions in two and three dimensions have been introduced in [12], using data on a grid, and in [13], for scattered data. A variational spline level-set approach has been suggested in [14]. Here, the focus is on simultaneously approximating the curve $\gamma$ and the function on $D^+$ and $D^-$. This goal is reflected in the cost functional used below, and, as demonstrated in Section 3.5, we can also handle non-simple topologies of $\gamma$, such as a three-corner discontinuity. The following procedure for treating a jump singularity comes as a natural extension of the framework for approximating a continuous function with derivative discontinuity suggested in Section 3.1:
Again, we look for three spline functions, $g_\gamma$, $g^+$, and $g^-$, such that the zero-level set $\tilde\gamma$ of $g_\gamma$ approximates the singularity curve $\gamma$, $g^+$ approximates f on $D^+$, and $g^-$ approximates f on $D^-$. Formally, we would like to minimize the following objective function:
$$F_B(p) = \sum_{g_\gamma(x_i) > 0} \big( f(x_i) - g^+(x_i) \big)^2 + \sum_{g_\gamma(x_i) < 0} \big( f(x_i) - g^-(x_i) \big)^2 \longrightarrow \text{minimum}. \tag{9}$$
Note that the non-linearity of the minimization problem here, which we denote as Problem B, is due to the non-linear operation of sign checking. This approximation problem may seem more complicated than Problem A of the previous section, but it is actually somewhat simpler. While in Problem A the unknown coefficients of all three splines appear in a non-linear form in the objective function $F_A$ (due to the max operation), here, only the coefficients of $g_\gamma$ influence the value of $F_B$ in a non-linear manner. This is due to the observation that, once $g_\gamma$ is known, the functions $g^+$ and $g^-$ that minimize $F_B$ are defined via a linear system of equations. Given this observation, and for reasons that will be clarified below, we use a slight variation of the optimization problem. Namely, we look for a function $g_\gamma$ that minimizes $F_B$, where $g^+$ and $g^-$ are defined by the (linear) least-squares problem:
$$\tilde{F}_B(p) = \sum_{x_i \in X_+^h} \big( f(x_i) - g^+(x_i) \big)^2 + \sum_{x_i \in X_-^h} \big( f(x_i) - g^-(x_i) \big)^2 \longrightarrow \text{minimum}, \tag{10}$$
where $\tilde\gamma$ denotes the zero-level set of $g_\gamma$, h is the 'mesh size' of the data set X, and
$$X_+^h = \big\{ x_i \mid g_\gamma(x_i) > 0,\ \mathrm{dist}(x_i, \tilde\gamma) > h \big\}, \qquad X_-^h = \big\{ x_i \mid g_\gamma(x_i) < 0,\ \mathrm{dist}(x_i, \tilde\gamma) > h \big\}.$$
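Since only $g_\gamma$ enters non-linearly, the outer cost can be evaluated as in the following Python sketch. The point-to-curve distance $\mathrm{dist}(x_i, \tilde\gamma)$ is approximated here by $|g_\gamma(x_i)| / \|\nabla g_\gamma(x_i)\|$, a shortcut of this sketch rather than of the paper.

```python
import numpy as np

def FB_reduced(ggam, f, B, h):
    """Outer cost for Problem B on gridded data.  ggam, f: values of
    g_gamma and of the data on an (n x n) grid; B: collocation matrix of
    the spline basis at the grid points, shape (n*n, K); h: mesh size.
    g+ and g- solve the linear least-squares problems (10) on the
    restricted sets X_+^h and X_-^h."""
    gy, gx = np.gradient(ggam, h)
    dist = np.abs(ggam) / np.maximum(np.hypot(gx, gy), 1e-12)
    cost = 0.0
    for side in (ggam > 0, ggam < 0):
        m = (side & (dist > h)).ravel()
        c, *_ = np.linalg.lstsq(B[m], f.ravel()[m], rcond=None)
        r = f.ravel()[m] - B[m] @ c
        cost += float(r @ r)
    return cost
```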
For non-noisy data, we would like to achieve an $O(h^4)$ approximation order to $f|_{D^+}$ and $f|_{D^-}$ on $D^+$ and $D^-$, respectively. This can be obtained by using proper boundary conditions in the computation of $g^+$ and $g^-$, e.g., by extending the data by local polynomial approximations. We thus consider a third version of the least-squares problem for $g^+$ and $g^-$:
$$\tilde{F}_B(p) = \sum_{x_i \in \hat{X}_+^h} \big( \hat{f}^+(x_i) - g^+(x_i) \big)^2 + \sum_{x_i \in \hat{X}_-^h} \big( \hat{f}^-(x_i) - g^-(x_i) \big)^2 \longrightarrow \text{minimum}. \tag{11}$$
In (11), $\hat{X}_+^h = X \setminus X_-^h$ and $\hat{X}_-^h = X \setminus X_+^h$; $\hat{f}^+(x_i)$ is the provided data $f(x_i)$ on $X_+^h$ together with the extension of these data to $\hat{X}_+^h \setminus X_+^h$, and $\hat{f}^-(x_i)$ is the provided data $f(x_i)$ on $X_-^h$ together with the extension of these data to $\hat{X}_-^h \setminus X_-^h$. The extension operator should be exact for cubic polynomials.
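One possible extension operator that is exact for cubics, sketched under the assumption of a local least-squares fit (the paper does not detail its operator):

```python
import numpy as np

def extend_cubic(xi, yi, fi, xq, yq):
    """Extend same-side data (xi, yi, fi) to query points (xq, yq) by a
    local least-squares bivariate cubic fit; reproducing cubics exactly
    is what preserves the O(h^4) order.  Needs >= 10 well-placed samples."""
    def V(x, y):                                 # the 10 bivariate cubic monomials
        x, y = np.atleast_1d(x), np.atleast_1d(y)
        return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y,
                         x**3, x * x * y, x * y * y, y**3], axis=-1)
    c, *_ = np.linalg.lstsq(V(xi, yi), fi, rcond=None)
    return V(xq, yq) @ c
```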
Remark 2.
Since $g_\gamma$ is defined only up to a multiplicative factor, we may restrict its unknown coefficients to lie in a compact bounded box, and thus the existence of a global minimizer in (9)–(11) is ensured.
Let us now describe a numerical experiment based on the above framework. The function we would like to approximate is defined on $D = [-3,3]^2$, and it has a jump discontinuity across a sinusoidal-shaped curve. We consider two types of noisy data: the first includes noise in the data values, and the second includes noise in the location of the singularity curve $\gamma$. The three unknown functions $g_\gamma$, $g^+$, $g^-$ are again cubic spline functions with a square grid of knots of grid size $\delta = 2$. However, the unknown parameters $p$ in $F_B$ are just the coefficients of $g_\gamma$. The other two spline functions are computed within the evaluation procedure of $F_B$ by solving the linear system of equations for their coefficients, i.e., the system defined by the least-squares problem (10). The noisy data of the second type (noise in the location of $\gamma$), and the resulting approximation obtained by minimizing (9), are displayed in Figure 7 and Figure 8.
For a function with a more involved singularity curve, we suggest subdividing the domain into partially overlapping patches and then blending the approximations over the individual patches into a global approximation. As in the blending suggested for Problem A, the blending of two approximations to jump discontinuities over partially overlapping patches $D_1$ and $D_2$ should be performed on the functions $g_\gamma$, $g^+$, $g^-$ that generate the approximations on the different patches. Here, one should take care of the fact that the function $g_\gamma$ is not uniquely defined by the optimization problem (9). Let us denote by $g_{\gamma,1}$ and $g_{\gamma,2}$ the functions generating the singularity curve on $D_1$ and $D_2$, respectively. To achieve a nice blending of the two curves, we suggest scaling one of the two functions, say $g_{\gamma,1}$, so that $\alpha \cdot g_{\gamma,1} \approx g_{\gamma,2}$ on $D_1 \cap D_2$. It is important to match the two functions only on the part of $D_1 \cap D_2$ that is close to the zero curves defined by $g_{\gamma,1}$ and $g_{\gamma,2}$.
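A sketch of the scale matching, where the matching band of width $3h$ around the zero curves is an assumption of this sketch:

```python
import numpy as np

def match_scale(g1_vals, g2_vals, dist_to_zero, h):
    """Least-squares alpha with alpha * g_{gamma,1} ~ g_{gamma,2}, using
    only overlap samples near the zero curves, where the match matters."""
    near = dist_to_zero < 3.0 * h
    a, c = g1_vals[near], g2_vals[near]
    return float(a @ c) / float(a @ a)
```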

3.3. Problem B—Approximation Analysis

The approximation problem is as follows: consider a piecewise-smooth function f defined over a domain D, with a discontinuity across a simple, smooth curve $\gamma$ separating D into two open sub-domains, $D^+$ and $D^-$. We assume that $f|_{D^+}$ and $f|_{D^-}$ are smooth, with bounded derivatives of order four on $D^+$ and $D^-$, respectively, and that the curve $\gamma$ is likewise smooth. Let $X \subset D$ be a grid of data points of grid size h, and let us consider the approximations for Problem B using bi-cubic spline functions with knots on a grid of size $\delta = mh$ ($m > 3$). The classical result on least-squares approximation by cubic splines implies an $O(h^4)$ approximation order to a function with bounded derivatives of order four (provided there are enough data points for a well-posed solution). On the other hand, even in the univariate case, the location of a jump discontinuity in a piecewise-smooth function can inherently be recovered only up to an $O(h)$ error. Therefore, the best we can expect from a good approximation procedure for such an f is the following:
Theorem 1.
Consider Problem B on D, and let $g_\gamma$ be a bi-cubic spline function (with knots' grid size $\delta = mh$) that provides a local minimum of (9), with $g^+$ and $g^-$ defined by minimizing (11). Denote the segmentation defined by $g_\gamma$ by $G^+ = \{ x_i \in X \mid g_\gamma(x_i) > 0 \}$ and $G^- = \{ x_i \in X \mid g_\gamma(x_i) < 0 \}$. Then, there exists $C > 0$ such that, for h small enough, there exists a local minimizer $g_\gamma$ for which: if $x_i \in G^+ \cap D^-$ or $x_i \in G^- \cap D^+$, then $\mathrm{dist}(x_i, \gamma) < C h^3$.
Proof. 
The theorem says that the zero-level set $\tilde\gamma$ of $g_\gamma$ separates the data set X well into the two parts, and only data points that are very close to $\gamma$ may appear in the wrong segment. To prove this result, we first observe that the curve $\gamma$ can be approximated by the zero-level set of a bi-cubic spline with approximation error $< C_1 h^4$. One such spline is $s_\gamma$, the approximation to the signed distance function related to the curve $\gamma$. Fixing $g_\gamma = s_\gamma$ determines the $g^+$ and $g^-$ that minimize $\tilde{F}_B$ for this $g_\gamma$, and we denote the corresponding value by $F_B(s_\gamma)$. We note that the contribution to the value of $F_B$ is $O(h^4)$ (as $h \to 0$) from a point that falls on the correct side of $\tilde\gamma$, and it is $O(1)$ from a point on the wrong side of $\tilde\gamma$. For a small enough h, only a small number of points $x_i \in X$ fall on the wrong side of $\tilde\gamma$, and any choice of $g_\gamma$ that places more points on the wrong side induces a larger value of $F_B$. The minimizing solution induces a value $F_B \le F_B(s_\gamma)$, and this can be achieved only by reducing the set of 'wrong side' points. Since $g_\gamma = s_\gamma$ already defines an $O(h^4)$ separation approximation, only points at distance $O(h^4)$ from $\gamma$ may stay on the wrong side in the local minimizer that evolves by a continuous change of $s_\gamma$ reducing $F_B$. □
Corollary 1.
If the least-squares problems defining $g^+$ and $g^-$ by (10) are well-posed, we obtain
$$\| f - g^+ \|_{\infty, D^+} \le C_2 h^4, \qquad \| f - g^- \|_{\infty, D^-} \le C_3 h^4.$$
Remark 3.
The above well-posedness condition can be checked while computing $g^+$ and $g^-$. Also, an $O(h^4)$ approximation order can be obtained by using proper boundary conditions in the computation of $g^+$ and $g^-$, e.g., by extending the data by local polynomial approximations, as suggested in (11).
Remark 4.
The need to restrict the set of data points defining $g^+$ and $g^-$ in (10) emerged from the condition needed for the proof of Theorem 1. As shown in the numerical example below, this restriction may be very important in practical applications.

3.4. Noisy Data of the 1st Type

This section demonstrates the performance of the method for the approximation of noisy data of a function with a jump discontinuity. Furthermore, we use this example to emphasize the importance of using the restricted sets in $\tilde{F}_B$ rather than using $F_B$. The underlying function and its noisy version are displayed in Figure 9. In the numerical test, we have used the same mesh and knot sizes as in the previous example. In Figure 10, we show the results with and without restricting the set of points that participate in the computation of $F_B$. In the left graph, we note that the approximation in the inner region is contaminated by wrong values from the outer region; this is corrected in the right graph, where the least-squares approximations use values that are not too close to the discontinuity curve. In Figure 11, we see two approximations to the exact singularity curve (in red), using different knots' grid sizes, $\delta = 1.5$ and $\delta = 2$, together with the singularity curve of the underlying function. As expected, a smaller $\delta$ enables higher flexibility of $g_\gamma$ and a better approximation to the exact curve.

3.5. Three-Corner Jump Discontinuity—Problem C

Combining elements from Problems A and B, we can now approach the more complex problem of a three-corner discontinuity. Consider the case of a function defined over a domain D, with a discontinuity across three curves meeting at a three-corner point, subdividing D into three sub-domains, $D_1$, $D_2$, and $D_3$, as in Figure 12. We assume that $f|_{D_i}$ is smooth on $D_i$, $i = 1, 2, 3$. Following the above discussions, the following procedure is suggested:
We look for three spline functions, $\{g_i\}_{i=1}^3$, approximating f on $\{D_i\}_{i=1}^3$, respectively. Here, the approximation of the segmentation into three domains cannot be conducted via a zero-level set approach. Instead, we look for an additional triplet of spline functions, $\{h_i\}_{i=1}^3$, which define approximations $\{E_i\}_{i=1}^3$ to $\{D_i\}_{i=1}^3$ as follows:
$$E_1 = \{ x \mid h_1(x) > \max\{ h_2(x), h_3(x) \} \}, \quad E_2 = \{ x \mid h_2(x) > \max\{ h_1(x), h_3(x) \} \}, \quad E_3 = \{ x \mid h_3(x) > \max\{ h_1(x), h_2(x) \} \}. \tag{12}$$
Denoting $\| u \|_{2,E}^2 = \sum_{x_i \in E} u(x_i)^2$, we would like to minimize the following objective function:
$$F_C(p) = \| f - g_1 \|_{2,E_1}^2 + \| f - g_2 \|_{2,E_2}^2 + \| f - g_3 \|_{2,E_3}^2 \longrightarrow \text{minimum}. \tag{13}$$
Hence, the segmentation is defined by a max operation, as in Problem A. Given a segmentation of D into $\{E_i\}_{i=1}^3$, the triplet $\{g_i\}_{i=1}^3$ is defined, as in Problem B, by the system of linear equations that defines the least-squares solution of (13). To achieve a better approximation on $\{D_i\}_{i=1}^3$, in view of Theorem 1, the least-squares approximation for $\{g_i\}_{i=1}^3$ should exclude data points that are near joint boundaries of $\{E_i\}_{i=1}^3$.
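As in Problem B, the cost can be reduced so that only the segmentation splines $h_i$ are searched non-linearly; the following sketch also drops points near joint boundaries, using the gap between the two largest $h$-values as a proxy (the paper does not make this exclusion criterion precise):

```python
import numpy as np

def FC_reduced(h_vals, f, B, margin):
    """Cost (13) with the g_i eliminated.  h_vals: (3, N) values of
    h_1..h_3 at the N data sites; f: (N,) data; B: (N, K) spline
    collocation matrix; margin: exclusion threshold near boundaries."""
    labels = np.argmax(h_vals, axis=0)          # the segmentation rule (12)
    srt = np.sort(h_vals, axis=0)
    safe = (srt[-1] - srt[-2]) > margin         # away from joint boundaries
    cost = 0.0
    for i in range(3):
        m = (labels == i) & safe
        c, *_ = np.linalg.lstsq(B[m], f[m], rcond=None)
        r = f[m] - B[m] @ c
        cost += float(r @ r)
    return cost
```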
For a numerical illustration of Problem C and the approximation obtained by minimizing $F_C$, we took noisy data from a function with a three-corner discontinuity in $D = [-2,2]^2$. All the unknown spline functions, $\{g_i\}_{i=1}^3$ and $\{h_i\}_{i=1}^3$, are bi-cubic with a square grid of knots of grid size $\delta = 2$. Since only the splines $\{h_i\}_{i=1}^3$ enter $F_C$ in a non-linear way, the minimization problem involves $3 \times 9 = 27$ unknowns. As in all the previous examples, we have used a differential evolution algorithm to find an approximate solution to this minimization problem. The noisy data and the resulting approximation are shown in Figure 12.

4. Summary and Issues for Further Research

We have introduced a unified framework for approximating functions with normals' discontinuity or jump discontinuity across curves. The method may be viewed as an extension of the well-known Boolean operations in solid geometry: here, it is suggested to use a kind of Boolean operation on splines as an approximation tool. Through a series of non-trivial examples, we have presented the potential of this approach to achieve high-quality approximations in many applications. It is interesting to note that all the non-linearity in the suggested approximations can be expressed by the sign operation, or, equivalently, the $(\cdot)_+$ operation. The approximation procedure requires high-dimensional non-linear optimization, and thus the complexity of computing the approximations is very high. For all the numerical examples in this paper, we have used a very powerful Matlab code written by Markus Buehren, based upon the method of differential evolution [8]. The execution time ranges from 1 s for the simple univariate problem to 80 s for the bivariate Problem C. The differential evolution algorithm usually finds a local minimum and not 'the global minimizer'. Yet, as demonstrated, it finds very good approximations, and it seems to be robust to noise. A main issue for further study would be the acceleration of the optimization process, e.g., by generating good initial candidates for the optimization. Despite the high computational cost, the method may still be very useful for high-quality up-sampling, and for functions (or surfaces) with few singularity curves. In a scene with many discontinuities, we would suggest subdividing the domain into patches, each containing one or two singularity curves. Choosing partially overlapping patches, the local approximations can be blended into a global approximation, as demonstrated in Sections 3.1 and 3.2. Another simple idea is based upon Corollary 1, which tells us to ignore a few data points near the approximated singularity curve to attain a higher approximation order.
Other important issues for further research would be the following:
  • Improved optimization: Here, we believe that geometric considerations may be used to significantly accelerate the minimization procedure. Gradient-descent algorithms, similar to those used in [14], may be helpful here.
  • Constructive rules for choosing the grid size for the splines.
  • Other basis functions instead of splines.
  • Using the $\ell_1$-norm instead of the $\ell_2$-norm in the objective functions.
  • Using the basic procedures presented in this paper within neural network algorithms for the analysis of multivariate data.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Harten, A. ENO schemes with subcell resolution. J. Comput. Phys. 1989, 83, 148–184.
  2. Arandiga, F.; Cohen, A.; Donat, R.; Dyn, N. Interpolation and approximation of piecewise smooth functions. SIAM J. Numer. Anal. 2005, 43, 41–57.
  3. Archibald, R.; Gelb, A.; Yoon, J. Polynomial fitting for edge detection in irregularly sampled signals and images. SIAM J. Numer. Anal. 2005, 43, 259–279.
  4. Archibald, R.; Gelb, A.; Yoon, J. Determining the locations and discontinuities in the derivatives of functions. Appl. Numer. Math. 2007, 58, 577–592.
  5. Lipman, Y.; Levin, D. Approximating piecewise-smooth functions. IMA J. Numer. Anal. 2010, 30, 1159–1183.
  6. Candes, E.J.; Donoho, D.L. New tight frames of curvelets and optimal representations of objects with piecewise $C^2$ singularities. Commun. Pure Appl. Math. 2004, 57, 219–266.
  7. Binev, P.; Dahmen, W.; DeVore, R.; Petrushev, P. Approximation classes for adaptive methods. Serdica Math. J. 2002, 28, 391–416.
  8. Price, K.V.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer: Berlin/Heidelberg, Germany, 2005.
  9. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
  10. Malladi, R.; Sethian, J.A.; Vemuri, B.C. Shape modeling with front propagation: A level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 158–175.
  11. Osher, S.; Sethian, J.A. Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 1988, 79, 12–49.
  12. Amat, S.; Levin, D.; Ruiz-Álvarez, J. A two-stage approximation strategy for piecewise smooth functions in two and three dimensions. IMA J. Numer. Anal. 2022, 42, 3330–3359.
  13. Amir, A.; Levin, D. High order approximation to non-smooth multivariate functions. Comput. Aided Geom. Des. 2018, 63, 31–65.
  14. Bernard, O.; Friboulet, D.; Thévenaz, P.; Unser, M. Variational B-spline level-set: A linear filtering approach for fast deformable model evolution. IEEE Trans. Image Process. 2009, 18, 1179–1191.
Figure 1. A univariate example—no noise.
Figure 2. A univariate example—reconstruction in the presence of noise.
Figure 3. The blended approximation over $[-2,2] \times [-2,5]$.
Figure 4. The noisy data over $[-2,2] \times [-2,5]$.
Figure 5. The noisy data and the three-corner approximation over $D_1$.
Figure 6. The noisy data and the approximation over $D_2$.
Figure 7. Discontinuity across a noisy curve in $[-3,3] \times [-3,3]$.
Figure 8. The approximation using noisy curve data.
Figure 9. The underlying function and its noisy data of the 1st type.
Figure 10. Two approximations using different point sets in the least-squares approximation.
Figure 11. Two approximations to the exact singularity curve—using different knots' grid sizes, $\delta = 2$ and $\delta = 1.5$.
Figure 12. Three-corner discontinuity—noisy data and approximation.
