As demonstrated in the 1D case, the non-linear space of functions defined by uniform splines, together with the simple operations min and max, may be used to approximate univariate piecewise-smooth continuous functions. In the bivariate case, we consider functions with derivative discontinuities or jump discontinuities across curves. The objectives of this section are fourfold: to extend the univariate min–max approach to continuous bivariate functions with derivative discontinuities across curves (Problem A); to approximate a function with a jump discontinuity across a curve, together with the curve itself (Problem B); to analyze the approximation order of the procedure for Problem B; and to treat a jump discontinuity across three curves meeting at a point (Problem C).
3.1. Normals’ Discontinuity across Curves—Problem A
We start with a numerical demonstration of a direct extension of the univariate approach to the approximation of continuous piecewise-smooth bivariate functions. Recalling the 1D discussion, the choice of a min or a max operation depends on the sign of the jump in the first derivative, $f'(x_s^+) - f'(x_s^-)$, at the singular point $x_s$. In the 2D case, we refer to an analogous condition involving the slopes of the graph along the singularity curves. A discontinuity (singularity) of the normals of a bivariate function $f$ is said to be convex along a curve if the exterior angle of the graph of $f$ at every point along the curve is less than $\pi$ (e.g., see Figure 3), and it is considered to be concave if the exterior angles are greater than $\pi$.
In a neighborhood of a concave singularity (discontinuity) curve, the function may be described as the minimum of two (or more) smooth functions, and near a convex singularity curve the function may be defined as the maximum of two or more smooth functions. Let us consider noisy data, $\tilde f(x_i) = f(x_i) + \epsilon_i$, $x_i \in X$, taken from a function with convex singularities. For the numerical experiment, we take $X$ as the set of data points on a square grid of mesh size $h$ in the domain $D$, and the provided noisy data are shown in Figure 4. In this case, the function has a '3-corner'-type singularity; that is, $f$ has a convex singularity along three curves meeting at a point. Therefore, we look for three spline functions, $s_1, s_2, s_3$, so that
$$S(x) = \max\{s_1(x), s_2(x), s_3(x)\} \approx f(x),$$
where $s_1, s_2, s_3$ solve the non-linear least-squares problem:
$$\min_{s_1, s_2, s_3} \sum_{x_i \in X} \big(\max\{s_1(x_i), s_2(x_i), s_3(x_i)\} - \tilde f(x_i)\big)^2. \tag{8}$$
Within this example, we would also like to show how to blend two non-smooth approximations. Therefore, we consider the approximation problem on two partially overlapping sub-domains $D_1$ and $D_2$ of $D$. After solving the approximation problem separately on each sub-domain, the two approximations will be blended into a global one. On each sub-domain, the unknown functions are chosen to be cubic spline functions with a square grid of knots. Here again, the triplet of functions that solves the minimization problem (8) is not unique. However, it turns out that the approximation to $f$ is well-defined by (8); that is, the parts of $s_1, s_2, s_3$ that attain the maximum, and hence are relevant to the approximation of $f$, are well-defined.
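To make the construction concrete, the following sketch sets up the objective (8) for a small tensor-product cubic spline basis and hands it to SciPy's differential evolution optimizer, the type of solver used in the experiments below. The domain $[0,1]^2$, the grid and knot counts, the bounds, and the test function are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import differential_evolution

def cubic_basis(x, knots, k=3):
    """Collocation matrix of 1D cubic B-splines with clamped end knots."""
    t = np.r_[[knots[0]] * k, knots, [knots[-1]] * k]
    return BSpline.design_matrix(x, t, k).toarray()   # shape (len(x), nb)

# Assumed setup: noisy gridded samples of a 3-corner-type test function.
rng = np.random.default_rng(0)
n = 21
g = np.linspace(0.0, 1.0, n)                          # 1D grid, mesh size h
jx, jy = np.tile(np.arange(n), n), np.repeat(np.arange(n), n)
x, y = g[jx], g[jy]                                   # the gridded data sites
z = np.maximum.reduce([x - y, y - x, x + y - 1.0])    # three ridges meeting at (1/2, 1/2)
z += 0.01 * rng.standard_normal(z.shape)              # noisy data values

knots = np.linspace(0.0, 1.0, 3)                      # coarse knot grid (toy size)
Bg = cubic_basis(g, knots)                            # basis values on the 1D grid
Bx, By = Bg[jx], Bg[jy]                               # per-point basis rows
nb = Bg.shape[1]

def objective(p):
    """The residual of (8): max of the three splines against the data."""
    C = p.reshape(3, nb, nb)                          # three coefficient grids
    vals = np.einsum('ij,sjk,ik->si', Bx, C, By)      # s_i at all data points
    return np.sum((vals.max(axis=0) - z) ** 2)

# The experiments warm-start the search from a single smooth fit to all data;
# here we simply run differential evolution over a bounded coefficient box.
res = differential_evolution(objective, [(-2.0, 2.0)] * (3 * nb * nb),
                             maxiter=100, seed=1)
print(res.fun)
```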
Let us first consider the approximation on the sub-domain $D_1$. For the particular data shown on the left plot in Figure 5, the solution of (8) yields the piecewise-smooth approximation depicted on the right plot. In this plot, we see the full graphs of the three functions $s_1, s_2, s_3$ (for this sub-domain), while the approximation is only the upper part (the maximal values) of these graphs. The solution to the optimization problem (8) has been found using a differential evolution procedure [8]. As an initial guess for the three unknown functions, we take, as in the univariate case, the spline function that approximates the data over the whole domain $D$. Next, we look for the approximation on $D_2$, which partially overlaps $D_1$. The relevant data and the resulting approximation are shown in Figure 6.
To achieve an approximation over the whole domain $D$, we now explain how to blend the two approximations defined on $D_1$ and on $D_2$. The singularity curves of the two approximations do not necessarily coincide on $D_1 \cap D_2$. Therefore, a direct blending of the two approximations will not provide a smooth transition of the singularity curve. The appropriate blending should instead be performed between the corresponding spline functions generating these singularity curves. On each sub-domain, the approximation is defined by a triplet of splines, $\{s_i^{(1)}\}_{i=1}^{3}$ on $D_1$ and $\{s_i^{(2)}\}_{i=1}^{3}$ on $D_2$. Over $D_1 \cap D_2$, only two of the splines are active in the final max operation, and the graph of the third spline is below the maximum of the other two. To prepare for the blending step, we have to match appropriate pairs from the two triplets, and this can easily be done by proximity over the blending zone $D_1 \cap D_2$. The final approximation over $D$ is defined by $S(x) = \max_i \hat s_i(x)$, where the $\hat s_i$ are defined by blending the matched pairs using a simple blending (weight) function. The resulting blended approximation over $D$, to the data provided in Figure 4, is displayed in Figure 3.
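A minimal sketch of the blending step, assuming the two triplets have already been matched by proximity and that the patches overlap in a vertical strip $a \le x \le b$; the smoothstep weight used here is one simple choice of blending function, not necessarily the one used in the experiments.

```python
import numpy as np

def blend_pair(s1, s2, a, b):
    """Blend two matched spline surfaces: s1 from the left patch D1,
    s2 from the right patch D2, over the overlap strip a <= x <= b."""
    def blended(x, y):
        t = np.clip((x - a) / (b - a), 0.0, 1.0)
        w = 3 * t**2 - 2 * t**3            # smoothstep: 0 at x=a, 1 at x=b
        return (1 - w) * s1(x, y) + w * s2(x, y)
    return blended

# Usage sketch: hat_s = [blend_pair(p, q, 0.4, 0.6) for p, q in matched_pairs]
# and the global approximation is S(x, y) = max_i hat_s[i](x, y).
```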
3.2. Jump Discontinuity across a Curve—Problem B
Another interesting problem in bivariate approximation is the approximation of a function with a discontinuity across a curve. Consider the case of a function $f$ defined over a domain $D$, with a discontinuity across a (simple) curve $\Gamma$, separating $D$ into two sub-domains, $D_1$ and $D_2$. We assume that $f_1 = f|_{D_1}$ and $f_2 = f|_{D_2}$ are smooth on $D_1$ and $D_2$, respectively. Such problems, and especially the problem of approximating $\Gamma$, appear in image segmentation. Efficient algorithms for constructing approximations to $\Gamma$ that are useful even for more involved data are the method of snakes, or active contours, and the level-set method. The method of snakes, introduced in [9], iteratively finds contours that approach the contour $\Gamma$ separating two distinctive regions in an image, with applications to shape modeling [10]. The level-set method, first suggested in [11], is also an iterative method for approximating $\Gamma$, using a variational formulation for minimizing appropriate energy functionals. More recent algorithms for the approximation of piecewise-smooth functions in two and three dimensions have been introduced in [12], using data on a grid, and in [13], for scattered data. A variational spline level-set approach has been suggested in [14]. Here, the focus is on simultaneously approximating the curve $\Gamma$ and the function on $D_1$ and $D_2$. This goal is reflected in the cost functional used below, and, as demonstrated in Section 3.5, we can also handle non-simple topologies of $\Gamma$, such as a three-corner discontinuity. The following procedure for treating a jump singularity comes as a natural extension of the framework in Section 3.1 for approximating a continuous function with a derivative discontinuity:
Again, we look for three spline functions, $\sigma$, $s_1$, and $s_2$, such that the zero-level set $\Gamma_\sigma$ of $\sigma$ approximates the singularity curve $\Gamma$, $s_1$ approximates $f$ on $D_1$, and $s_2$ approximates $f$ on $D_2$. Formally, we would like to minimize the following objective function:
$$J(\sigma, s_1, s_2) = \sum_{\substack{x_i \in X \\ \sigma(x_i) > 0}} \big(s_1(x_i) - \tilde f(x_i)\big)^2 + \sum_{\substack{x_i \in X \\ \sigma(x_i) \le 0}} \big(s_2(x_i) - \tilde f(x_i)\big)^2. \tag{9}$$
Note that the non-linearity of the minimization problem here, which we denote as Problem B, is due to the non-linear operation of sign checking. This approximation problem may seem more complicated than Problem A of the previous section, but it is actually somewhat simpler. While in Problem A the unknown coefficients of all three splines appear in a non-linear form in the objective function (due to the max operation), here, only the coefficients of $\sigma$ influence the value of $J$ in a non-linear manner. This is due to the observation that, once $\sigma$ is known, the functions $s_1$ and $s_2$ that minimize $J$ are defined via a linear system of equations. Given this observation, and for reasons that will be clarified below, we use a slight variation of the optimization problem. Namely, we look for a function $\sigma$ that minimizes $J(\sigma) \equiv J(\sigma, s_1, s_2)$, where $s_1$ and $s_2$ are defined by the (linear) least-squares problem:
$$\{s_1, s_2\} = \operatorname*{arg\,min} \Big\{ \sum_{x_i \in X_{1,\sigma}} \big(s_1(x_i) - \tilde f(x_i)\big)^2 + \sum_{x_i \in X_{2,\sigma}} \big(s_2(x_i) - \tilde f(x_i)\big)^2 \Big\}, \tag{10}$$
with $X_{1,\sigma} = \{x_i \in X : \sigma(x_i) > 0,\ \operatorname{dist}(x_i, \Gamma_\sigma) \ge h\}$ and $X_{2,\sigma} = \{x_i \in X : \sigma(x_i) \le 0,\ \operatorname{dist}(x_i, \Gamma_\sigma) \ge h\}$, where $\Gamma_\sigma$ denotes the zero-level set of $\sigma$, $h$ is the 'mesh size' in the data set $X$, and $\operatorname{dist}(x, \Gamma_\sigma)$ denotes the Euclidean distance from $x$ to $\Gamma_\sigma$.
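The inner linear solve can be organized as follows. The sketch below evaluates $J(\sigma)$ on gridded data: it classifies the points by the sign of $\sigma$, drops points near $\Gamma_\sigma$ (here detected cheaply by a sign change at a grid neighbor, a stand-in for the test $\operatorname{dist}(x_i, \Gamma_\sigma) \ge h$), fits $s_1$ and $s_2$ by two ordinary least-squares problems as in (10), and sums the residuals as in (9). The names and the neighbor test are illustrative choices.

```python
import numpy as np

def eval_J(sigma_vals, z, B):
    """Evaluate J(sigma) of (9)-(10) on gridded data.

    sigma_vals : (n, n) values of the level spline sigma at the grid points
    z          : (n, n) noisy data values
    B          : (n*n, m) collocation matrix of the spline space for s1, s2,
                 e.g., np.einsum('ij,ik->ijk', Bx, By).reshape(n*n, -1)
                 built as in the earlier sketch
    """
    pos = sigma_vals > 0
    # A sign change at a grid neighbor puts a point within ~h of the
    # zero-level set; such points are excluded from the fits, as in (10).
    near = np.zeros_like(pos)
    near[:-1, :] |= pos[:-1, :] != pos[1:, :]
    near[1:, :]  |= pos[:-1, :] != pos[1:, :]
    near[:, :-1] |= pos[:, :-1] != pos[:, 1:]
    near[:, 1:]  |= pos[:, :-1] != pos[:, 1:]

    J = 0.0
    for inside in (pos, ~pos):
        fit = (inside & ~near).ravel()              # restricted set of (10)
        c, *_ = np.linalg.lstsq(B[fit], z.ravel()[fit], rcond=None)
        r = B[inside.ravel()] @ c - z.ravel()[inside.ravel()]
        J += np.sum(r ** 2)                         # J of (9) over the full side
    return J
```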
For non-noisy data, we would like to achieve an $O(h^4)$ approximation order to $f_1$ and $f_2$ on $D_1$ and $D_2$, respectively. This can be obtained by using proper boundary conditions in the computation of $s_1$ and $s_2$, e.g., by extending the data by local polynomial approximations. We thus consider a third version of the least-squares problem for $s_1$ and $s_2$:
$$\{s_1, s_2\} = \operatorname*{arg\,min} \Big\{ \sum_{x_i \in X} \big(s_1(x_i) - \tilde f_1(x_i)\big)^2 + \sum_{x_i \in X} \big(s_2(x_i) - \tilde f_2(x_i)\big)^2 \Big\}. \tag{11}$$
In (11), $\tilde f_1$ consists of the provided data on $D_{1,\sigma} = \{x \in D : \sigma(x) > 0\}$ together with the extension of these data into $D_{2,\sigma} = \{x \in D : \sigma(x) \le 0\}$, and $\tilde f_2$ consists of the provided data on $D_{2,\sigma}$ and the extension of these data into $D_{1,\sigma}$. The extension operator should be exact for cubic polynomials.
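One possible form of such an extension operator, reproduced here only as an illustration: for a query point on the other side of $\Gamma_\sigma$, fit a full bi-cubic polynomial by least squares to nearby same-side data and evaluate it across the curve. Since the full bi-cubic basis contains all polynomials of total degree three, the operator is exact for cubic polynomials, as required; the neighborhood radius is an assumed parameter.

```python
import numpy as np
from itertools import product

def extend_value(xq, yq, xs, ys, zs, radius):
    """Extend data to the query point (xq, yq) by a local least-squares fit
    of a bi-cubic polynomial to same-side samples (xs, ys, zs).
    The radius must capture enough well-spread points (at least 16)."""
    near = (xs - xq) ** 2 + (ys - yq) ** 2 <= radius ** 2
    dx, dy, z = xs[near] - xq, ys[near] - yq, zs[near]
    # Full bi-cubic basis (16 monomials), centered at the query point.
    A = np.stack([dx**i * dy**j for i, j in product(range(4), range(4))], axis=1)
    c, *_ = np.linalg.lstsq(A, z, rcond=None)
    return c[0]   # value of the local fit at (xq, yq)
```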
Remark 2. Since $\sigma$ may be defined up to a multiplicative factor, we may restrict its unknown coefficients to lie in a compact bounded box, and thus the existence of a global minimizer in (9)–(11) is ensured.
Let us now describe a numerical experiment based on the above framework. The function we would like to approximate is defined on the domain $D$, and it has a jump discontinuity across a sinusoidal-shaped curve. We may consider two types of noisy data. The first includes noise in the data values, and the second includes noise in the location of the singularity curve $\Gamma$. The three unknown functions $\sigma, s_1, s_2$ are again cubic spline functions with a square grid of knots. However, the unknown parameters $p$ in the optimization problem are just the coefficients of $\sigma$. The other two spline functions are computed within the evaluation procedure of $J(\sigma)$ by solving the linear system of equations for their coefficients, i.e., the system defined by the least-squares problem (10). The noisy data of the second type (noise in the location of $\Gamma$), and the resulting approximation obtained by minimizing (9), are displayed in Figure 7 and Figure 8.
For a function with a more involved shape of singularity curve, we suggest subdividing the domain into partially overlapping patches and then blending the approximations over the individual patches into a global approximation. As in the blending suggested for Problem A, the blending of two approximations to jump discontinuities over partially overlapping patches $D^{(1)}$ and $D^{(2)}$ should be performed on the functions that generate the approximations on the different patches. Here, one should take care of the fact that the function $\sigma$ is not uniquely defined by the optimization problem (9). Let us denote by $\sigma_1$ and $\sigma_2$ the functions generating the singularity curve on $D^{(1)}$ and $D^{(2)}$, respectively. To achieve a nice blending of the two curves, we suggest scaling one of the two functions, say $\sigma_2$, so that $\sigma_2 \approx \sigma_1$ on $D^{(1)} \cap D^{(2)}$. It is important to match the two functions only on that part of $D^{(1)} \cap D^{(2)}$ that is close to the zero curves defined by $\sigma_1$ and $\sigma_2$.
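The scaling can be computed by a one-parameter least-squares fit over the near-curve part of the overlap. A minimal sketch, where the near-curve band is selected by thresholding both level functions (the band width is an assumed parameter):

```python
import numpy as np

def match_sigmas(sig1, sig2, pts, band=0.05):
    """Scale sigma_2 so that it matches sigma_1 near the singularity curve.

    sig1, sig2 : callables on the overlap D(1) ∩ D(2)
    pts        : (N, 2) sample points in the overlap
    band       : keep only points where both functions are near zero
    """
    v1 = np.array([sig1(x, y) for x, y in pts])
    v2 = np.array([sig2(x, y) for x, y in pts])
    near = (np.abs(v1) < band) & (np.abs(v2) < band)
    # Least-squares scale alpha minimizing ||alpha * v2 - v1|| on the band.
    alpha = (v1[near] @ v2[near]) / (v2[near] @ v2[near])
    return lambda x, y: alpha * sig2(x, y)
```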
3.3. Problem B—Approximation Analysis
The approximation problem is as follows: consider a piecewise-smooth function $f$ defined over a domain $D$, with a discontinuity across a simple, smooth curve $\Gamma$, separating $D$ into two open sub-domains $D_1$ and $D_2$. We assume that $f_1$ and $f_2$ are smooth, with bounded derivatives of order four, on $D_1$ and $D_2$, respectively, and that the curve $\Gamma$ is correspondingly smooth. Let $X$ be a grid of data points of grid size $h$, and let us consider the approximations for Problem B using bi-cubic spline functions with knots on a grid of size $H$. The classical result on least-squares approximation by cubic splines implies an $O(h^4)$ approximation order to a function with bounded derivatives of order four (provided there are enough data points for a well-posed solution). On the other hand, even in the univariate case, the location of a jump discontinuity in a piecewise-smooth function can inherently be recovered only up to an error $O(h)$. Therefore, the best we can expect from a good approximation procedure for $f$ as above is the following:
Theorem 1. Consider Problem B on $D$ and let $\sigma^*$ be a bi-cubic spline function (with knots' grid size $H$) which provides a local minimum to (9), with $s_1$ and $s_2$ defined by minimizing (11). Denote the segmentation defined by $\sigma^*$ by $D_{1,\sigma}$ and $D_{2,\sigma}$. For noise-free data, and for $h$ small enough, there exists such a local minimizer such that if $x_i \in D_{1,\sigma} \cap D_2$ or $x_i \in D_{2,\sigma} \cap D_1$, then $\operatorname{dist}(x_i, \Gamma) \le Ch$.
Proof. The theorem says that the zero-level set of $\sigma^*$ separates the data set $X$ well into the two parts, and only data points that are very close to $\Gamma$ may appear in the wrong segment. To prove this result, we first observe that the curve $\Gamma$ can be approximated by the zero-level set of bi-cubic splines with a high-order approximation error. One such spline would be $\hat\sigma$, the approximation to the signed distance function related to the curve $\Gamma$. Fixing $\sigma = \hat\sigma$ determines $s_1$ and $s_2$, which minimize $J$ for this $\sigma$, and we denote the corresponding value $\hat J = J(\hat\sigma)$. We note that the contribution to the value of $\hat J$ is small (as $O(h^8)$) from a point that falls on the right side of $\Gamma$, and it is $O(1)$ from a point on the wrong side of $\Gamma$. For a small enough $h$, only a small number of points will fall on the wrong side of $\Gamma_{\hat\sigma}$, and any choice of $\sigma$ that induces more points on the wrong side will induce a larger value of $J$. The minimizing solution induces a value $J(\sigma^*) \le \hat J$, and this can be achieved only by reducing the set of 'wrong side' points. Since $\hat\sigma$ already defines an $O(h)$ separation approximation, only points that are at distance $O(h)$ from $\Gamma$ may stay on the wrong side in the local minimizer that evolves by a continuous change in $\sigma$ which reduces $J$. □
Corollary 1. If the least-squares problems defining $s_1$ and $s_2$ by (10) are well-posed, we obtain an $O(h^4)$ approximation order to $f$ at all data points whose distance from $\Gamma$ is at least $Ch$.

Remark 3. The above well-posedness condition can be checked while computing $s_1$ and $s_2$. Also, an $O(h^4)$ approximation order up to the discontinuity curve can be obtained by using proper boundary conditions in the computation of $s_1$ and $s_2$, e.g., by extending the data by local polynomial approximations, as suggested in (11).
Remark 4. The need to restrict the set of data points defining $s_1$ and $s_2$ in (10) emerged from the condition needed for the proof of Theorem 1. As shown in the numerical example below, this restriction may be very important in practical applications.
3.5. Three-Corner Jump Discontinuity—Problem C
Combining elements from Problems A and B, we can now approach the more complex problem of a three-corner discontinuity. Consider the case of a function defined over a domain $D$, with a discontinuity across three curves meeting at a three-corner point, subdividing $D$ into three sub-domains, $D_1$, $D_2$, and $D_3$, as in Figure 12. We assume that $f_j = f|_{D_j}$ is smooth on $D_j$, $j = 1, 2, 3$. Following the above discussions, the following procedure is suggested:
We look for three spline functions, $s_1, s_2, s_3$, approximating $f$ on $D_1, D_2, D_3$, respectively. Here, the approximation of the segmentation into three domains cannot be conducted via a zero-level set approach. Instead, we look for an additional triplet of spline functions, $g_1, g_2, g_3$, which define approximations $D_{j,G}$ to $D_j$ as follows:
$$D_{j,G} = \{x \in D : g_j(x) = \max_{1 \le i \le 3} g_i(x)\}, \quad j = 1, 2, 3.$$
Denoting $G = (g_1, g_2, g_3)$, we would like to minimize the following objective function:
$$J(G) = \sum_{j=1}^{3} \sum_{x_i \in X \cap D_{j,G}} \big(s_j(x_i) - \tilde f(x_i)\big)^2. \tag{12}$$
Hence, the segmentation is defined by a max operation, as in Problem A. Given a segmentation of $D$ into $\{D_{j,G}\}_{j=1}^{3}$, the triplet $s_1, s_2, s_3$ is defined, as in Problem B, by a system of linear equations that defines the least-squares solution of (12). To achieve a better approximation on each $D_j$, in view of Theorem 1, the least-squares approximation for $s_j$ should exclude data points that are near the joint boundaries of the sub-domains $\{D_{j,G}\}$.
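A sketch of the evaluation of (12) under the same illustrative conventions as before: segment the gridded data by the argmax of the three level splines, drop points near the joint boundaries (detected via differing labels at grid neighbors), and solve one linear least-squares fit per segment.

```python
import numpy as np

def eval_J_threecorner(g_vals, z, B):
    """Evaluate the objective (12) on gridded data.

    g_vals : (3, n, n) values of g_1, g_2, g_3 at the grid points
    z      : (n, n) noisy data values
    B      : (n*n, m) collocation matrix of the spline space for s_1..s_3
    """
    label = g_vals.argmax(axis=0)                  # segmentation by max
    # A point is near a joint boundary if a grid neighbor has another label.
    near = np.zeros(label.shape, dtype=bool)
    near[:-1, :] |= label[:-1, :] != label[1:, :]
    near[1:, :]  |= label[:-1, :] != label[1:, :]
    near[:, :-1] |= label[:, :-1] != label[:, 1:]
    near[:, 1:]  |= label[:, :-1] != label[:, 1:]

    J = 0.0
    for j in range(3):
        seg = label == j
        fit = (seg & ~near).ravel()                # exclude near-boundary points
        c, *_ = np.linalg.lstsq(B[fit], z.ravel()[fit], rcond=None)
        r = B[seg.ravel()] @ c - z.ravel()[seg.ravel()]
        J += np.sum(r ** 2)                        # residual over the full segment
    return J
```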
For a numerical illustration of Problem C and the approximation obtained by minimization of (12), we took noisy data from a function with a three-corner discontinuity in $D$. All the unknown spline functions, $\{s_j\}$ and $\{g_j\}$, are bi-cubic with a square grid of knots. Since only the splines $g_1, g_2, g_3$ enter in a non-linear way into (12), the minimization problem involves only their coefficients as unknowns. As in all the previous examples, we have used a differential evolution algorithm to find an approximate solution to this minimization problem. The noisy data and the resulting approximation are shown in Figure 12.