**1. Introduction**

The Theory of Connections (ToC), as introduced in [1], is a general analytical framework to obtain *constrained expressions*, *f*(*x*), in one dimension. A constrained expression is a function expressed in terms of another function, *g*(*x*), that is freely chosen; no matter what *g*(*x*) is, the resulting expression always satisfies a set of *n* constraints. ToC generalizes the one-dimensional interpolation problem subject to *n* constraints using the general form,

$$f(\mathbf{x}) = \mathbf{g}(\mathbf{x}) + \sum\_{k=1}^{n} \eta\_k \ p\_k(\mathbf{x}),\tag{1}$$

where *pk*(*x*) are *n* user-selected linearly independent functions, *ηk* are derived by imposing the *n* constraints, and *g*(*x*) is a *freely chosen* function subject only to being *defined and nonsingular* where the constraints are specified. Beyond this requirement, *g*(*x*) can be any function, including discontinuous functions, delta functions, and even functions that are undefined in parts of the domain. Once the *ηk* coefficients have been derived, Equation (1) satisfies all *n* constraints, *no matter what the g*(*x*) *function is*.

Constrained expressions in the form given in Equation (1) are provided for a wide class of constraints, including constraints on points and derivatives, linear combinations of constraints, as well as infinite and integral constraints [2]. In addition, weighted constraints [3] and point constraints on continuous and discontinuous periodic functions with assigned period can also be obtained [1]. How to extend ToC to inequality and nonlinear constraints is currently a work in progress.

The Theory of Connections framework can be considered the generalization of interpolation; rather than providing a class of functions (e.g., monomials) satisfying a set of *n* constraints, it derives *all* possible functions satisfying the *n* constraints by spanning all possible *g*(*x*) functions. This has been proved in Ref. [1]. A simple example of a constrained expression is,

$$f(\mathbf{x}) = \mathbf{g}(\mathbf{x}) + \frac{\mathbf{x}(2\mathbf{x}\_2 - \mathbf{x})}{2(\mathbf{x}\_2 - \mathbf{x}\_1)} \left[ \dot{\mathbf{y}}\_1 - \dot{\mathbf{g}}(\mathbf{x}\_1) \right] + \frac{\mathbf{x}(\mathbf{x} - 2\mathbf{x}\_1)}{2(\mathbf{x}\_2 - \mathbf{x}\_1)} \left[ \dot{\mathbf{y}}\_2 - \dot{\mathbf{g}}(\mathbf{x}\_2) \right]. \tag{2}$$

This equation always satisfies $\left.\dfrac{\mathrm{d}f}{\mathrm{d}x}\right|_{x_1} = \dot{y}_1$ and $\left.\dfrac{\mathrm{d}f}{\mathrm{d}x}\right|_{x_2} = \dot{y}_2$, as long as $\dot{g}(x_1)$ and $\dot{g}(x_2)$ are defined and nonsingular. In other words, *the constraints are embedded into the constrained expression*.
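This embedding is easy to check numerically. The sketch below is a minimal Python illustration (the values of *x*1, *x*2, the prescribed derivatives, and the free function *g* are arbitrary choices of ours, not from the text): it evaluates Equation (2) with finite-difference derivatives and confirms that d*f*/d*x* matches the prescribed values at both points, regardless of *g*.

```python
import math

# Arbitrary illustrative data (not from the text).
x1, x2 = 0.3, 1.7            # constraint locations
y1dot, y2dot = -2.0, 5.0     # prescribed derivatives at x1 and x2

def g(x):                    # freely chosen function
    return math.exp(x) * math.sin(3 * x)

def dg(x, h=1e-6):           # central-difference derivative of g
    return (g(x + h) - g(x - h)) / (2 * h)

def f(x):                    # constrained expression, Eq. (2)
    d = 2 * (x2 - x1)
    return (g(x)
            + x * (2 * x2 - x) / d * (y1dot - dg(x1))
            + x * (x - 2 * x1) / d * (y2dot - dg(x2)))

def df(x, h=1e-6):           # central-difference derivative of f
    return (f(x + h) - f(x - h)) / (2 * h)

print(df(x1), df(x2))        # close to y1dot and y2dot for any g
```

Replacing *g* with any other (differentiable, nonsingular at *x*1, *x*2) function leaves the two derivative values unchanged.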

Constrained expressions can be used to transform constrained optimization problems into unconstrained optimization problems. Using this approach, fast least-squares solutions of linear [4] and nonlinear [5] ODEs have been obtained at machine-error accuracy and with low (actually, very low) condition numbers. Direct comparisons of ToC versus MATLAB's ode45 [6] and Chebfun [7] have been performed on a small test set of ODEs with excellent results [4,5]. In particular, the ToC approach to solving ODEs provides a unified framework for IVPs, BVPs, and multi-value problems. The extension to differential equations subject to component constraints [8] has opened the possibility for ToC to solve *in real-time* a class of direct optimal control problems [9], where the constraints connect state and costate.

This study first extends the Theory of Connections to two dimensions by providing, for rectangular domains, *all* surfaces that are subject to: (1) Dirichlet constraints; (2) Neumann constraints; and (3) any combination of Dirichlet and Neumann constraints. This theory is then generalized to the Multivariate Theory of Connections, which provides, in *n*-dimensional space, all possible manifolds that satisfy boundary constraints on the value and boundary constraints on any-order derivative.

This article is structured as follows. First, it shows that the one-dimensional ToC can be used in two dimensions when the constraints (functions or derivatives) are provided along one axis only. This is a particular case in which the original univariate theory [1] applies with essentially no modifications. Then, a two-dimensional ToC version is developed for Dirichlet-type boundary constraints. This theory is then extended to include Neumann and mixed-type boundary constraints. Finally, the theory is extended to *n* dimensions and to arbitrary-order derivative boundary constraints, followed by a mathematical proof validating it.

#### **2. Manifold Constraints in One Axis, Only**

Consider the function, *f*(*x*), where *f* : $\mathbb{R}^n \to \mathbb{R}$, subject to one constraint manifold along the *i*th variable, *xi*, that is, $f(\mathbf{x})|_{x_i=v} = c(\mathbf{x}_i^v)$, where $\mathbf{x}_i^v$ denotes $\mathbf{x}$ with $x_i = v$. For instance, in 3-D space, this can be the surface constraint, $f(x, y, z)|_{y=\pi} = c(x, \pi, z)$. *All manifolds* satisfying this constraint can be expressed using the additive form provided in Ref. [1],

$$f(\mathbf{x}) = g(\mathbf{x}) + \left[ c(\mathbf{x}_i^v) - g(\mathbf{x}_i^v) \right],$$

where *g*(*x*) is a freely chosen function that must be defined and nonsingular at the constraint coordinates. When *m* manifold constraints are defined along the *xi*-axis, the 1-D methodology [1] can be applied as is. For instance, the constrained expression subject to *m* constraints along the *xi* variable, evaluated at *xi* = *wk* with *k* ∈ [1, *m*], that is, $f(\mathbf{x})|_{x_i=w_k} = c(\mathbf{x}_i^{w_k})$, is,

$$f(\mathbf{x}) = \mathbf{g}(\mathbf{x}) + \sum\_{k=1}^{m} \left\{ \left[ \mathbf{c} \left( \mathbf{x}\_i^{w\_k} \right) - \mathbf{g} \left( \mathbf{x}\_i^{w\_k} \right) \right] \prod\_{j \neq k} \frac{\mathbf{x}\_i - w\_j}{w\_k - w\_j} \right\}. \tag{3}$$

Note that this equation coincides with the Waring interpolation form (better known as Lagrangian interpolation form) [10] if the free function vanishes, *g*(*x*) = 0.
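A one-dimensional instance of Equation (3) can be scripted in a few lines. In the Python sketch below, the nodes *wk*, the constraint values, and the free function *g* are illustrative assumptions of ours; the constrained expression reproduces the prescribed values at every node for any choice of *g*, since each Lagrange-type switching polynomial equals one at its own node and zero at the others.

```python
import math

w = [0.0, 1.0, 2.5]             # constraint locations (arbitrary)
cv = [1.0, -2.0, 0.5]           # constraint values (arbitrary)

def g(x):                       # freely chosen function
    return math.cos(7 * x) + 0.1 * x**3

def switch(k, x):               # prod_{j != k} (x - w_j)/(w_k - w_j)
    p = 1.0
    for j in range(len(w)):
        if j != k:
            p *= (x - w[j]) / (w[k] - w[j])
    return p

def f(x):                       # 1-D instance of the constrained expression, Eq. (3)
    return g(x) + sum((cv[k] - g(w[k])) * switch(k, x) for k in range(len(w)))

print([f(wk) for wk in w])      # matches cv for any choice of g
```

With *g* = 0 the expression collapses to the classical Waring (Lagrange) interpolating polynomial, as noted above.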

#### *2.1. Example #1: Surface Subject to Four Function Constraints*

The first example is designed to show how to use Equation (3) with mixed, continuous, discontinuous, and multiple constraints. Consider the following four constraints,

$$c(x, -2) = \sin(2x), \quad c(x, 0) = 3\cos x \left[ (x + 1)\,\text{mod}(2) \right], \quad c(x, 1) = 9\,e^{-x^2}, \quad \text{and} \quad c(x, 3) = 1 - x \left[ (x + 3)\,\text{mod}(2) \right]$$

This example highlights that the constraints and free-function may be discontinuous by using the modular arithmetic function. The result is a surface that is continuous in *x* at some coordinates (at *y* = −2, 1, and 3) and discontinuous at *y* = 0. The surfaces shown in Figures 1 and 2 were obtained using two distinct expressions for the free function, *g*(*<sup>x</sup>*, *y*).

**Figure 1.** Surface obtained using function *g*(*<sup>x</sup>*, *y*) = 0 (simplest surface).

**Figure 2.** Surface obtained using function *g*(*x*, *y*) = *x*<sup>2</sup>*y* − sin(5*x*) cos(4 mod(*y*, 1)).

#### *2.2. Example #2: Surface Subject to Two Functions and One Derivative Constraint*

This second example is provided to show how to use the general approach given in Equation (1) and described in [1], when derivative constraints are involved. Consider the following three constraints,

$$c(x, -2) = \sin(2x), \qquad c_y(x, 0) = 0, \qquad \text{and} \qquad c(x, 1) = 9\,e^{-x^2}.$$

Using the functions *p*1(*y*) = 1, *p*2(*y*) = *y*, and *p*3(*y*) = *y*2, the constrained expression form satisfying these three constraints assumes the form,

$$f(\mathbf{x}, \mathbf{y}) = \mathbf{g}(\mathbf{x}, \mathbf{y}) + \eta\_1(\mathbf{x}) + \eta\_2(\mathbf{x})\ \mathbf{y} + \eta\_3(\mathbf{x})\ \mathbf{y}^2. \tag{4}$$

Imposing these three constraints on Equation (4) yields,

$$\begin{aligned} \sin(2x) &= g(x,-2) + \eta_1 - 2\eta_2 + 4\eta_3 \\ 0 &= g_y(x,0) + \eta_2 \\ 9\,e^{-x^2} &= g(x,1) + \eta_1 + \eta_2 + \eta_3 \end{aligned}$$

from which the values of the *ηk* coefficients,

$$\begin{aligned} \eta_1 &= 2g_y(x,0) + 12\,e^{-x^2} - \frac{\sin(2x)}{3} + \frac{1}{3}g(x,-2) - \frac{4}{3}g(x,1) \\ \eta_2 &= -g_y(x,0) \\ \eta_3 &= \frac{\sin(2x)}{3} - \frac{1}{3}g(x,-2) - g_y(x,0) - 3\,e^{-x^2} + \frac{1}{3}g(x,1), \end{aligned}$$

can be derived. After substituting these coefficients into Equation (4), the constrained expression that always satisfies the three initial constraints is obtained. Using this expression and two different free functions, *g*(*<sup>x</sup>*, *y*), we obtained the surfaces shown in Figures 3 and 4, respectively. The constraint *cy*(*<sup>x</sup>*, 0) = 0, difficult to see in both figures, can be verified analytically.
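The analytic verification can also be scripted. The following Python sketch (the free function *g* and the test abscissa are arbitrary assumptions of ours; derivatives are approximated by central finite differences) builds Equation (4) with the *ηk* above and checks the three constraints numerically.

```python
import math

def g(x, y):                      # arbitrary free function (our choice)
    return math.exp(0.4 * x) * math.sin(5 * y) + x * y**3

def gy(x, y, h=1e-6):             # central-difference g_y
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

def f(x, y):                      # Eq. (4) with the eta_k derived in the text
    gy0 = gy(x, 0.0)
    gm2 = g(x, -2.0)
    gp1 = g(x, 1.0)
    E = math.exp(-x**2)
    s = math.sin(2 * x)
    eta1 = 2*gy0 + 12*E - s/3 + gm2/3 - 4*gp1/3
    eta2 = -gy0
    eta3 = s/3 - gm2/3 - gy0 - 3*E + gp1/3
    return g(x, y) + eta1 + eta2 * y + eta3 * y**2

def fy(x, y, h=1e-6):             # central-difference f_y
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0 = 0.7                          # arbitrary test abscissa
print(f(x0, -2.0) - math.sin(2*x0),   # ~ 0
      fy(x0, 0.0),                    # ~ 0
      f(x0, 1.0) - 9*math.exp(-x0**2))  # ~ 0
```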

**Figure 3.** Surface obtained using function *g*(*<sup>x</sup>*, *y*) = 0 (simplest surface).

**Figure 4.** Surface obtained using function *g*(*x*, *y*) = 3*x*<sup>2</sup>*y* − 2 sin(15*x*) cos(2*y*).

#### **3. Connecting Functions in Two Directions**

In this section, the Theory of Connections is extended to the two-dimensional case. Note that dealing with constraints in two (or more) directions (functions or derivatives) requires particular attention. In fact, two orthogonal constraint functions cannot be completely distinct as they intersect at one point where they need to match in value. In addition, if the formalism derived for the 1-D case is applied to 2-D case, some complications arise. These complications are highlighted in the following simple clarifying example.

Consider the two boundary constraint functions, *f*(*x*, 0) = *q*(*x*) and *f*(0, *y*) = *h*(*y*). Searching for the constrained expression as originally done in the one-dimensional case leads to,

$$f(x, y) = g(x, y) + \eta_1\, p_1(x, y) + \eta_2\, p_2(x, y).$$

The two boundary constraints imply,

$$\begin{cases} \begin{aligned} q(\mathbf{x}) &= \mathbf{g}(\mathbf{x}, \mathbf{0}) + \eta\_1 \ p\_1(\mathbf{x}, \mathbf{0}) + \eta\_2 \ p\_2(\mathbf{x}, \mathbf{0}) \\ h(\mathbf{y}) &= \mathbf{g}(\mathbf{0}, \mathbf{y}) + \eta\_1 \ p\_1(\mathbf{0}, \mathbf{y}) + \eta\_2 \ p\_2(\mathbf{0}, \mathbf{y}) \end{aligned} \end{cases}$$

To obtain the values of *η*1 and *η*2, the determinant of the matrix to invert is *p*1(*x*, 0) *p*2(0, *y*) − *p*1(0, *y*) *p*2(*x*, 0). This determinant is *y* if *p*1(*x*, *y*) = 1 and *p*2(*x*, *y*) = *y* are selected, or it is *x* if *p*1(*x*, *y*) = *x* and *p*2(*x*, *y*) = 1 are selected; in either case it vanishes somewhere in the domain. Therefore, to avoid singularities, this approach requires paying particular attention to the domain definition and/or to the user-selected functions, *pk*(*x*, *y*). To avoid dealing with these issues, a new (equivalent) formalism to derive constrained expressions is devised for the higher-dimensional case.

The Theory of Connections extension to the higher dimensional case (with constraints on all axes) can be obtained by re-writing the constrained expression into an equivalent form, highlighting a general and interesting property. Let us show this by an example. Equation (2) can be re-written as,

$$f(\mathbf{x}) = \underbrace{\frac{\mathbf{x}(2\mathbf{x}\_2 - \mathbf{x})}{2(\mathbf{x}\_2 - \mathbf{x}\_1)}\dot{y}\_1 + \frac{\mathbf{x}(\mathbf{x} - 2\mathbf{x}\_1)}{2(\mathbf{x}\_2 - \mathbf{x}\_1)}\dot{y}\_2}\_{A(\mathbf{x})} + \underbrace{\mathbf{g}(\mathbf{x}) - \frac{\mathbf{x}(2\mathbf{x}\_2 - \mathbf{x})}{2(\mathbf{x}\_2 - \mathbf{x}\_1)}\dot{y}\_1 - \frac{\mathbf{x}(\mathbf{x} - 2\mathbf{x}\_1)}{2(\mathbf{x}\_2 - \mathbf{x}\_1)}\dot{y}\_2}\_{B(\mathbf{x})}.\tag{5}$$

These two components, *<sup>A</sup>*(*x*) and *<sup>B</sup>*(*x*), of a constrained expression have a specific general meaning. The term, *<sup>A</sup>*(*x*), represents an (*any*) interpolating function satisfying the constraints while the *<sup>B</sup>*(*x*) term represents *all* interpolating functions that are vanishing at the constraints. Therefore, the generation of all functions satisfying multiple orthogonal constraints in *n*-dimensional space can always be expressed by the general form, *f*(*x*) = *<sup>A</sup>*(*x*) + *<sup>B</sup>*(*x*), where *<sup>A</sup>*(*x*) is *any* function satisfying the constraints and *<sup>B</sup>*(*x*) must represent *all* functions vanishing at the constraints. Equation *f*(*x*) = *<sup>A</sup>*(*x*) + *<sup>B</sup>*(*x*) is actually an alternative general form to write a *constrained expression*, that is, an alternative way to generalize interpolation: rather than derive a class of functions (e.g., monomials) satisfying a set of constraints, it represents *all* possible functions satisfying the set of constraints.

To prove that this additive formalism can describe *all* possible functions satisfying the constraints is immediate. Let *f*(*x*) be all functions satisfying the constraints and *y*(*x*) = *<sup>A</sup>*(*x*) + *<sup>B</sup>*(*x*) be the sum of a specific function satisfying the constraints, *<sup>A</sup>*(*x*), and a function, *<sup>B</sup>*(*x*), representing all functions that are null at the constraints. Then, *y*(*x*) will be equal to *f*(*x*) *iff <sup>B</sup>*(*x*) = *f*(*x*) − *<sup>A</sup>*(*x*), representing all functions that are null at the constraints.

As shown in Equation (5), once the *<sup>A</sup>*(*x*) function is obtained, then the *<sup>B</sup>*(*x*) function can be immediately derived. In fact, *<sup>B</sup>*(*x*) can be obtained by subtracting the *<sup>A</sup>*(*x*) function, where all the constraints are specified in terms of the *g*(*x*) free function, from the free function *g*(*x*). For this reason, let us write the general expression of a constrained expression as,

$$f(\mathbf{x}) = A(\mathbf{x}) + g(\mathbf{x}) - A(g(\mathbf{x})),\tag{6}$$

where *A*(*g*(*x*)) indicates the function satisfying the constraints where the constraints are specified in terms of *g*(*x*).
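As a minimal one-dimensional illustration of Equation (6), consider the single point constraint *f*(*x*0) = *y*0, for which one may take *A*(*x*) = *y*0 and hence *A*(*g*(*x*)) = *g*(*x*0). The Python sketch below (the values *x*0, *y*0, and the free function *g* are arbitrary assumptions of ours) shows that the constraint holds for any *g*.

```python
import math

x0, y0 = 1.2, 4.0              # arbitrary point constraint f(x0) = y0

def g(x):                      # freely chosen function
    return math.tanh(x) + x**2

def f(x):                      # Eq. (6) with A(x) = y0 and A(g(x)) = g(x0)
    return y0 + g(x) - g(x0)

print(f(x0))                   # equals y0, for any choice of g
```

Away from *x*0 the surface follows *g* (shifted), which is exactly the role of the *B* term: all functions vanishing at the constraint.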

The previous discussion serves to prove that extending the Theory of Connections to higher-dimensional spaces reduces to the problem of finding the function, *A*(*x*), only. In two dimensions, the function *A*(*x*) is provided in the literature by the Coons surface [11], *f*(*x*, *y*). This surface satisfies the Dirichlet boundary constraints,

$$f(0, y) = c(0, y), \quad f(1, y) = c(1, y), \quad f(x, 0) = c(x, 0), \quad \text{and} \quad f(x, 1) = c(x, 1), \tag{7}$$

where the surface is contained in the *x*, *y* ∈ [0, 1] × [0, 1] domain. This surface is used in computer graphics and in computational mechanics applications to smoothly join other surfaces together, particularly in the finite element and boundary element methods, to mesh problem domains into elements. The expression of the Coons surface is,

$$\begin{aligned} f(x,y) &= (1-x)\,c(0,y) + x\,c(1,y) + (1-y)\,c(x,0) + y\,c(x,1) - x\,y\,c(1,1) \\ &\quad - (1-x)(1-y)\,c(0,0) - (1-x)\,y\,c(0,1) - x\,(1-y)\,c(1,0), \end{aligned}$$

where the four subtracted terms are needed for continuity. Note that the constraint functions must match at the boundary corners, where they share the values *c*(0, 0), *c*(0, 1), *c*(1, 0), and *c*(1, 1). This equation can be written in matrix form as,

$$f(\mathbf{x}, y) = \begin{Bmatrix} 1, & 1 - \mathbf{x}, & \mathbf{x} \end{Bmatrix} \begin{bmatrix} 0 & c(\mathbf{x}, 0) & c(\mathbf{x}, 1) \\ c(0, y) & -c(0, 0) & -c(0, 1) \\ c(1, y) & -c(1, 0) & -c(1, 1) \end{bmatrix} \begin{Bmatrix} 1 \\ 1 - y \\ y \end{Bmatrix},$$

or, equivalently,

$$f(x, y) = \mathbf{v}^{\top}(x)\, \mathcal{M}(c(x, y))\, \mathbf{v}(y), \tag{8}$$

where

$$\mathcal{M}(c(x,y)) = \begin{bmatrix} 0 & c(x,0) & c(x,1) \\ c(0,y) & -c(0,0) & -c(0,1) \\ c(1,y) & -c(1,0) & -c(1,1) \end{bmatrix} \qquad \text{and}\qquad \mathbf{v}(z) = \begin{Bmatrix} 1 \\ 1-z \\ z \end{Bmatrix}.$$

Since the boundaries of *f*(*x*, *y*) match the boundaries of the *c*(*x*, *y*) constraint function, the identity, *f*(*x*, *y*) = *v*<sup>T</sup>(*x*)M(*f*(*x*, *y*)) *v*(*y*), holds along the boundaries for *any f*(*x*, *y*) function. Therefore, the *B*(*x*) function can be set as,

$$B(\mathbf{x}) := g(x, y) - \mathbf{v}^{\top}(x)\, \mathcal{M}(g(x, y))\, \mathbf{v}(y), \tag{9}$$

representing all functions that are always zero at the boundary constraints, as *g*(*<sup>x</sup>*, *y*) is a free function.

#### **4. Theory of Connections Surface Subject to Dirichlet Constraints**

Equations (8) and (9) can be merged to provide *all surfaces* with the boundary constraints defined in Equation (7) in the following compact form,

$$f(x, y) = \underbrace{\mathbf{v}^{\top}(x)\,\mathcal{M}(c(x, y))\,\mathbf{v}(y)}_{A(x, y)} + \underbrace{g(x, y) - \mathbf{v}^{\top}(x)\,\mathcal{M}(g(x, y))\,\mathbf{v}(y)}_{B(x, y)},\tag{10}$$

where, again, *<sup>A</sup>*(*<sup>x</sup>*, *y*) indicates an expression satisfying the boundary function constraints defined by *<sup>c</sup>*(*<sup>x</sup>*, *y*) and *<sup>B</sup>*(*<sup>x</sup>*, *y*) an expression that is zero at the boundaries. In matrix form, Equation (10) becomes,

$$f(x,y) = \begin{Bmatrix} 1\\ 1-x \\ x \end{Bmatrix}^{\top} \begin{bmatrix} g(x,y) & c(x,0) - g(x,0) & c(x,1) - g(x,1) \\ c(0,y) - g(0,y) & g(0,0) - c(0,0) & g(0,1) - c(0,1) \\ c(1,y) - g(1,y) & g(1,0) - c(1,0) & g(1,1) - c(1,1) \end{bmatrix} \begin{Bmatrix} 1\\ 1-y\\ y \end{Bmatrix},$$

where *g*(*<sup>x</sup>*, *y*) is a freely chosen function. In particular, if *g*(*<sup>x</sup>*, *y*) = 0, then the ToC surface becomes the Coons surface.
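The construction can be checked numerically. In the Python sketch below, the boundary data are taken from the constraint function used in Figure 5 and the free function is that of Equation (11); the same bilinear-blending routine is applied to *c* and to *g*, as in Equation (10), and the result matches *c* on all four edges of the unit square for any *g*.

```python
import math

def c(x, y):                    # boundary data from the surface used in Figure 5
    return math.sin(3*x - math.pi/4) * math.cos(4*y + math.pi/3)

def g(x, y):                    # free function of Eq. (11)
    return math.cos(4*math.pi*x) * math.sin(6*math.pi*y) / 3 - x**2 * math.cos(2*math.pi*y)

def coons(h, x, y):             # v^T(x) M(h) v(y), Eq. (8), for a generic function h
    vx = [1.0, 1.0 - x, x]
    vy = [1.0, 1.0 - y, y]
    M = [[0.0,       h(x, 0),  h(x, 1)],
         [h(0, y),  -h(0, 0), -h(0, 1)],
         [h(1, y),  -h(1, 0), -h(1, 1)]]
    return sum(vx[i] * M[i][j] * vy[j] for i in range(3) for j in range(3))

def f(x, y):                    # Eq. (10): A(x, y) + B(x, y)
    return coons(c, x, y) + g(x, y) - coons(g, x, y)

print(f(0, 0.37) - c(0, 0.37), f(0.81, 1) - c(0.81, 1))   # ~ 0 on the edges
```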

Figure 5 (left) shows the Coons surface subject to the constraints,

$$\begin{aligned} c(x,0) &= \sin(3x - \pi/4) \cos(\pi/3) \\ c(x,1) &= \sin(3x - \pi/4) \cos(4 + \pi/3) \\ c(0,y) &= \sin(-\pi/4) \cos(4y + \pi/3) \\ c(1,y) &= \sin(3 - \pi/4) \cos(4y + \pi/3) \end{aligned}$$

and Figure 5 (right) shows a ToC surface that is obtained using the free function,

$$g(x, y) = \frac{1}{3}\cos(4\pi x)\sin(6\pi y) - x^2\cos(2\pi y). \tag{11}$$

**Figure 5.** Coons surface (**left**); and ToC surface (**right**) using *g*(*<sup>x</sup>*, *y*) provided in Equation (11).

For generic boundaries defined in the rectangle *x*, *y* ∈ [*xi*, *xf* ] × [*yi*, *yf* ], the ToC surface becomes,

$$\begin{aligned} f(x, y) ={}& g(x, y) + \frac{x - x_f}{x_i - x_f}\left[c(x_i, y) - g(x_i, y)\right] + \frac{x - x_i}{x_f - x_i}\left[c(x_f, y) - g(x_f, y)\right] \\ &+ \frac{y - y_f}{y_i - y_f}\left[c(x, y_i) - g(x, y_i)\right] + \frac{y - y_i}{y_f - y_i}\left[c(x, y_f) - g(x, y_f)\right] \\ &- \frac{(x - x_f)(y - y_f)}{(x_i - x_f)(y_i - y_f)}\left[c(x_i, y_i) - g(x_i, y_i)\right] - \frac{(x - x_f)(y - y_i)}{(x_i - x_f)(y_f - y_i)}\left[c(x_i, y_f) - g(x_i, y_f)\right] \\ &- \frac{(x - x_i)(y - y_f)}{(x_f - x_i)(y_i - y_f)}\left[c(x_f, y_i) - g(x_f, y_i)\right] - \frac{(x - x_i)(y - y_i)}{(x_f - x_i)(y_f - y_i)}\left[c(x_f, y_f) - g(x_f, y_f)\right]. \end{aligned} \tag{12}$$

Equation (12) can also be set in matrix form,

$$f(x, y) = \mathbf{v}_x^{\top}(x, x_i, x_f)\; \mathcal{M}(x, y)\; \mathbf{v}_y(y, y_i, y_f),$$

where

$$\mathcal{M}(\mathbf{x},\mathbf{y}) = \begin{bmatrix} \mathbf{g}(\mathbf{x},\mathbf{y}) & \mathbf{c}(\mathbf{x},\mathbf{y}\_i) - \mathbf{g}(\mathbf{x},\mathbf{y}\_i) & \mathbf{c}(\mathbf{x},\mathbf{y}\_f) - \mathbf{g}(\mathbf{x},\mathbf{y}\_f) \\ \mathbf{c}(\mathbf{x}\_i,\mathbf{y}) - \mathbf{g}(\mathbf{x}\_i,\mathbf{y}) & \mathbf{g}(\mathbf{x}\_i,\mathbf{y}\_i) - \mathbf{c}(\mathbf{x}\_i,\mathbf{y}\_i) & \mathbf{g}(\mathbf{x}\_i,\mathbf{y}\_f) - \mathbf{c}(\mathbf{x}\_i,\mathbf{y}\_f) \\ \mathbf{c}(\mathbf{x}\_f,\mathbf{y}) - \mathbf{g}(\mathbf{x}\_f,\mathbf{y}) & \mathbf{g}(\mathbf{x}\_f,\mathbf{y}\_i) - \mathbf{c}(\mathbf{x}\_f,\mathbf{y}\_i) & \mathbf{g}(\mathbf{x}\_f,\mathbf{y}\_f) - \mathbf{c}(\mathbf{x}\_f,\mathbf{y}\_f) \end{bmatrix}$$


and

$$\mathbf{v}_x(x, x_i, x_f) = \begin{Bmatrix} 1 \\ \dfrac{x - x_f}{x_i - x_f} \\ \dfrac{x - x_i}{x_f - x_i} \end{Bmatrix} \qquad \text{and} \qquad \mathbf{v}_y(y, y_i, y_f) = \begin{Bmatrix} 1 \\ \dfrac{y - y_f}{y_i - y_f} \\ \dfrac{y - y_i}{y_f - y_i} \end{Bmatrix}.$$
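A numerical sketch of this generic-rectangle form follows (in Python; the rectangle, the boundary data *c*, and the free function *g* are arbitrary assumptions of ours). It builds the matrix form above and confirms that the surface reproduces *c* on all four edges.

```python
import math

xi, xf, yi, yf = -2.0, 1.0, 1.0, 3.0     # arbitrary rectangle

def c(x, y):                              # boundary data from a known surface (arbitrary)
    return math.sin(x) * y + x * y**2

def g(x, y):                              # freely chosen function
    return math.cos(x * y) + 0.5 * x

def vx(x): return [1.0, (x - xf) / (xi - xf), (x - xi) / (xf - xi)]
def vy(y): return [1.0, (y - yf) / (yi - yf), (y - yi) / (yf - yi)]

def M(x, y):                              # matrix of the generic-domain form above
    return [
        [g(x, y),             c(x, yi) - g(x, yi),   c(x, yf) - g(x, yf)],
        [c(xi, y) - g(xi, y), g(xi, yi) - c(xi, yi), g(xi, yf) - c(xi, yf)],
        [c(xf, y) - g(xf, y), g(xf, yi) - c(xf, yi), g(xf, yf) - c(xf, yf)],
    ]

def f(x, y):                              # Eq. (12) in matrix form
    a, b, Mm = vx(x), vy(y), M(x, y)
    return sum(a[i] * Mm[i][j] * b[j] for i in range(3) for j in range(3))

print(f(xi, 2.2) - c(xi, 2.2), f(0.3, yf) - c(0.3, yf))   # ~ 0 on the edges
```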

Note that all the ToC surfaces provided are linear in *g*(*<sup>x</sup>*, *y*), and, therefore, they can be used to solve, by linear/nonlinear least-squares, two-dimensional optimization problems subject to boundary function constraints, such as linear/nonlinear partial differential equations.

#### **5. Multi-Function Constraints at Generic Coordinates**

Equation (12) can be generalized to many function constraints (a grid of functions). Assume a set of *nx* function constraints, *c*(*xk*, *y*), and a set of *ny* function constraints, *c*(*x*, *yk*), intersecting at the *nx ny* points *pij* = *c*(*xi*, *yj*). Then, all surfaces satisfying these function constraints can be expressed by,

$$\begin{split} f(x, y) &= g(x, y) + \sum_{k=1}^{n_x} \left[ c(x_k, y) - g(x_k, y) \right] \prod_{i \neq k} \frac{x - x_i}{x_k - x_i} \\ &+ \sum_{k=1}^{n_y} \left[ c(x, y_k) - g(x, y_k) \right] \prod_{i \neq k} \frac{y - y_i}{y_k - y_i} \\ &- \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \left[ c(x_i, y_j) - g(x_i, y_j) \right] \prod_{k \neq i} \frac{x - x_k}{x_i - x_k} \prod_{k \neq j} \frac{y - y_k}{y_j - y_k}. \end{split} \tag{13}$$

Again, Equation (13) can be written in compact form,

$$f(x, y) = \mathbf{v}^{\top}(x)\, \mathcal{M}(c(x, y))\, \mathbf{v}(y) + g(x, y) - \mathbf{v}^{\top}(x)\, \mathcal{M}(g(x, y))\, \mathbf{v}(y)$$

where,

$$\mathbf{v}(x) = \begin{Bmatrix} 1 \\ \prod_{i \neq 1} \dfrac{x - x_i}{x_1 - x_i} \\ \vdots \\ \prod_{i \neq n_x} \dfrac{x - x_i}{x_{n_x} - x_i} \end{Bmatrix} \qquad \text{and} \qquad \mathbf{v}(y) = \begin{Bmatrix} 1 \\ \prod_{i \neq 1} \dfrac{y - y_i}{y_1 - y_i} \\ \vdots \\ \prod_{i \neq n_y} \dfrac{y - y_i}{y_{n_y} - y_i} \end{Bmatrix}$$

and

$$\mathcal{M}(c(x,y)) = \begin{bmatrix} 0 & c(x,y_1) & \dots & c(x,y_{n_y}) \\ c(x_1,y) & -c(x_1,y_1) & \dots & -c(x_1,y_{n_y}) \\ \vdots & \vdots & \ddots & \vdots \\ c(x_{n_x},y) & -c(x_{n_x},y_1) & \dots & -c(x_{n_x},y_{n_y}) \end{bmatrix}$$

For example, two function constraints in *x* and three function constraints in *y* can be obtained using the matrix,

$$
\mathcal{M}(c(\mathbf{x}, y)) = \begin{bmatrix}
0 & c(\mathbf{x}, y\_1) & c(\mathbf{x}, y\_2) & c(\mathbf{x}, y\_3) \\
c(\mathbf{x}\_1, y) & -c(\mathbf{x}\_1, y\_1) & -c(\mathbf{x}\_1, y\_2) & -c(\mathbf{x}\_1, y\_3) \\
c(\mathbf{x}\_2, y) & -c(\mathbf{x}\_2, y\_1) & -c(\mathbf{x}\_2, y\_2) & -c(\mathbf{x}\_2, y\_3)
\end{bmatrix}
$$

and the vectors,

$$\mathbf{v}(x) = \begin{Bmatrix} 1\\ \dfrac{x - x_2}{x_1 - x_2} \\ \dfrac{x - x_1}{x_2 - x_1} \end{Bmatrix} \qquad \text{and} \qquad \mathbf{v}(y) = \begin{Bmatrix} 1\\ \dfrac{(y - y_2)(y - y_3)}{(y_1 - y_2)(y_1 - y_3)} \\ \dfrac{(y - y_1)(y - y_3)}{(y_2 - y_1)(y_2 - y_3)} \\ \dfrac{(y - y_1)(y - y_2)}{(y_3 - y_1)(y_3 - y_2)} \end{Bmatrix}.$$

Two examples of ToC surfaces are given in Figure 6 in the *x*, *y* ∈ [−2, 1] × [1, 3] domain.

**Figure 6.** ToC surface subject to multiple constraints on two axes: using *g*(*x*, *y*) = 0 (**left**); and using *g*(*x*, *y*) = mod(*x*, 0.5) cos(19*y*) − *x* mod(3*y*, 0.4) (**right**).
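Equation (13) can be exercised numerically as well. The Python sketch below uses two *x*-constraints and three *y*-constraints on the same [−2, 1] × [1, 3] domain (the constraint coordinates, the constraint data *c*, and the free function *g* are arbitrary choices of ours) and checks that the surface matches *c* along every constraint line.

```python
import math

xs = [-2.0, 1.0]               # x-constraint coordinates (arbitrary)
ys = [1.0, 2.0, 3.0]           # y-constraint coordinates (arbitrary)

def c(x, y):                   # constraint data from a known function (arbitrary)
    return math.sin(x + 2*y) + x * y

def g(x, y):                   # freely chosen function
    return math.exp(0.3 * x) * math.cos(5 * y)

def lag(z, nodes, k):          # switching polynomial: 1 at nodes[k], 0 at the others
    p = 1.0
    for j, zj in enumerate(nodes):
        if j != k:
            p *= (z - zj) / (nodes[k] - zj)
    return p

def f(x, y):                   # Eq. (13)
    s = g(x, y)
    s += sum((c(xk, y) - g(xk, y)) * lag(x, xs, k) for k, xk in enumerate(xs))
    s += sum((c(x, yk) - g(x, yk)) * lag(y, ys, k) for k, yk in enumerate(ys))
    s -= sum((c(xi, yj) - g(xi, yj)) * lag(x, xs, i) * lag(y, ys, j)
             for i, xi in enumerate(xs) for j, yj in enumerate(ys))
    return s

print(f(-2.0, 2.4) - c(-2.0, 2.4), f(0.3, 2.0) - c(0.3, 2.0))   # ~ 0 on constraint lines
```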

#### **6. Constraints on Function and Derivatives**

Farin [12] provided the "Boolean sum formulation" (also called the "Hermite–Coons formulation") of the Coons surface, which includes boundary derivatives,

$$f(x, y) = \mathbf{v}^{\top}(y)\,\mathbf{F}^{x}(x) + \mathbf{v}^{\top}(x)\,\mathbf{F}^{y}(y) - \mathbf{v}^{\top}(x)\, M^{xy}\, \mathbf{v}(y)\tag{14}$$

where

$$\begin{aligned} \mathbf{v}(z) &:= \{2z^3 - 3z^2 + 1, \quad z^3 - 2z^2 + z, \quad -2z^3 + 3z^2, \quad z^3 - z^2\}^\top \\ \mathbf{F}^x(\mathbf{x}) &:= \{c(\mathbf{x}, 0), \quad c\_y(\mathbf{x}, 0), \quad c(\mathbf{x}, 1), \quad c\_y(\mathbf{x}, 1)\}^\top \\ \mathbf{F}^y(y) &:= \{c(0, y), \quad c\_x(0, y), \quad c(1, y), \quad c\_x(1, y)\}^\top \end{aligned}$$

and

$$M^{xy}(x,y) := \begin{bmatrix} c(0,0) & c_y(0,0) & c(0,1) & c_y(0,1) \\ c_x(0,0) & c_{xy}(0,0) & c_x(0,1) & c_{xy}(0,1) \\ c(1,0) & c_y(1,0) & c(1,1) & c_y(1,1) \\ c_x(1,0) & c_{xy}(1,0) & c_x(1,1) & c_{xy}(1,1) \end{bmatrix}.$$

The formulation provided in Equation (14) can be put in the matrix compact form,

$$f(x, y) = \mathbf{v}^{\top}(x)\, \mathcal{M}(c(x, y))\, \mathbf{v}(y), \tag{15}$$

where

$$\mathbf{v}(z) := \{1, \quad 2z^3 - 3z^2 + 1, \quad z^3 - 2z^2 + z, \quad -2z^3 + 3z^2, \quad z^3 - z^2\}^\top \tag{16}$$

and the 5 × 5 matrix, M(*c*(*<sup>x</sup>*, *y*)), has the expression,

$$\mathcal{M}(c(\mathbf{x},y)) := \begin{bmatrix} 0 & c(\mathbf{x},0) & c\_y(\mathbf{x},0) & c(\mathbf{x},1) & c\_y(\mathbf{x},1) \\ c(0,y) & -c(0,0) & -c\_y(0,0) & -c(0,1) & -c\_y(0,1) \\ c\_x(0,y) & -c\_x(0,0) & -c\_{xy}(0,0) & -c\_x(0,1) & -c\_{xy}(0,1) \\ c(1,y) & -c(1,0) & -c\_y(1,0) & -c(1,1) & -c\_y(1,1) \\ c\_x(1,y) & -c\_x(1,0) & -c\_{xy}(1,0) & -c\_x(1,1) & -c\_{xy}(1,1) \end{bmatrix} . \tag{17}$$

To verify the boundary derivative constraints, the following partial derivatives of Equation (15) are used,

$$\begin{aligned} f_x(x,y) &= \left[\mathbf{v}_x^{\top}(x)\,\mathcal{M}(c(x,y)) + \mathbf{v}^{\top}(x)\,\mathcal{M}_x(c(x,y))\right] \mathbf{v}(y) \\ f_y(x,y) &= \mathbf{v}^{\top}(x)\left[\mathcal{M}_y(c(x,y))\,\mathbf{v}(y) + \mathcal{M}(c(x,y))\,\mathbf{v}_y(y)\right] \end{aligned}$$

where

$$
\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}z} = \begin{Bmatrix} 0\\ 6z(z-1) \\ 3z^2 - 4z + 1 \\ 6z(1-z) \\ z(3z-2) \end{Bmatrix}, \quad \mathcal{M}_y = \begin{bmatrix} 0 & \mathbf{0}_{1\times 4} \\ c_y(0,y) & \mathbf{0}_{1\times 4} \\ c_{xy}(0,y) & \mathbf{0}_{1\times 4} \\ c_y(1,y) & \mathbf{0}_{1\times 4} \\ c_{xy}(1,y) & \mathbf{0}_{1\times 4} \end{bmatrix}, \quad \text{and} \quad \mathcal{M}_x^{\top} = \begin{bmatrix} 0 & \mathbf{0}_{1\times 4} \\ c_x(x,0) & \mathbf{0}_{1\times 4} \\ c_{xy}(x,0) & \mathbf{0}_{1\times 4} \\ c_x(x,1) & \mathbf{0}_{1\times 4} \\ c_{xy}(x,1) & \mathbf{0}_{1\times 4} \end{bmatrix}.
$$

The ToC in 2D with function and derivative boundary constraints is simply,

$$f(\mathbf{x}, y) = \underbrace{\mathbf{v}^{\intercal}(\mathbf{x})\mathcal{M}(\mathbf{c}(\mathbf{x}, y))\mathbf{v}(y)}\_{A(\mathbf{x}, y)} + \underbrace{\mathbf{g}(\mathbf{x}, y) - \mathbf{v}^{\intercal}(\mathbf{x})\mathcal{M}(\mathbf{g}(\mathbf{x}, y))\mathbf{v}(y)}\_{B(\mathbf{x}, y)}\tag{18}$$

where the M matrix and the *v* vectors are provided by Equations (17) and (16), respectively.
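A numerical check of Equation (18) is sketched below in Python. The derivative entries of M are approximated by central finite differences, and all data (the functions *c* and *g* and the test points) are arbitrary assumptions of ours; the surface reproduces both the boundary values and the boundary derivatives for any *g*.

```python
import math

def c(x, y):                     # boundary data from a known surface (arbitrary)
    return math.sin(2*x) * math.exp(y) + x * y

def g(x, y):                     # freely chosen function
    return math.cos(3*x + y) + x**2 * y

H = 1e-5                         # finite-difference step for the derivative entries

def dxh(h, x, y): return (h(x + H, y) - h(x - H, y)) / (2*H)
def dyh(h, x, y): return (h(x, y + H) - h(x, y - H)) / (2*H)
def dxyh(h, x, y): return (dxh(h, x, y + H) - dxh(h, x, y - H)) / (2*H)

def v(z):                        # Hermite blending vector, Eq. (16)
    return [1.0, 2*z**3 - 3*z**2 + 1, z**3 - 2*z**2 + z,
            -2*z**3 + 3*z**2, z**3 - z**2]

def M(h, x, y):                  # Eq. (17), built for a generic function h
    return [
        [0.0,           h(x, 0),       dyh(h, x, 0),   h(x, 1),       dyh(h, x, 1)],
        [h(0, y),      -h(0, 0),      -dyh(h, 0, 0),  -h(0, 1),      -dyh(h, 0, 1)],
        [dxh(h, 0, y), -dxh(h, 0, 0), -dxyh(h, 0, 0), -dxh(h, 0, 1), -dxyh(h, 0, 1)],
        [h(1, y),      -h(1, 0),      -dyh(h, 1, 0),  -h(1, 1),      -dyh(h, 1, 1)],
        [dxh(h, 1, y), -dxh(h, 1, 0), -dxyh(h, 1, 0), -dxh(h, 1, 1), -dxyh(h, 1, 1)],
    ]

def blend(h, x, y):              # v^T(x) M(h) v(y), Eq. (15)
    vx, vy, Mh = v(x), v(y), M(h, x, y)
    return sum(vx[i] * Mh[i][j] * vy[j] for i in range(5) for j in range(5))

def f(x, y):                     # Eq. (18): A + g - A(g)
    return blend(c, x, y) + g(x, y) - blend(g, x, y)

x0, y0 = 0.63, 0.27              # arbitrary test abscissae
print(f(x0, 0) - c(x0, 0),                 # value constraint, ~ 0
      dyh(f, x0, 0) - dyh(c, x0, 0))       # derivative constraint, ~ 0
```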

Dirichlet/Neumann mixed constraints can be derived, as shown in the examples provided in Sections 6.1–6.4. The compact matrix form is obtained from the matrix defined in Equation (17) by removing the rows and columns associated with the boundary constraints not provided, while the vectors *v*(*x*) and *v*(*y*) are derived by imposing the constraints. Note that, in general, the vectors *v*(*x*) and *v*(*y*) are *not* unique; this follows from the fact that the *A*(*x*) term in Equation (6) is not unique.

In the next subsections, four Dirichlet/Neumann mixed constraint examples providing the simplest expressions for *v*(*x*) and *v*(*y*) are derived. Appendix A contains the expressions of the *v*(*x*) and *v*(*y*) vectors for all combinations of Dirichlet and Neumann constraints.

#### *6.1. Constraints: c*(0, *y*) *and <sup>c</sup>*(*<sup>x</sup>*, 0)

In this case, the Coons-type surface satisfying the boundary constraints can be expressed as,

$$f(\mathbf{x}, y) = \begin{Bmatrix} 1 & p(\mathbf{x}) \end{Bmatrix} \begin{bmatrix} 0 & c(\mathbf{x}, 0) \\ c(0, y) & -c(0, 0) \end{bmatrix} \begin{Bmatrix} 1 \\ q(y) \end{Bmatrix}$$

where *p*(*x*) and *q*(*y*) are unknown functions. Expanding, we obtain *f*(*<sup>x</sup>*, *y*) = *<sup>c</sup>*(*<sup>x</sup>*, <sup>0</sup>)*q*(*y*) + *p*(*x*)[*c*(0, *y*) − *c*(0, <sup>0</sup>)*q*(*y*)]. The two constraints are satisfied if,

$$\begin{aligned} c(0,y) &= c(0,0)q(y) + p(0)[c(0,y) - c(0,0)q(y)],\\ c(x,0) &= c(x,0)q(0) + p(x)[c(0,0) - c(0,0)q(0)].\end{aligned}$$

Therefore, the *p*(*x*) and *q*(*y*) functions must satisfy *p*(0) = 1 and *q*(0) = 1. The simplest expressions satisfying these equations can be obtained by selecting *p*(*x*) = 1 and *q*(*y*) = 1. In this case, the associated ToC surface is given by,

$$f(\mathbf{x}, \mathbf{y}) = \begin{Bmatrix} \mathbf{1} & \mathbf{1} \end{Bmatrix} \begin{bmatrix} \mathbf{g}(\mathbf{x}, \mathbf{y}) & c(\mathbf{x}, \mathbf{0}) - \mathbf{g}(\mathbf{x}, \mathbf{0}) \\ c(\mathbf{0}, \mathbf{y}) - \mathbf{g}(\mathbf{0}, \mathbf{y}) & \mathbf{g}(\mathbf{0}, \mathbf{0}) - c(\mathbf{0}, \mathbf{0}) \end{bmatrix} \begin{Bmatrix} \mathbf{1} \\ \mathbf{1} \end{Bmatrix}$$

Note that any functions satisfying *p*(0) = 1 and *q*(0) = 1 can be adopted to obtain a ToC surface satisfying the constraints *f*(0, *y*) = *c*(0, *y*) and *f*(*x*, 0) = *c*(*x*, 0). This is because there are infinitely many Coons-type surfaces satisfying the constraints. Consequently, the vectors *v*(*x*) and *v*(*y*) are not unique.
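As a sanity check, the constrained expression above can be verified symbolically. The sketch below (ours, not from the paper) picks arbitrary illustrative choices for *c*(*x*, *y*) and *g*(*x*, *y*) and confirms that *f* reproduces both boundary functions regardless of *g*.

```python
# Symbolic check of the Section 6.1 surface: for any free function g(x, y),
# f(0, y) = c(0, y) and f(x, 0) = c(x, 0). The particular c and g below are
# illustrative assumptions, not taken from the paper.
import sympy as sp

x, y = sp.symbols('x y')
c = sp.sin(x) + sp.exp(y)        # assumed constraint function
g = sp.cos(3*x*y) + x**2 * y     # assumed (arbitrary) free function

# Expanded form of {1 1} [[g, c(x,0)-g(x,0)], [c(0,y)-g(0,y), g(0,0)-c(0,0)]] {1 1}^T
f = (g
     + c.subs(y, 0) - g.subs(y, 0)
     + c.subs(x, 0) - g.subs(x, 0)
     + g.subs({x: 0, y: 0}) - c.subs({x: 0, y: 0}))

assert sp.simplify(f.subs(x, 0) - c.subs(x, 0)) == 0   # f(0, y) = c(0, y)
assert sp.simplify(f.subs(y, 0) - c.subs(y, 0)) == 0   # f(x, 0) = c(x, 0)
print("Section 6.1 constraints satisfied")
```

Replacing *c* and *g* by any other (differentiable) choices leaves the assertions unchanged, which is the defining property of the constrained expression.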

#### *6.2. Constraints: c*(0, *y*) *and cy*(*x*, 0)

For these boundary constraints, the Coons-type surface is expressed by,

$$\begin{aligned} f(x,y) &= \begin{Bmatrix} 1 & p(x) \end{Bmatrix} \begin{bmatrix} 0 & c\_y(x,0) \\ c(0,y) & -c\_y(0,0) \end{bmatrix} \begin{Bmatrix} 1 \\ q(y) \end{Bmatrix} \\ &= c\_y(x,0)q(y) + p(x)[c(0,y) - c\_y(0,0)q(y)]. \end{aligned}$$

The constraints are satisfied if,

$$\begin{aligned} c(0, y) &= c\_y(0, 0)q(y) + p(0)[c(0, y) - c\_y(0, 0)q(y)], \\ c\_y(x, 0) &= c\_y(x, 0)q\_y(0) + p(x)[c\_y(0, 0) - c\_y(0, 0)q\_y(0)]. \end{aligned}$$

Therefore, the *p*(*x*) and *q*(*y*) functions must satisfy *p*(0) = 1 and *qy*(0) = 1. One solution is *p*(*x*) = 1 and *q*(*y*) = *y*. Therefore, the associated ToC surface is given by,

$$f(x, y) = \begin{Bmatrix} 1 & 1 \end{Bmatrix} \begin{bmatrix} g(x, y) & c\_y(x, 0) - g\_y(x, 0) \\ c(0, y) - g(0, y) & g\_y(0, 0) - c\_y(0, 0) \end{bmatrix} \begin{Bmatrix} 1 \\ y \end{Bmatrix}.$$
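The same symbolic verification can be run for this mixed Dirichlet/Neumann case; the sketch below uses assumed illustrative choices of *c* and *g* and checks both the function constraint at *x* = 0 and the derivative constraint at *y* = 0.

```python
# Symbolic check of the Section 6.2 surface: f(0, y) = c(0, y) and
# f_y(x, 0) = c_y(x, 0) for an arbitrary g. The c and g below are assumed
# illustrative choices, not from the paper.
import sympy as sp

x, y = sp.symbols('x y')
c = sp.sin(x) * sp.cosh(y)       # assumed constraint function
g = sp.exp(x * y)                # assumed (arbitrary) free function

cy = sp.diff(c, y)
gy = sp.diff(g, y)

# {1 1} [[g, c_y(x,0)-g_y(x,0)], [c(0,y)-g(0,y), g_y(0,0)-c_y(0,0)]] {1, y}^T
f = (g
     + y * (cy.subs(y, 0) - gy.subs(y, 0))
     + c.subs(x, 0) - g.subs(x, 0)
     + y * (gy.subs({x: 0, y: 0}) - cy.subs({x: 0, y: 0})))

assert sp.simplify(f.subs(x, 0) - c.subs(x, 0)) == 0               # Dirichlet at x = 0
assert sp.simplify(sp.diff(f, y).subs(y, 0) - cy.subs(y, 0)) == 0  # Neumann at y = 0
print("Section 6.2 constraints satisfied")
```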

#### *6.3. Neumann Constraints: cx*(0, *y*)*, cx*(1, *y*)*, cy*(*x*, 0)*, and cy*(*x*, 1)

In this case, the Coons-type surface satisfying the boundary constraints can be expressed as,

$$f(\mathbf{x},\mathbf{y}) = \begin{Bmatrix} 1, & p\_1(\mathbf{x}), & p\_2(\mathbf{x}) \end{Bmatrix} \begin{bmatrix} 0 & c\_y(\mathbf{x},0) & c\_y(\mathbf{x},1) \\ c\_x(0,y) & -c\_{xy}(0,0) & -c\_{xy}(0,1) \\ c\_x(1,y) & -c\_{xy}(1,0) & -c\_{xy}(1,1) \end{bmatrix} \begin{Bmatrix} 1 \\ q\_1(y) \\ q\_2(y) \end{Bmatrix} .$$

The constraints are satisfied if,

$$\begin{aligned} c\_{x}(0,y)&=q\_{1}(y)c\_{xy}(0,0)+q\_{2}(y)c\_{xy}(0,1)+\\ &+p\_{1x}(0)[c\_{x}(0,y)-q\_{1}(y)c\_{xy}(0,0)-q\_{2}(y)c\_{xy}(0,1)]+\\ &+p\_{2x}(0)[c\_{x}(1,y)-q\_{1}(y)c\_{xy}(1,0)-q\_{2}(y)c\_{xy}(1,1)]\\ c\_{x}(1,y)&=q\_{1}(y)c\_{xy}(1,0)+q\_{2}(y)c\_{xy}(1,1)+\\ &+p\_{1x}(1)[c\_{x}(0,y)-q\_{1}(y)c\_{xy}(0,0)-q\_{2}(y)c\_{xy}(0,1)]+\\ &+p\_{2x}(1)[c\_{x}(1,y)-q\_{1}(y)c\_{xy}(1,0)-q\_{2}(y)c\_{xy}(1,1)]\\ c\_{y}(x,0)&=q\_{1y}(0)c\_{y}(x,0)+q\_{2y}(0)c\_{y}(x,1)+\\ &+p\_{1}(x)[c\_{xy}(0,0)-q\_{1y}(0)c\_{xy}(0,0)-q\_{2y}(0)c\_{xy}(0,1)]+\\ &+p\_{2}(x)[c\_{xy}(1,0)-q\_{1y}(0)c\_{xy}(1,0)-q\_{2y}(0)c\_{xy}(1,1)]\\ c\_{y}(x,1)&=q\_{1y}(1)c\_{y}(x,0)+q\_{2y}(1)c\_{y}(x,1)+\\ &+p\_{1}(x)[c\_{xy}(0,1)-q\_{1y}(1)c\_{xy}(0,0)-q\_{2y}(1)c\_{xy}(0,1)]+\\ &+p\_{2}(x)[c\_{xy}(1,1)-q\_{1y}(1)c\_{xy}(1,0)-q\_{2y}(1)c\_{xy}(1,1)] \end{aligned}$$

These equations imply *p*1*x*(0) = *q*1*y*(0) = 1, *p*1*x*(1) = *q*1*y*(1) = 0, *p*2*x*(0) = *q*2*y*(0) = 0, and *p*2*x*(1) = *q*2*y*(1) = 1. Therefore, the simplest solution is *p*1(*t*) = *q*1(*t*) = *t* − *t*<sup>2</sup>/2 and *p*2(*t*) = *q*2(*t*) = *t*<sup>2</sup>/2. Then, the associated ToC surface satisfying the Neumann constraints is given by,

$$f(x,y) = \mathbf{v}^{\mathrm{T}}(x) \begin{bmatrix} g(x,y) & c\_y(x,0) - g\_y(x,0) & c\_y(x,1) - g\_y(x,1) \\ c\_x(0,y) - g\_x(0,y) & g\_{xy}(0,0) - c\_{xy}(0,0) & g\_{xy}(0,1) - c\_{xy}(0,1) \\ c\_x(1,y) - g\_x(1,y) & g\_{xy}(1,0) - c\_{xy}(1,0) & g\_{xy}(1,1) - c\_{xy}(1,1) \end{bmatrix} \mathbf{v}(y)$$

where

$$\mathbf{v}^{\mathrm{T}}(x) = \begin{Bmatrix} 1, & x - \frac{x^2}{2}, & \frac{x^2}{2} \end{Bmatrix} \qquad \text{and} \qquad \mathbf{v}^{\mathrm{T}}(y) = \begin{Bmatrix} 1, & y - \frac{y^2}{2}, & \frac{y^2}{2} \end{Bmatrix}.$$
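The full Neumann case can also be checked symbolically. The sketch below (with assumed illustrative choices of *c* and *g*) assembles the 3 × 3 matrix and the *v* vectors of this subsection and verifies all four derivative constraints.

```python
# Symbolic check of the Section 6.3 Neumann surface: f_x matches c_x on
# x = 0, 1 and f_y matches c_y on y = 0, 1 for an arbitrary g. The c and g
# below are assumed illustrative choices, not from the paper.
import sympy as sp

x, y = sp.symbols('x y')
c = sp.sin(2*x) * sp.cos(3*y)        # assumed constraint function
g = sp.exp(x) * y**3 + x * y         # assumed (arbitrary) free function

cx, cy, cxy = sp.diff(c, x), sp.diff(c, y), sp.diff(c, x, y)
gx, gy, gxy = sp.diff(g, x), sp.diff(g, y), sp.diff(g, x, y)

def at(e, a=None, b=None):
    """Evaluate an expression at x = a and/or y = b."""
    s = {}
    if a is not None: s[x] = a
    if b is not None: s[y] = b
    return e.subs(s)

M = sp.Matrix([
    [g,                     at(cy, b=0) - at(gy, b=0),    at(cy, b=1) - at(gy, b=1)],
    [at(cx, 0) - at(gx, 0), at(gxy, 0, 0) - at(cxy, 0, 0), at(gxy, 0, 1) - at(cxy, 0, 1)],
    [at(cx, 1) - at(gx, 1), at(gxy, 1, 0) - at(cxy, 1, 0), at(gxy, 1, 1) - at(cxy, 1, 1)]])

v = lambda t: sp.Matrix([1, t - t**2/2, t**2/2])
f = (v(x).T * M * v(y))[0]

assert sp.simplify(sp.diff(f, x).subs(x, 0) - at(cx, 0)) == 0    # f_x(0, y)
assert sp.simplify(sp.diff(f, x).subs(x, 1) - at(cx, 1)) == 0    # f_x(1, y)
assert sp.simplify(sp.diff(f, y).subs(y, 0) - at(cy, b=0)) == 0  # f_y(x, 0)
assert sp.simplify(sp.diff(f, y).subs(y, 1) - at(cy, b=1)) == 0  # f_y(x, 1)
print("Section 6.3 Neumann constraints satisfied")
```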

#### *6.4. Constraints: c*(0, *y*)*, cy*(*x*, 0)*, and cy*(*x*, 1)

In this case, the Coons-type surface satisfying the boundary constraints is in the form,

$$f(x, y) = \begin{Bmatrix} 1 \\ p(x) \end{Bmatrix}^{\mathrm{T}} \begin{bmatrix} 0 & c\_y(x, 0) & c\_y(x, 1) \\ c(0, y) & -c\_y(0, 0) & -c\_y(0, 1) \end{bmatrix} \begin{Bmatrix} 1 \\ q\_1(y) \\ q\_2(y) \end{Bmatrix}.$$

The constraints are satisfied if *p*(0) = 1, *q*1*y*(0) = 1, *q*1*y*(1) = 0, *q*2*y*(0) = 0, and *q*2*y*(1) = 1. Therefore, selecting *p*(*x*) = 1, *q*1(*y*) = *y* − *y*<sup>2</sup>/2, and *q*2(*y*) = *y*<sup>2</sup>/2, the associated ToC surface is,

$$f(x,y) = \begin{Bmatrix} 1 \\ 1 \end{Bmatrix}^{\mathrm{T}} \begin{bmatrix} g(x,y) & c\_y(x,0) - g\_y(x,0) & c\_y(x,1) - g\_y(x,1) \\ c(0,y) - g(0,y) & g\_y(0,0) - c\_y(0,0) & g\_y(0,1) - c\_y(0,1) \end{bmatrix} \begin{Bmatrix} 1 \\ y - \frac{y^{2}}{2} \\ \frac{y^{2}}{2} \end{Bmatrix}.$$

#### *6.5. Generic Mixed Constraints*

Consider the case of mixed constraints,

$$\begin{array}{ll} f(x, y\_1) = c(x, y\_1) & f\_y(x\_1, y) = c\_y(x\_1, y) \\ f\_x(x, y\_2) = c\_x(x, y\_2) & f\_y(x\_2, y) = c\_y(x\_2, y) \\ f(x, y\_3) = c(x, y\_3) & f(x\_3, y) = c(x\_3, y) \end{array} \tag{19}$$

In this case, the surface satisfying the boundary constraints is built using the matrix,

$$
\mathcal{M}(c(x,y)) = \begin{bmatrix}
0 & c(x,y\_1) & c\_x(x,y\_2) & c(x,y\_3) \\
c\_y(x\_1,y) & -c\_y(x\_1,y\_1) & -c\_{xy}(x\_1,y\_2) & -c\_y(x\_1,y\_3) \\
c\_y(x\_2,y) & -c\_y(x\_2,y\_1) & -c\_{xy}(x\_2,y\_2) & -c\_y(x\_2,y\_3) \\
c(x\_3,y) & -c(x\_3,y\_1) & -c\_x(x\_3,y\_2) & -c(x\_3,y\_3)
\end{bmatrix}
$$

and all surfaces subject to the constraints defined in Equation (19) can be obtained by,

$$f(x, y) = \mathbf{v}^{\mathrm{T}}(x)\, \mathcal{M}(c(x, y))\, \mathbf{v}(y) + g(x, y) - \mathbf{v}^{\mathrm{T}}(x)\, \mathcal{M}(g(x, y))\, \mathbf{v}(y),$$

where

$$\mathbf{v}(x) = \begin{Bmatrix} 1 \\ p\_1(x, x\_1, x\_2, x\_3) \\ p\_2(x, x\_1, x\_2, x\_3) \\ p\_3(x, x\_1, x\_2, x\_3) \end{Bmatrix} \qquad \text{and} \qquad \mathbf{v}(y) = \begin{Bmatrix} 1 \\ q\_1(y, y\_1, y\_2, y\_3) \\ q\_2(y, y\_1, y\_2, y\_3) \\ q\_3(y, y\_1, y\_2, y\_3) \end{Bmatrix}$$

are the (not unique) function vectors *v*(*x*) and *v*(*y*), whose expressions can be found by satisfying the constraints (as done in the previous four subsections) using a methodology similar to that given in Section 5.

#### **7. Extension to** *n***-Dimensional Spaces and Arbitrary-Order Derivative Constraints**

This section provides the *Multivariate Theory of Connections*, the generalization to *n*-dimensional rectangular domains, with arbitrary-order boundary derivative constraints, of what has been presented above for two-dimensional spaces. Using tensor notation, this generalization is represented in the following compact form,

$$F(\mathbf{x}) = \underbrace{\mathcal{M}(c(\mathbf{x}))\_{i\_1 i\_2 \dots i\_n}\, v\_{i\_1} v\_{i\_2} \dots v\_{i\_n}}\_{A(\mathbf{x})} + \underbrace{g(\mathbf{x}) - \mathcal{M}(g(\mathbf{x}))\_{i\_1 i\_2 \dots i\_n}\, v\_{i\_1} v\_{i\_2} \dots v\_{i\_n}}\_{B(\mathbf{x})}\tag{20}$$

where *n* is the number of orthogonal coordinates defined by the vector *x* = {*x*1, *x*2, ..., *xn*}, *vik*(*xk*) is the *ik*th element of a vector function of the variable *xk*, M is an *n*-dimensional tensor that is a function of the boundary constraints defined in *c*(*x*), and *g*(*x*) is the free function.

In Equation (20), the term *A*(*x*) represents any function satisfying the boundary constraints defined by *c*(*x*), and the term *B*(*x*) represents all possible functions that are zero at the boundary constraints. The subsections that follow explain how to construct the M tensor and the *vik* vectors for assigned boundary constraints, and provide a proof that the tensor formulation of ToC defined by Equation (20) satisfies all boundary constraints defined by *c*(*x*), independently of the choice of the free function, *g*(*x*).

Consider a generic boundary constraint on the *xk* = *p* hyperplane, where *k* ∈ [1, *n*]. This constraint specifies the *d*th derivative of the constraint function *c*(*x*) evaluated at *xk* = *p*, and it is indicated by,

$${}^{k}c\_{p}^{d} := \left.\frac{\partial^{d} c(\mathbf{x})}{\partial x\_{k}^{d}}\right|\_{x\_{k}=p}.$$

Consider a set of ℓ*k* constraints defined on various *xk* hyperplanes. This set of constraints is indicated by ${}^{k}c\_{p^{k}}^{d^{k}}$, where *d<sup>k</sup>* and *p<sup>k</sup>* are vectors of ℓ*k* elements indicating the orders of the derivatives and the values of *xk* where the boundary constraints are defined, respectively. A specific boundary constraint, e.g., the *m*th boundary constraint, can then be written as ${}^{k}c\_{p\_{m}^{k}}^{d\_{m}^{k}}$.

Additionally, let us define an operator, called the boundary constraint operator, whose purpose is to take the *d*th derivative with respect to coordinate *xk* and then evaluate that function at *xk* = *p*. Equation (21) shows the idea.

$${}^{k}b\_{p}^{d}[f] \equiv \frac{\partial^{d} f}{\partial x\_{k}^{d}}\bigg|\_{\left(x\_{1},\dots,x\_{k-1},\,p,\,x\_{k+1},\dots,x\_{n}\right)}\tag{21}$$

In general, for a function of *n* variables, the boundary constraint operator identifies an (*n* − 1)-dimensional manifold. Because the boundary constraint operator is used throughout this proof, it is important to note its properties when acting on sums and products of functions. Equation (22) shows how the boundary constraint operator acts on sums, and Equation (23) shows how it acts on products.

$${}^{k}b\_{p}^{d}[f\_{1}+f\_{2}] = {}^{k}b\_{p}^{d}[f\_{1}] + {}^{k}b\_{p}^{d}[f\_{2}] \tag{22}$$

$${}^{k}b\_{p}^{d}[f\_{1}f\_{2}] = \begin{cases} {}^{k}b\_{p}^{d}[f\_{1}] {}^{k}b\_{p}^{d}[f\_{2}], & d=0\\ {}^{k}b\_{p}^{d}[f\_{1}]f\_{2} + f\_{1} {}^{k}b\_{p}^{d}[f\_{2}], & d>0 \end{cases} \tag{23}$$

This section shows how to build the M tensor and the vectors *v* given the boundary constraints defined by the boundary constraint operators. Moreover, this section contains a proof that, in Equation (20), the function *A*(*x*) satisfies the boundary constraints defined by *c*(*x*) and, by extension, that the function *B*(*x*) projects the free function *g*(*x*) onto the sub-space of functions that are zero at the boundary constraints. It then follows that the expression for the ToC surface given in Equation (20) represents *all* possible functions that meet the boundary constraints defined by the boundary constraint operators.

#### *7.1. The* M *Tensor*

The M tensor is constructed using the following step-by-step method.

1. The first element of the tensor is zero, that is, M1,1,...,1 = 0.
2. The first-order elements, those with a single index *ik* different from 1, collect the boundary constraints specified along the *xk*-axis,


$$\mathcal{M}\_{1,\dots,1,i\_k,1,\dots,1} = {}^{k}c\_{p\_{i\_k-1}^{k}}^{d\_{i\_k-1}^{k}} \qquad \text{where} \quad i\_k \in [2, \ell\_k + 1],$$

where the vector ${}^{k}c\_{p^{k}}^{d^{k}}$ contains the ℓ*k* boundary constraints specified along the *xk*-axis. For example, consider the following ℓ7 = 3 constraints on the *k* = 7th axis,

$${}^{7}c\_{p^{7}}^{d^{7}} := \left\{ c\big|\_{x\_{7}=-0.3}, \quad \frac{\partial^{4} c}{\partial x\_{7}^{4}}\bigg|\_{x\_{7}=0.5}, \quad \frac{\partial c}{\partial x\_{7}}\bigg|\_{x\_{7}=1.1} \right\} \qquad \text{then}: \quad \begin{cases} d^{7} = \{0, 4, 1\} \\ p^{7} = \{-0.3, 0.5, 1.1\}. \end{cases}$$

3. The generic element of the tensor is M*i*1*i*2...*in*, where at least two indices are different from 1. Let *m* be the number of indices different from 1. Note that *m* is also the number of constraint "intersections". In this case, the generic element of the M tensor is provided by,

$$\mathcal{M}\_{i\_1 i\_2 \dots i\_n} = {}^{1}b\_{p\_{i\_1-1}^{1}}^{d\_{i\_1-1}^{1}} \left[ {}^{2}b\_{p\_{i\_2-1}^{2}}^{d\_{i\_2-1}^{2}} \left[ \dots \left[ {}^{n}b\_{p\_{i\_n-1}^{n}}^{d\_{i\_n-1}^{n}} [c(\mathbf{x})] \right] \dots \right] \right] (-1)^{m+1}. \tag{24}$$

If *c*(*x*) ∈ *C<sup>s</sup>*, where $s = \sum\_{k=1}^{n} d\_{i\_k-1}^{k}$, then Clairaut's theorem states that the sequence of boundary constraint operators provided in Equation (24) can be freely permuted. This permutation becomes obvious by multiple applications of the theorem. For example,

$$f\_{xyy} = (f\_{xy})\_y = (f\_{yx})\_y = (f\_y)\_{xy} = (f\_y)\_{yx} = f\_{yyx}.$$

To better clarify how to use Equation (24), consider the example of the following constraints in three-dimensional space.

$$c(\mathbf{x})|\_{x\_1=0}, \quad c(\mathbf{x})|\_{x\_1=1}, \quad c(\mathbf{x})|\_{x\_2=0}, \quad \frac{\partial c(\mathbf{x})}{\partial x\_2}\bigg|\_{x\_2=0}, \quad c(\mathbf{x})|\_{x\_3=0}, \quad \text{and} \quad \frac{\partial c(\mathbf{x})}{\partial x\_3}\bigg|\_{x\_3=0}$$


From Steps 1 and 2, the first-order elements of the tensor are,

$$\begin{aligned} \mathcal{M}\_{i\_1 11} &= \left\{ 0, \quad c(0, x\_2, x\_3), \quad c(1, x\_2, x\_3) \right\} = \left\{ 0, \quad {}^{1}b\_0^0[c(\mathbf{x})], \quad {}^{1}b\_1^0[c(\mathbf{x})] \right\} \\ \mathcal{M}\_{1 i\_2 1} &= \left\{ 0, \quad c(x\_1, 0, x\_3), \quad \frac{\partial c}{\partial x\_2}(x\_1, 0, x\_3) \right\} = \left\{ 0, \quad {}^{2}b\_0^0[c(\mathbf{x})], \quad {}^{2}b\_0^1[c(\mathbf{x})] \right\} \\ \mathcal{M}\_{1 1 i\_3} &= \left\{ 0, \quad c(x\_1, x\_2, 0), \quad \frac{\partial c}{\partial x\_3}(x\_1, x\_2, 0) \right\} = \left\{ 0, \quad {}^{3}b\_0^0[c(\mathbf{x})], \quad {}^{3}b\_0^1[c(\mathbf{x})] \right\} \end{aligned}$$

From Step 3, a single example is provided,

$$\mathcal{M}\_{323} = {}^{1}b\_1^0 \left[ {}^{2}b\_0^0 \left[ {}^{3}c\_0^1(\mathbf{x}) \right] \right] (-1)^4 = \frac{\partial c(\mathbf{x})}{\partial x\_3} \bigg|\_{\substack{x\_1=1\\ x\_2=0\\ x\_3=0}}$$

which, thanks to Clairaut's theorem, can also be written as,

$$\mathcal{M}\_{323} = \,^2b\_0^0 \left[ \,^3b\_0^1 \left[ \,^1c\_1^0 \right] \right] (-1)^4 = \,^3b\_0^1 \left[ \,^1b\_1^0 \left[ \,^2c\_0^0 \right] \right] (-1)^4 .$$

Three additional examples are given to help further illustrate the procedure,

$$\mathcal{M}\_{132} = -\left. \frac{\partial c(\mathbf{x})}{\partial x\_2} \right|\_{\substack{x\_2 = 0\\ x\_3 = 0}}, \qquad \mathcal{M}\_{221} = -c(0, 0, x\_3), \qquad \text{and} \qquad \mathcal{M}\_{333} = \left. \frac{\partial^2 c(\mathbf{x})}{\partial x\_2\, \partial x\_3} \right|\_{\substack{x\_1 = 1\\ x\_2 = 0\\ x\_3 = 0}}$$
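The procedure of Steps 1–3 can be automated. The sketch below (our own implementation, with an assumed test function *c*) encodes the constraint data *d<sup>k</sup>*, *p<sup>k</sup>* of the three-dimensional example and reproduces the worked elements above.

```python
# A sketch automating Eq. (24) for the 3-D example: builds any element
# M_{i1 i2 i3} from the constraint data. The test function c is an assumed
# illustrative choice with nonvanishing derivatives at the boundary points.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
c = sp.exp(x1 + 2*x2 + 3*x3)            # assumed constraint function

d = {1: [0, 0], 2: [0, 1], 3: [0, 1]}   # derivative orders d^k per axis
p = {1: [0, 1], 2: [0, 0], 3: [0, 0]}   # locations p^k per axis

def b(f, k, dd, pp):
    """Boundary constraint operator {}^k b_p^d of Eq. (21)."""
    return sp.diff(f, X[k-1], dd).subs(X[k-1], pp)

def M(i1, i2, i3):
    """Tensor element of Eq. (24); indices equal to 1 are skipped."""
    idx = (i1, i2, i3)
    m = sum(1 for i in idx if i != 1)   # number of indices different from 1
    if m == 0:
        return sp.Integer(0)            # Step 1: M_{111} = 0
    out = c
    for k, ik in enumerate(idx, start=1):
        if ik != 1:
            out = b(out, k, d[k][ik - 2], p[k][ik - 2])
    return (-1)**(m + 1) * out

# Reproduce the worked examples (order of operators is free by Clairaut):
assert sp.simplify(M(3, 2, 3) - sp.diff(c, x3).subs({x1: 1, x2: 0, x3: 0})) == 0
assert sp.simplify(M(1, 3, 2) + sp.diff(c, x2).subs({x2: 0, x3: 0})) == 0
assert sp.simplify(M(2, 2, 1) + c.subs({x1: 0, x2: 0})) == 0
assert sp.simplify(M(3, 3, 3) - sp.diff(c, x2, x3).subs({x1: 1, x2: 0, x3: 0})) == 0
print("M_323, M_132, M_221, M_333 reproduced")
```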

#### *7.2. The* **v** *Vectors*

Each vector, *vk*, is associated with the ℓ*k* constraints that are specified by ${}^{k}c\_{p^{k}}^{d^{k}}$. The *vk* vector is built as follows,

$$
\mathbf{v}\_k = \left\{ 1, \quad \sum\_{i=1}^{\ell\_k} \alpha\_{i1} \, h\_i(x\_k), \quad \sum\_{i=1}^{\ell\_k} \alpha\_{i2} \, h\_i(x\_k), \quad \dots, \quad \sum\_{i=1}^{\ell\_k} \alpha\_{i\ell\_k} \, h\_i(x\_k) \right\},
$$

where *hi*(*xk*) are ℓ*k* linearly independent functions. The simplest set of linearly independent functions are monomials, that is, $h\_i(x\_k) = x\_k^{i-1}$. The ℓ*k* × ℓ*k* coefficients, *αij*, can be computed by matrix inversion,

$$
\begin{bmatrix}
{}^{k}b\_{p\_1}^{d\_1}[h\_1] & {}^{k}b\_{p\_1}^{d\_1}[h\_2] & \dots & {}^{k}b\_{p\_1}^{d\_1}[h\_{\ell\_k}] \\
{}^{k}b\_{p\_2}^{d\_2}[h\_1] & {}^{k}b\_{p\_2}^{d\_2}[h\_2] & \dots & {}^{k}b\_{p\_2}^{d\_2}[h\_{\ell\_k}] \\
\vdots & \vdots & \ddots & \vdots \\
{}^{k}b\_{p\_{\ell\_k}}^{d\_{\ell\_k}}[h\_1] & {}^{k}b\_{p\_{\ell\_k}}^{d\_{\ell\_k}}[h\_2] & \dots & {}^{k}b\_{p\_{\ell\_k}}^{d\_{\ell\_k}}[h\_{\ell\_k}]
\end{bmatrix}
\begin{bmatrix}
\alpha\_{11} & \alpha\_{12} & \dots & \alpha\_{1\ell\_k} \\
\alpha\_{21} & \alpha\_{22} & \dots & \alpha\_{2\ell\_k} \\
\vdots & \vdots & \ddots & \vdots \\
\alpha\_{\ell\_k 1} & \alpha\_{\ell\_k 2} & \dots & \alpha\_{\ell\_k \ell\_k}
\end{bmatrix} = \begin{bmatrix}
1 & 0 & \dots & 0 \\
0 & 1 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & 1
\end{bmatrix}.\tag{25}
$$

To supplement the above explanation, let us look at the example of Dirichlet boundary conditions on *x*1 from the example in Section 7.1. There are two boundary conditions, *c*(*x*)|*x*1=0 and *c*(*x*)|*x*1=1, and thus two linearly independent functions are needed,

$$
\mathbf{v}\_{i\_1} = \left\{ 1, \quad \alpha\_{11} h\_1(x\_1) + \alpha\_{21} h\_2(x\_1), \quad \alpha\_{12} h\_1(x\_1) + \alpha\_{22} h\_2(x\_1) \right\}.
$$

Let us consider *h*1(*x*1) = 1 and *h*2(*x*1) = *x*1. Then, following Equation (25),

$$
\begin{bmatrix} {}^{1}b\_0^0[1] & {}^{1}b\_0^0[x\_1] \\ {}^{1}b\_1^0[1] & {}^{1}b\_1^0[x\_1] \end{bmatrix} \begin{bmatrix} \alpha\_{11} & \alpha\_{12} \\ \alpha\_{21} & \alpha\_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \alpha\_{11} & \alpha\_{12} \\ \alpha\_{21} & \alpha\_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \rightarrow \quad \begin{bmatrix} \alpha\_{11} & \alpha\_{12} \\ \alpha\_{21} & \alpha\_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix},
$$

and substituting the values of *αij*, we obtain *vi*1 = {1, 1 − *x*1, *x*1}.
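The steps above can be reproduced mechanically. The sketch below (variable names are ours) builds the matrix of Equation (25) for this Dirichlet example, inverts it, and assembles the same *v* vector.

```python
# Sketch of Eq. (25) for the Dirichlet example: h1 = 1, h2 = x1, with
# constraints c|_{x1=0} and c|_{x1=1}, i.e., d = {0, 0} and p = {0, 1}.
import sympy as sp

x1 = sp.symbols('x1')
h = [sp.Integer(1), x1]   # linearly independent support functions
d = [0, 0]                # derivative orders of the two constraints
p = [0, 1]                # locations of the two constraints

# Matrix of {}^1 b_{p_i}^{d_i}[h_j] from Eq. (25)
B = sp.Matrix(2, 2, lambda i, j: sp.diff(h[j], x1, d[i]).subs(x1, p[i]))
alpha = B.inv()           # coefficients alpha_ij

# Assemble v_{i1} = {1, sum_i alpha_i1 h_i, sum_i alpha_i2 h_i}
v = [sp.Integer(1)] + [sp.expand(sum(alpha[i, j] * h[i] for i in range(2)))
                       for j in range(2)]
assert v == [1, 1 - x1, x1]
print(v)
```

Changing `d` and `p` (and adding support functions) extends the same three lines of linear algebra to any of the mixed-constraint cases of Appendix A.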
