**1. Introduction**

The Hilbert transform in its original form is an integral transform given by

$$H(f, t) = \int\_{-\infty}^{\infty} \frac{f(x)}{x - t} \, dx, \qquad \qquad t \in \mathbb{R}. \tag{1}$$

Alongside this form there are several variants defining Hilbert transforms on a finite interval, on the torus, or on discrete groups. The objects of our study are the finite Hilbert transform and its derivative, namely the Hadamard transform, both defined on the (standard) finite interval [−1, 1]. They are given by

$$\mathcal{H}(f,t) = \oint\_{-1}^{1} \frac{f(x)}{x - t}\, dx, \quad \mathcal{H}^1(f,t) = \oint\_{-1}^{1} \frac{f(x)}{(x - t)^2}\, dx, \qquad -1 < t < 1,\tag{2}$$

where the single and double bar-integral signs indicate that the integrals involved are to be understood as a Cauchy principal value integral and a Hadamard finite-part integral, respectively.

The interest in integrals of this kind stems from their wide use in formulating boundary-value problems in many areas of mathematical physics (potential theory, fracture mechanics, aerodynamics, elasticity, etc.) as singular integral equations on [−1, 1] involving such integrals (see e.g., [1–5] and the references therein).

In fact, the Hilbert transform in its form (1) and all its relatives appear in various fields of mathematical analysis, signal processing, physics, and other areas of science, among them partial differential equations, optics (X-ray crystallography, electron–atom scattering), electrodynamics and quantum mechanics (Kramers–Kronig relations), and signal processing (phase retrieval, transfer functions of linear systems, spectral factorization). We will not go into details here but instead refer to the comprehensive two-volume treatise by F. King [6] on the many aspects of the Hilbert transform and its variants. Due to its outstanding relevance, it is of great importance to possess procedures which allow the Hilbert transform to be computed numerically with a high degree of accuracy. This problem has been studied by many authors for all the different variants of the Hilbert transform and under different assumptions; we limit the citations here to a few interesting papers [7–9].

Our focus in the present paper lies on the numerical computation of the finite Hilbert and Hadamard transforms (2). There is an extensive literature on numerical methods for these transforms; we only mention [5,10] and the references therein. Many of these methods achieve a high degree of approximation, especially as the smoothness of $f$ increases (see e.g., [11–15]). Since they are based on Gaussian quadrature rules and their modified versions, or on product rules, they require the values of $f$ at the zeros of Jacobi polynomials, which often are not available. For example, in many applications the measurements of $f$ are produced by devices which sample the function at equidistant knots. Other procedures, frequently used in applications, take this into account and employ composite quadrature rules on equally spaced points. However, rules of this type suffer from a low degree of approximation or show saturation phenomena. Hence there is a need for a new approach combining the advantages of both kinds of methods.

To move towards this goal, we propose some quadrature rules obtained by means of the sequence $\{B_{m,s}f\}_m$ of the so-called *generalized Bernstein polynomials*, defined as iterated Boolean sums of the classical Bernstein polynomials $\{B_m f\}_m$ ([16–18]). These formulas are based on equally spaced knots in the interval $[-1,1]$, and their convergence order increases with the smoothness of the function, in contrast to various popular rules based on piecewise polynomial approximation. Moreover, there is numerical evidence that, for fixed $m$, the speed of convergence of the formula increases with the parameter $s$ (see [19], Remark 4.1).

Concerning the numerical computation of the Hilbert transform $\mathcal{H}(f)$, we revisit the method introduced in [20] from both the theoretical and the computational point of view. Indeed, following a more recent result obtained in [21], we estimate the quadrature error in terms of the more refined $k$th $\varphi$-modulus of Ditzian and Totik instead of the ordinary modulus of smoothness. As a consequence, we get error estimates in Sobolev and Hölder–Zygmund spaces, and we are able to state the maximum rate of convergence for functions in such spaces. The second improvement over the method in [20] concerns the computation of the quadrature weights, which is performed in a more stable way: it is based on a recurrence relation which does not require a transformation to the canonical basis $\{1, x, \dots, x^m\}$, but works directly with the fundamental Bernstein polynomials $\{p_{m,k}(x)\}_{k=0}^m$.

As regards the Hadamard transform $\mathcal{H}^1(f,t)$, before introducing the numerical procedure for its computation, we prove that $\mathcal{H}^1(f,t)$ presents algebraic singularities at the endpoints of the interval when the derivative $f'$ of the density function satisfies a Dini-type condition. Subsequently, we introduce a quadrature rule for approximating $\mathcal{H}^1(f,t)$, again based on the polynomials $B_{m,s}f$ and useful also for the simultaneous approximation of $\mathcal{H}(f,t)$ and $\mathcal{H}^1(f,t)$, since the samples of $f$ at the equidistant nodes employed in the computation of $\mathcal{H}(f,t)$ are reused to approximate $\mathcal{H}^1(f,t)$ too. The convergence of this quadrature rule is proved by using the simultaneous approximation property of the generalized Bernstein polynomials and, similarly to the Hilbert transform case, we state weighted pointwise estimates for the quadrature error.

It turns out that the additional integer parameter $s$ introduced by $B_{m,s}f$ can be suitably chosen to accelerate the convergence of the quadrature rules for both transforms $\mathcal{H}$ and $\mathcal{H}^1$. Moreover, the coefficients of both quadrature rules are given in a simple, compact vector form and can be efficiently computed by recurrence.

The outline of the paper is as follows. Section 2 contains some notation and preliminary results concerning the generalized Bernstein polynomials and the Hilbert and Hadamard transforms. The quadrature rules with the corresponding pointwise error estimates can be found in Section 3 where details about the recurrence relations for their coefficients are also given. Section 4 contains additional numerical details for computing the quadrature weights and some numerical tests which show the performance of our procedures and confirm the theoretical estimates. Finally, in Section 5 the proofs are given.

#### **2. Notation and Preliminary Results**

In the sequel $\mathcal{C}$ will denote a generic positive constant which may differ at different occurrences, and we will write $\mathcal{C} \ne \mathcal{C}(a, b, \dots)$ to indicate that $\mathcal{C}$ is independent of $a, b, \dots$. Moreover, if $A, B > 0$ depend on some parameters, the notation $A \sim B$ means that there are fixed constants $\mathcal{C}_1, \mathcal{C}_2 > 0$ (independent of the parameters in $A, B$) such that $\mathcal{C}_1 A \le B \le \mathcal{C}_2 A$. For $m \in \mathbb{N}$, we set $N_0^m = \{0, 1, 2, \dots, m\}$ and denote by $\mathbb{P}_m$ the space of all algebraic polynomials of degree at most $m$. $C^m$ will denote the space of functions with $m$ continuous derivatives on $[-1,1]$, and $C^0$ is the space of continuous functions on $[-1,1]$ equipped with the uniform norm $\|f\| := \max_{x \in [-1,1]} |f(x)|$. In $C^0$, setting $\varphi(x) := \sqrt{1-x^2}$, the $\varphi$-modulus of smoothness of Ditzian and Totik ([22], Theorem 2.1.2) is defined by

$$\omega\_{\varphi}^r(f, t) = \sup\_{0 < h \le t} \|\Delta\_{h\varphi}^r f\|, \qquad r \in \mathbb{N},$$

where

$$
\Delta\_{h\varphi(x)}^r f(x) = \sum\_{k=0}^r (-1)^k \binom{r}{k} f\left(x + (r-2k)\frac{h}{2}\varphi(x)\right).
$$

We recall that ([22], Theorem 2.1.1)

$$
\omega\_{\varphi}^{r}(f,t) \sim K\_{r,\varphi}(f,t^{r}) := \inf \left\{ \|f - g\| + t^{r} \|g^{(r)}\varphi^{r}\| \, : \, g^{(r-1)} \in AC \right\}, \tag{3}
$$

where AC denotes the space of the absolutely continuous functions on [−1, 1].

By means of this modulus of continuity, we define the subspace $DT \subset C^0$ of all functions satisfying a Dini-type condition, namely

$$DT = \left\{ f \in C^0 : \quad \int\_0^1 \frac{\omega\_\varphi(f, u)}{u}\, du < \infty \right\},\tag{4}$$

where we set $\omega_\varphi(f,u) = \omega^1_\varphi(f,u)$.

Moreover, the Hölder–Zygmund space of order *λ* > 0 is defined by

$$Z\_{\lambda} = \left\{ f \in C^{0} : \sup\_{t>0} \frac{\omega\_{\varphi}^{r}(f,t)}{t^{\lambda}} < \infty \right\}, \qquad r > \lambda, \quad \lambda > 0,\tag{5}$$

and equipped with the norm

$$\|f\|\_{Z\_{\lambda}} = \|f\| + \sup\_{t>0} \frac{\omega\_{\varphi}^r(f,t)}{t^{\lambda}}, \qquad r > \lambda.$$

The space $Z_\lambda$ is a particular case of the Besov-type spaces studied in [23], where it has been proved that ([23], Theorem 2.1)

$$\|f\|\_{Z\_{\lambda}} \sim \sup\_{n\geq 0} (n+1)^{\lambda} E\_n(f), \qquad E\_n(f) := \inf\_{P\in \mathbb{P}\_n} \|f - P\|.\tag{6}$$

This norm equivalence ensures that the previous definitions are indeed independent of the integer $r > \lambda$ we choose. Moreover, by (6) we get an interesting characterization of the continuous functions $f \in Z_\lambda$ in terms of the rate of convergence to zero of the errors of best uniform polynomial approximation of $f$, which in turn is closely related to the smoothness of $f$ (see e.g., Corollary 8.2.2 and Theorem 6.2.2 of [22]). More precisely, for any continuous function $f$ and any $\lambda > 0$ we have

$$f \in Z\_{\lambda} \iff E\_n(f) = \mathcal{O}(n^{-\lambda}) \iff \omega\_\varphi^r(f, t) = \mathcal{O}(t^{\lambda}), \,\forall r > \lambda. \tag{7}$$

Furthermore, by the previous definitions, for any $r > \lambda > 0$ we get

$$
\omega\_{\varphi}^{r}(f, t) \le \mathcal{C}t^{\lambda} \|f\|\_{Z\_{\lambda}}, \qquad \forall f \in Z\_{\lambda}, \qquad \mathcal{C} \ne \mathcal{C}(f, t). \tag{8}
$$

In the case that $\lambda = k \in \mathbb{N}$, by virtue of (3), the space $Z_\lambda$ is equivalent to the Sobolev space

$$W\_k = \left\{ f^{(k-1)} \in AC : \left\| f^{(k)} \varphi^k \right\| < \infty \right\}, \qquad k \in \mathbb{N},$$

equipped with the norm $\|f\|_{W_k} = \|f\| + \|f^{(k)}\varphi^k\|$, and we recall that ([22], Theorem 4.2.1)

$$
\omega\_{\varphi}^{r}(f, t) = \mathcal{O}(t^{r}) \quad \Longleftrightarrow \quad f \in W\_{r}, \qquad \qquad \forall r \in \mathbb{N}, \tag{9}
$$

$$
\omega\_{\varphi}^r(f, t) = o(t^r) \quad \Longrightarrow \quad f \in \mathbb{P}\_{r-1}, \qquad \forall r \in \mathbb{N}. \tag{10}
$$

Finally, since we are going to use a result of [24] based also on the ordinary moduli of smoothness (cf. Theorem 2), we conclude the subsection by recalling their definition and some properties.

Setting

$$
\Delta\_h f(x) := f\left(x + \frac{h}{2}\right) - f\left(x - \frac{h}{2}\right), \qquad \Delta\_h^r := \Delta\_h(\Delta\_h^{r-1}), \qquad r > 1,
$$

the ordinary *r*-th modulus of smoothness of *f* is defined as

$$\omega^r(f, t) := \sup\_{0 < h \le t} \|\Delta^r\_h f\|, \qquad r \in \mathbb{N}.$$

It is related to the $\varphi$-modulus by

$$
\omega^r\_{\varphi}(f, t) \le \mathcal{C}\omega^r(f, t), \qquad \mathcal{C} \ne \mathcal{C}(f, t).
$$

Moreover, setting

$$\overline{W}\_k := \left\{ f^{(k-1)} \in AC : \left\| f^{(k)} \right\| < \infty \right\}, \qquad k \in \mathbb{N},$$

we have the following analogues of (9) and (10) (see e.g., [22], p. 40)

$$
\omega^r(f, t) = \mathcal{O}(t^r) \quad \Longleftrightarrow \quad f \in \overline{W}\_r \subseteq W\_r, \tag{11}
$$

$$
\omega^r(f, t) = o(t^r) \quad \Longrightarrow \quad f \in \mathbb{P}\_{r-1}, \qquad \qquad \forall r \in \mathbb{N} \tag{12}
$$

#### *2.1. Generalized Bernstein Polynomials in* [−1, 1]

For any $f \in C^0$ the $m$-th Bernstein polynomial $B_m f$ is defined as

$$B\_m f(x) := \sum\_{k=0}^m f(t\_k)\, p\_{m,k}(x), \qquad t\_k := \frac{2k}{m} - 1, \qquad x \in [-1, 1], \tag{13}$$

where

$$p\_{m,k}(x) := \frac{1}{2^m} \binom{m}{k} (1+x)^k (1-x)^{m-k}, \qquad k = 0, 1, \ldots, m,\tag{14}$$

are the so-called fundamental Bernstein polynomials. They satisfy the following recurrence relation

$$p\_{m,k}(x) = \frac{(1-x)}{2}\, p\_{m-1,k}(x) + \frac{(1+x)}{2}\, p\_{m-1,k-1}(x), \quad m = 1,2,\ldots,\tag{15}$$

with $p_{0,0}(x) = 1$ and $p_{m,k}(x) = 0$ if $k < 0$ or $k > m$.

The computation of *Bm f*(*x*) can be efficiently performed by the de Casteljau algorithm (see e.g., [25]).
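As an illustration, the evaluation of $B_m f(x)$ by the de Casteljau algorithm can be sketched as follows (a minimal Python sketch; the function name and interface are ours, not from [25]):

```python
def de_casteljau(f_vals, x):
    """Evaluate B_m f(x) on [-1, 1], given the samples
    f_vals = [f(t_0), ..., f(t_m)], t_k = 2k/m - 1 (Equation (13))."""
    u = (x + 1.0) / 2.0              # map x in [-1, 1] to u in [0, 1]
    b = list(f_vals)
    while len(b) > 1:                # repeated convex combinations
        b = [(1.0 - u) * b[k] + u * b[k + 1] for k in range(len(b) - 1)]
    return b[0]
```

Since $B_m$ reproduces constant and linear functions exactly, feeding the samples of $f(x) = x$ returns $x$ itself up to rounding, which gives a cheap sanity check of the implementation.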

Based on the polynomial $B_m f$, the generalized Bernstein polynomials $B_{m,s}f$ were introduced independently in [16–18]. They are defined as the following Boolean sums

$$B\_{m,s}f = \left[I - (I - B\_m)^{s}\right]f, \qquad s \in \mathbb{N}, \qquad B\_{m,1}f = B\_m f,$$

where $I$ denotes the identity operator.

Please note that $B_{m,s}f \in \mathbb{P}_m$ and that it can be expressed as

$$B\_{m,s}f(x) = \sum\_{j=0}^{m} p\_{m,j}^{(s)}(x)\, f(t\_j), \qquad t\_j := \frac{2j}{m} - 1, \qquad x \in [-1, 1], \tag{16}$$


where

$$p\_{m,j}^{(s)}(x) = \sum\_{i=1}^{s} \binom{s}{i} (-1)^{i-1} B\_m^{i-1} p\_{m,j}(x), \qquad B\_m^{i} = B\_m(B\_m^{i-1}), \quad i = 1, \ldots, s. \tag{17}$$

An estimate of the error $R_{m,s}f := f - B_{m,s}f$ in the uniform norm is given by the following theorem.

**Theorem 1** ([21])**.** *Let $s \in \mathbb{N}$ be fixed. Then for all $m \in \mathbb{N}$ and any $f \in C^0$ we have*

$$\|f - B\_{m,s} f\| \le \mathcal{C} \left\{ \omega\_{\varphi}^{2s} \left( f, \frac{1}{\sqrt{m}} \right) + \frac{\|f\|}{m^{s}} \right\}, \qquad \mathcal{C} \ne \mathcal{C}(m, f).$$

*Moreover, for any* 0 < *λ* ≤ 2*s we obtain*

$$||f - B\_{m,s}f|| = \mathcal{O}(m^{-\frac{\lambda}{2}}), \; m \to \infty \iff \omega\_{\varphi}^{2s}(f, t) = \mathcal{O}(t^{\lambda})$$

*and the o-saturation class is characterized by*

$$||f - B\_{m,s}f|| = o(m^{-s}) \iff f \text{ is a linear function.}$$

**Remark 1.** *Please note that, unlike the basic Bernstein operator $B_m$, the Boolean sums $B_{m,s}$ may accelerate the speed of convergence as the smoothness of $f$ increases. In particular, taking into account (7)–(9), from Theorem 1 we deduce*

$$\|f - B\_{m,s}f\| \le \mathcal{C}\, \frac{\|f\|\_{Z\_{\lambda}}}{m^{\lambda/2}}, \qquad \forall f \in Z\_{\lambda} \text{ with } 0 < \lambda \le 2s, \quad \mathcal{C} \ne \mathcal{C}(m, f). \tag{18}$$

Concerning the simultaneous approximation of the derivatives of $f$ by means of the sequence $\{B_{m,s}f\}_m$, the following estimate holds.

**Theorem 2** ([24], Corollary 1.6)**.** *Let $s \in \mathbb{N}$ be fixed. Then for all $m, k \in \mathbb{N}$ and any $f \in C^k$ we have*

$$\|(f - B\_{m,s}f)^{(k)}\| \le \mathcal{C} \begin{cases} \omega\_{\varphi}^{2s} \left(f', \frac{1}{\sqrt{m}}\right) + \omega^s \left(f', \frac{1}{m}\right) + \omega \left(f', \frac{1}{m^s}\right), & k = 1, \\\\ \omega\_{\varphi}^{2s} \left(f^{(k)}, \frac{1}{\sqrt{m}}\right) + \omega^s \left(f^{(k)}, \frac{1}{m}\right) + \frac{\|f^{(k)}\|}{m^s}, & k \ge 2, \end{cases}$$

*where $\omega := \omega^1$ and $\mathcal{C} \ne \mathcal{C}(m, f)$.*

**Remark 2.** *From Theorem 2 and (9), (11) we deduce the following maximum rate of convergence*

$$\|f^{(k)} - (B\_{m,s}f)^{(k)}\| = \mathcal{O}\left(\frac{1}{m^{s}}\right) \ (m \to \infty) \qquad \forall f \in C^k \cap \overline{W}\_{2s+k}, \quad k \in \mathbb{N}.\tag{19}$$

Finally, we give some details on the computation of $B_{m,s}f$ and its first derivative.

Observe that a more convenient representation of the fundamental polynomials $\{p_{m,i}^{(s)}\}_{i=0}^m$ is given by [26] (see also [27])

$$\mathbf{p}\_{m}^{(s)}(x) = \mathbf{p}\_{m}(x)\, C\_{m,s}, \qquad \forall x \in [-1, 1], \tag{20}$$

where we set

$$\begin{array}{rcl} \mathbf{p}\_{m}^{(s)}(x) &:=& [p\_{m,0}^{(s)}(x), p\_{m,1}^{(s)}(x), \dots, p\_{m,m}^{(s)}(x)],\\ \mathbf{p}\_{m}(x) &:=& [p\_{m,0}(x), \dots, p\_{m,m}(x)], \end{array}$$

and $C_{m,s} \in \mathbb{R}^{(m+1)\times(m+1)}$ is the change-of-basis matrix given by

$$C\_{m,s} = I + (I - A) + \dots + (I - A)^{s-1}, \qquad C\_{m,1} = I,\tag{21}$$

where $I$ denotes the identity matrix and $A$ is the matrix with entries

$$A := (a\_{i,j}), \qquad a\_{i,j} := p\_{m,j}(t\_i), \qquad i, j \in N\_0^m. \tag{22}$$

Let $c_{i,j}^{(m,s)}$ be the $(i,j)$ entry of $C_{m,s}$; then, in view of (20), we get

$$p\_{m,j}^{(s)}(x) = \sum\_{i=0}^{m} p\_{m,i}(x)\, c\_{i,j}^{(m,s)}, \qquad \forall x \in [-1, 1], \tag{23}$$

and consequently

$$B\_{m,s}f(x) = \sum\_{i=0}^{m} \left( \sum\_{j=0}^{m} c\_{i,j}^{(m,s)} f(t\_j) \right) p\_{m,i}(x).\tag{24}$$

In matrix-vector notation this reads as

$$B\_{m,s}f(x) = \mathbf{p}\_m(x)\, C\_{m,s}\, \mathbf{f},\tag{25}$$

with

$$\mathbf{f} := [f(t\_0), f(t\_1), \dots, f(t\_m)]^T.$$
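For concreteness, the construction of $C_{m,s}$ from (21)–(22) and the evaluation (25) can be sketched as follows (an illustrative Python sketch using NumPy; the names `p_row`, `c_matrix`, `bms` are ours, not from [19,26]):

```python
import numpy as np
from math import comb

def p_row(m, x):
    """Row vector p_m(x) = [p_{m,0}(x), ..., p_{m,m}(x)], Equation (14)."""
    return np.array([comb(m, k) * (1 + x)**k * (1 - x)**(m - k) / 2**m
                     for k in range(m + 1)])

def c_matrix(m, s):
    """Change-of-basis matrix C_{m,s} = I + (I-A) + ... + (I-A)^{s-1},
    with A_{i,j} = p_{m,j}(t_i), Equations (21)-(22)."""
    t = np.array([2 * i / m - 1 for i in range(m + 1)])
    A = np.vstack([p_row(m, ti) for ti in t])
    I = np.eye(m + 1)
    C, term = I.copy(), I.copy()
    for _ in range(s - 1):
        term = term @ (I - A)        # accumulate the next power (I - A)^i
        C += term
    return C

def bms(f_vals, m, s, x):
    """B_{m,s} f(x) = p_m(x) C_{m,s} f, Equation (25)."""
    return p_row(m, x) @ c_matrix(m, s) @ np.asarray(f_vals, dtype=float)
```

Increasing $s$ improves the approximation of smooth functions while reusing the same equidistant samples; for instance, for $f(x) = x^2$ the error of $B_{m,2}f$ at interior points is smaller than that of $B_m f$.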

As regards the derivative of the generalized Bernstein polynomials $B_{m,s}f$, we obtain from (25) the following useful representation

$$(B\_{m,s}f)'(x) = \mathbf{p}\_m^1(x)\, C\_{m,s}\, \mathbf{f},\tag{26}$$

where

$$\mathbf{p}^1\_m(x) := [p'\_{m,0}(x), \dots, p'\_{m,m}(x)].$$

Finally, concerning the entries of the vector $\mathbf{p}^1_m(x)$, i.e., the derivatives of the fundamental Bernstein polynomials at $x \in [-1,1]$, starting from the definition (14), easy computations yield the expression


$$p'\_{m,k}(x) = \frac{m}{2} \left( p\_{m-1,k-1}(x) - p\_{m-1,k}(x) \right), \qquad k = 0, \ldots, m,\tag{27}$$

with the usual convention $p_{m,j}(x) = 0$ if $j \notin N_0^m$.
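In code, relation (27) yields the whole vector $\mathbf{p}_m^1(x)$ from one row of fundamental polynomials of degree $m-1$ (a small Python sketch under our own naming):

```python
from math import comb

def p_row(m, x):
    # fundamental Bernstein polynomials p_{m,k}(x), Equation (14)
    return [comb(m, k) * (1 + x)**k * (1 - x)**(m - k) / 2**m
            for k in range(m + 1)]

def dp_row(m, x):
    """Derivatives p'_{m,k}(x), k = 0..m, via Equation (27),
    with p_{m-1,j} = 0 for j outside {0, ..., m-1}."""
    q = p_row(m - 1, x)
    ext = [0.0] + q + [0.0]          # ext[k] = p_{m-1,k-1}(x) for k = 0..m+1
    return [m / 2 * (ext[k] - ext[k + 1]) for k in range(m + 1)]
```

Since the $p_{m,k}$ form a partition of unity, the entries of $\mathbf{p}_m^1(x)$ sum to zero, and since $B_m$ reproduces linear functions, $\sum_k t_k\, p'_{m,k}(x) = 1$; both identities give a cheap sanity check.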

#### *2.2. Hilbert and Hadamard Transforms*

First, we recall that the finite Hilbert transform $\mathcal{H}(f,t)$ is defined by

$$\mathcal{H}(f,t) = \oint\_{-1}^{1} \frac{f(x)}{x - t}\, dx = \lim\_{\varepsilon \to 0^{+}} \left[ \int\_{-1}^{t-\varepsilon} \frac{f(x)}{x - t}\, dx + \int\_{t+\varepsilon}^{1} \frac{f(x)}{x - t}\, dx \right]. \tag{28}$$

The following theorem provides a sufficient condition for the existence of $\mathcal{H}(f,t)$ in $(-1,1)$ when the density function $f$ satisfies a Dini-type condition. It also shows the behavior of $\mathcal{H}(f,t)$ as $t$ approaches the endpoints of the interval $[-1,1]$.

**Theorem 3** ([28], Theorem 2.1)**.** *For any f* ∈ *DT and* |*t*| < 1*, we have*

$$\log^{-1}\left(\frac{e}{1-t^2}\right)|\mathcal{H}(f,t)| \le \mathcal{C}\left(\|f\| + \int\_0^1 \frac{\omega\_\varphi(f,u)}{u}\, du\right), \qquad \mathcal{C} \ne \mathcal{C}(f,t).$$

Consider now $\mathcal{H}^1(f,t)$, the finite part of the divergent integral in the Hadamard sense (see for instance [5,10,14]), defined for $|t| < 1$ as (cf. [5], Equation (1.3))

$$\mathcal{H}^1(f,t) = \oint\_{-1}^{1} \frac{f(x)}{(x-t)^2}\, dx = \lim\_{\varepsilon \to 0^+} \left[ \int\_{-1}^{t-\varepsilon} \frac{f(x)}{(x-t)^2}\, dx + \int\_{t+\varepsilon}^{1} \frac{f(x)}{(x-t)^2}\, dx - \frac{2f(t)}{\varepsilon} \right]. \tag{29}$$

An alternative definition interprets $\mathcal{H}^1(f,t)$ as the first derivative of $\mathcal{H}(f)$ at $t$, i.e.,

$$\mathcal{H}^1(f, t) = \frac{d}{dt} \oint\_{-1}^{1} \frac{f(x)}{x - t}\, dx, \quad |t| < 1,\tag{30}$$

with (30) and (29) being equivalent when $f$ is a Hölder continuous function (see [5]).

By the following theorem, we state that for all functions $f \in C^1$ with $f' \in DT$, the transform $\mathcal{H}^1(f,t)$ is finite for any $|t| < 1$, while it diverges algebraically at the endpoints of the interval $[-1,1]$.

**Theorem 4.** *Let the function $f \in C^1$ be such that $f' \in DT$. Then, for any $-1 < t < 1$, we have*

$$\varphi^2(t)\,|\mathcal{H}^1(f,t)| \le \mathcal{C}\left(\|f\| + \int\_0^1 \frac{\omega\_\varphi(f',\tau)}{\tau}\, d\tau\right), \qquad \mathcal{C} \ne \mathcal{C}(f,t). \tag{31}$$

#### **3. The Quadrature Rules**

#### *3.1. On the Computation of* H(*f* , *t*)

The numerical method for computing $\mathcal{H}(f,t)$ is based on the following proposition.

**Proposition 1.** *For any f* ∈ *DT and for any* |*t*| < 1*, we have*

$$\mathcal{H}(f,t) = \int\_{-1}^{1} \frac{f(x) - f(t)}{x - t}\, dx + f(t) \log \left(\frac{1 - t}{1 + t}\right).\tag{32}$$

In view of (32), the main task is to approximate the function

$$\mathcal{F}(f, t) := \int\_{-1}^{1} \frac{f(x) - f(t)}{x - t}\, dx, \qquad -1 < t < 1. \tag{33}$$

For any given $s \in \mathbb{N}$, by means of the polynomial sequence $\{B_{m,s}f\}_m$, we define the following approximation of $\mathcal{F}(f,t)$

$$\mathcal{F}\_{m,s}(f,t) := \int\_{-1}^{1} \frac{B\_{m,s}f(x) - B\_{m,s}f(t)}{x - t}\, dx \tag{34}$$

and let

$$\Phi\_{m,s}(f,t) := \mathcal{F}(f,t) - \mathcal{F}\_{m,s}(f,t), \qquad -1 < t < 1. \tag{35}$$

Please note that

$$\mathcal{F}\_{m,s}(f,t) = \sum\_{j=0}^{m} f(t\_j) \int\_{-1}^{1} \frac{p\_{m,j}^{(s)}(x) - p\_{m,j}^{(s)}(t)}{x - t}\, dx =: \sum\_{j=0}^{m} f(t\_j)\, D\_{m,j}^{(s)}(t),\tag{36}$$

and taking into account the relation (20) between the bases $\{p_{m,i}(x)\}_{i \in N_0^m}$ and $\{p_{m,i}^{(s)}(x)\}_{i \in N_0^m}$, we have

$$D\_{m,j}^{(s)}(t) = \sum\_{i=0}^{m} c\_{i,j}^{(m,s)} \int\_{-1}^{1} \frac{p\_{m,i}(x) - p\_{m,i}(t)}{x - t}\, dx =: \sum\_{i=0}^{m} c\_{i,j}^{(m,s)}\, q\_{m,i}(t). \tag{37}$$

About the computation of $\{q_{m,i}(t)\}_{i \in N_0^m}$ we can prove the following.

**Proposition 2.** *For the sequence $\{q_{m,i}(t)\}$ the following recurrence relation holds*

$$\begin{aligned} q\_{0,0}(t) &= \quad 0, \quad q\_{1,0}(t) = -1, \quad q\_{1,1}(t) = 1\\ q\_{m,0}(t) &= \quad \frac{(1-t)}{2} q\_{m-1,0}(t) - \frac{1}{m} \\ q\_{m,k}(t) &= \quad \frac{(1-t)}{2} q\_{m-1,k}(t) + \frac{(1+t)}{2} q\_{m-1,k-1}(t), \quad 1 \le k \le m-1\\ q\_{m,m}(t) &= \quad \frac{(1+t)}{2} q\_{m-1,m-1}(t) + \frac{1}{m}.\end{aligned}$$

Setting

$$\mathbf{q}\_{\rm m}(t) = [q\_{\rm m,0}(t), q\_{\rm m,1}(t), \dots, q\_{\rm m,m}(t)],\tag{38}$$

the quadrature rule (34) takes the form

$$\mathcal{F}\_{m,s}(f,t) = \mathbf{q}\_m(t)\, C\_{m,s}\, \mathbf{f}.\tag{39}$$
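The recurrence of Proposition 2 translates directly into code; the following Python sketch (our naming) implements it for the case $s = 1$, where $C_{m,1} = I$ and (39) reduces to a plain dot product:

```python
def q_vector(m, t):
    """Weights q_{m,k}(t), k = 0..m, by the recurrence of Proposition 2."""
    q = [0.0]                                    # q_{0,0}(t) = 0
    for n in range(1, m + 1):
        new = [0.0] * (n + 1)
        new[0] = 0.5 * (1 - t) * q[0] - 1.0 / n
        for k in range(1, n):
            new[k] = 0.5 * (1 - t) * q[k] + 0.5 * (1 + t) * q[k - 1]
        new[n] = 0.5 * (1 + t) * q[n - 1] + 1.0 / n
        q = new
    return q

def F_m1(f_vals, t):
    """F_{m,1}(f, t) = q_m(t) . f  (Equation (39) with s = 1)."""
    q = q_vector(len(f_vals) - 1, t)
    return sum(fj * qj for fj, qj in zip(f_vals, q))
```

Since $B_m$ reproduces constant and linear functions exactly, the rule is exact on them: $\mathcal{F}(1,t) = 0$ and $\mathcal{F}(x,t) = 2$, which the sketch returns up to rounding.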

This formula can be directly applied to approximate $\mathcal{H}(f,t)$ in the form given in (32): supposing $f(t)$ is known, we can approximate

$$\mathcal{H}(f,t) = \mathcal{F}(f,t) + \log\left(\frac{1-t}{1+t}\right)f(t) \approx \mathcal{F}\_{m,s}(f,t) + \log\left(\frac{1-t}{1+t}\right)f(t).$$

In the case that only the samples $f(t_j)$, $j \in N_0^m$, are given, we propose to approximate $\mathcal{H}(f,t)$ by

$$\mathcal{H}\_{m,s}(f,t) := \mathcal{F}\_{m,s}(f,t) + \log\left(\frac{1-t}{1+t}\right) B\_{m,s} f(t). \tag{40}$$

Using matrix-vector notation as in (39) and (25) we arrive at

$$\mathcal{H}\_{m,s}(f,t) = \left[\mathbf{q}\_{m}(t) + \log\left(\frac{1-t}{1+t}\right)\mathbf{p}\_{m}(t)\right] C\_{m,s}\, \mathbf{f}.\tag{41}$$
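Putting the pieces together for $s = 1$ (so that $C_{m,1} = I$), rule (41) can be sketched as follows; the helper names are ours:

```python
import math
from math import comb

def q_vector(m, t):
    # recurrence of Proposition 2
    q = [0.0]
    for n in range(1, m + 1):
        new = [0.0] * (n + 1)
        new[0] = 0.5 * (1 - t) * q[0] - 1.0 / n
        for k in range(1, n):
            new[k] = 0.5 * (1 - t) * q[k] + 0.5 * (1 + t) * q[k - 1]
        new[n] = 0.5 * (1 + t) * q[n - 1] + 1.0 / n
        q = new
    return q

def p_row(m, x):
    # fundamental Bernstein polynomials, Equation (14)
    return [comb(m, k) * (1 + x)**k * (1 - x)**(m - k) / 2**m
            for k in range(m + 1)]

def hilbert_m1(f_vals, t):
    """H_{m,1}(f, t) = [q_m(t) + log((1-t)/(1+t)) p_m(t)] . f
    (Equation (41) with s = 1)."""
    m = len(f_vals) - 1
    L = math.log((1 - t) / (1 + t))
    w = [qk + L * pk for qk, pk in zip(q_vector(m, t), p_row(m, t))]
    return sum(wk * fk for wk, fk in zip(w, f_vals))
```

For $f = 1$ and $f(x) = x$ the transform is known in closed form, $\mathcal{H}(1,t) = \log\frac{1-t}{1+t}$ and $\mathcal{H}(x,t) = 2 + t\log\frac{1-t}{1+t}$, and the rule reproduces both exactly, since $B_m$ preserves linear functions.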

The quadrature error can then be expressed as

$$\begin{aligned} \mathcal{E}\_{m,s}(f,t) &:= \mathcal{H}(f,t) - \mathcal{H}\_{m,s}(f,t) \\ &= \Phi\_{m,s}(f,t) + \log\left(\frac{1-t}{1+t}\right) \left[f(t) - B\_{m,s}f(t)\right]. \end{aligned}$$

About the convergence of the quadrature rules $\mathcal{F}_{m,s}$ and $\mathcal{H}_{m,s}$, the following theorem estimates the associated errors $\Phi_{m,s}$ and $\mathcal{E}_{m,s}$, respectively.

**Theorem 5.** *Let* $-1 < t < 1$*. Then for any $f \in DT$, we have*

$$\log^{-1}\left(\frac{e}{1-t^2}\right)|\mathcal{E}\_{m,s}(f,t)| \le \mathcal{C}\log m\left[\omega\_{\varphi}^{2s}\left(f,\frac{1}{\sqrt{m}}\right) + \frac{\|f\|}{m^s}\right] + \mathcal{C}\int\_0^{\frac{1}{m}}\frac{\omega\_{\varphi}^r(f,u)}{u}\, du,\tag{42}$$

*with $r < m$ and $\mathcal{C} \ne \mathcal{C}(m, f, t)$.*

*The same estimate continues to hold for $\Phi_{m,s}(f,t)$, which also satisfies*

$$|\Phi\_{m,s}(f,t)| \le \mathcal{C}\left[\omega\_{\varphi}^{2s}\left(f', \frac{1}{\sqrt{m}}\right) + \omega^{s}\left(f', \frac{1}{m}\right) + \omega\left(f', \frac{1}{m^s}\right)\right], \qquad \forall f \in C^1, \tag{43}$$

*with $\mathcal{C} \ne \mathcal{C}(m, f, t)$.*

In the case of smoother functions, from the previous estimates and (7), (8), (9) and (11), we easily get

**Corollary 1.** *Let* $-1 < t < 1$*. Then for all $f \in Z_\lambda$, with $0 < \lambda \le 2s$, we have*

$$|\mathcal{E}\_{m,s}(f,t)| \le \mathcal{C} \log\left(\frac{e}{1-t^2}\right) \frac{\|f\|\_{Z\_\lambda}}{m^{\lambda/2}} \log m, \qquad \mathcal{C} \ne \mathcal{C}(m,f,t),$$

*and the same holds for $|\Phi_{m,s}(f,t)|$. Moreover, for all $f \in C^{k+1}$, with $1 \le k \le 2s$, we have*

$$|\Phi\_{m,\mathfrak{s}}(f,t)| \le \frac{\mathcal{C}}{m^{k/2}}, \qquad \mathcal{C} \ne \mathcal{C}(m,t).$$

In conclusion, we remark that in proving Theorem 5 we also established the following relations between the quadrature errors and the approximation errors of the generalized Bernstein polynomials

$$\begin{aligned} |\mathcal{E}\_{m,s}(f,t)| &\leq \mathcal{C}\log\left(\frac{e}{1-t^2}\right) \left[\log m \, \|f - B\_{m,s}f\| + \int\_0^{\frac{1}{m}} \frac{\omega\_\varphi^r(f,u)}{u}\, du\right], \qquad \forall f \in DT, \\ |\Phi\_{m,s}(f,t)| &\leq \mathcal{C}\,\|(f - B\_{m,s}f)'\|, \qquad \forall f \in C^1. \end{aligned}$$

#### *3.2. On the Computation of* $\mathcal{H}^1(f,t)$

We are going to use the following proposition

**Proposition 3.** *For any $f \in C^1$ such that $f' \in DT$ and for all $|t| < 1$, we have*

$$\mathcal{H}^1(f,t) = \int\_{-1}^{1} \frac{f(x) - f(t) - f'(t)(x-t)}{(x-t)^2}\, dx + f'(t) \log\left(\frac{1-t}{1+t}\right) - f(t)\left[\frac{2}{1-t^2}\right].\tag{44}$$

Let

$$\mathcal{F}^1(f,t) := \int\_{-1}^1 \frac{f(x) - f(t) - f'(t)(x-t)}{(x-t)^2}\, dx, \qquad -1 < t < 1.$$

Supposing both $f'(t)$ and $f(t)$ are known, we can compute exactly the non-integral part on the right-hand side of (44). In this case, the numerical computation of $\mathcal{H}^1(f,t)$ can be performed by the following quadrature rule

$$\mathcal{F}^1(f,t) = \mathcal{F}^1\_{m,s}(f,t) + \Phi^1\_{m,s}(f,t), \tag{45}$$

where

$$\mathcal{F}^{1}\_{m,s}(f,t) := \int\_{-1}^{1} \frac{B\_{m,s}f(x) - B\_{m,s}f(t) - (B\_{m,s}f)'(t)(x-t)}{(x-t)^2}\, dx = \frac{d}{dt}\mathcal{F}\_{m,s}(f,t). \tag{46}$$

Using (36), (37) and (46), we get

$$\begin{array}{rcl} \mathcal{F}^{1}\_{m,s}(f,t) &=& \displaystyle\sum\_{j=0}^{m} f(t\_j)\, \frac{d}{dt} D^{(s)}\_{m,j}(t),\\[2ex] \dfrac{d}{dt} D^{(s)}\_{m,j}(t) &=& \displaystyle\sum\_{i=0}^{m} c^{(m,s)}\_{i,j}\, d\_{m,i}(t), \qquad d\_{m,i}(t) := q'\_{m,i}(t), \end{array} \tag{47}$$

where the polynomials $d_{m,i}(t)$, $i = 0, \dots, m$, can be computed recursively, according to

**Proposition 4.** *The sequence $d_{m,i}(t)$, $i = 0, \dots, m$, satisfies the following recurrence relation*

$$\begin{array}{rcl}d\_{1,0}(t)&=&0,&d\_{1,1}(t)=0,\\d\_{m,0}(t)&=&\frac{(1-t)}{2}d\_{m-1,0}(t)-\frac{1}{2}q\_{m-1,0}(t),\\\\d\_{m,k}(t)&=&\frac{(1-t)}{2}d\_{m-1,k}(t)-\frac{1}{2}q\_{m-1,k}(t)+\frac{(1+t)}{2}d\_{m-1,k-1}(t)+\frac{1}{2}q\_{m-1,k-1}(t),\\\\d\_{m,m}(t)&=&\frac{(1+t)}{2}d\_{m-1,m-1}(t)+\frac{1}{2}q\_{m-1,m-1}(t).\\\end{array}$$

The previous recurrence relation can be easily deduced from Proposition 2. Let

$$\mathbf{d}\_{m}(t) = \begin{bmatrix} d\_{m,0}(t), d\_{m,1}(t), \dots, d\_{m,m}(t) \end{bmatrix} , \tag{48}$$

then the quadrature rule (46) takes the following form

$$\mathcal{F}\_{m,s}^{1}(f,t) = \mathbf{d}\_{m}(t)\, C\_{m,s}\, \mathbf{f}.\tag{49}$$
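The two recurrences (Propositions 2 and 4) run naturally in tandem, since each $d$-update consumes $q$-values from the previous level; a Python sketch under our own naming (case $s = 1$):

```python
def qd_vectors(m, t):
    """Return (q_m(t), d_m(t)), with d_{m,k} = q'_{m,k}, by the coupled
    recurrences of Propositions 2 and 4."""
    q, d = [0.0], [0.0]                          # level m = 0
    for n in range(1, m + 1):
        qn, dn = [0.0] * (n + 1), [0.0] * (n + 1)
        qn[0] = 0.5 * (1 - t) * q[0] - 1.0 / n
        dn[0] = 0.5 * (1 - t) * d[0] - 0.5 * q[0]
        for k in range(1, n):
            qn[k] = 0.5 * (1 - t) * q[k] + 0.5 * (1 + t) * q[k - 1]
            dn[k] = (0.5 * (1 - t) * d[k] - 0.5 * q[k]
                     + 0.5 * (1 + t) * d[k - 1] + 0.5 * q[k - 1])
        qn[n] = 0.5 * (1 + t) * q[n - 1] + 1.0 / n
        dn[n] = 0.5 * (1 + t) * d[n - 1] + 0.5 * q[n - 1]
        q, d = qn, dn
    return q, d
```

As a check, $d_{m,k}(t)$ agrees with a central difference of $q_{m,k}$, and for $s = 1$ the rule (49) applied to constants and to $f(x) = x$ returns $0$, since $\mathcal{F}^1(f,t)$ vanishes for linear densities.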

In the case that only the vector $\mathbf{f}$ is known, we also have to approximate the non-integral part in (44), and we propose the following quadrature rule

$$
\mathcal{H}^1(f, t) = \mathcal{H}^1\_{m, s}(f, t) + \mathcal{E}^1\_{m, s}(f, t),
\tag{50}
$$

where $\mathcal{E}^1_{m,s}(f,t)$ denotes the error and

$$\mathcal{H}^1\_{m,s}(f,t) := \mathcal{F}^1\_{m,s}(f,t) + \log\left(\frac{1-t}{1+t}\right) \left(B\_{m,s}f\right)'(t) - \frac{2}{1-t^2} \, B\_{m,s}f(t). \tag{51}$$

By (49), (25) and (26), the rule in vector form is given by

$$\mathcal{H}\_{m,s}^{1}(f,t) = \left[\mathbf{d}\_{m}(t) + \log\left(\frac{1-t}{1+t}\right)\mathbf{p}\_{m}^{1}(t) - \frac{2}{1-t^{2}}\,\mathbf{p}\_{m}(t)\right] C\_{m,s}\, \mathbf{f}.\tag{52}$$

We point out that both rules (49) and (52) are based on the same data vector $\mathbf{f}$ used in the rules (39) and (41). Hence our method allows the simultaneous approximation of the Hilbert transform $\mathcal{H}(f,t)$ and its first derivative $\mathcal{H}^1(f,t)$ for $|t| < 1$ using the same samples of the function $f$.
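The following Python sketch (our naming; case $s = 1$, so $C_{m,1} = I$) assembles both rules (41) and (52) from one sample vector, illustrating the simultaneous approximation:

```python
import math
from math import comb

def p_row(m, x):
    # fundamental Bernstein polynomials p_{m,k}(x), Equation (14)
    return [comb(m, k) * (1 + x)**k * (1 - x)**(m - k) / 2**m
            for k in range(m + 1)]

def dp_row(m, x):
    # derivatives p'_{m,k}(x) via Equation (27)
    ext = [0.0] + p_row(m - 1, x) + [0.0]
    return [m / 2 * (ext[k] - ext[k + 1]) for k in range(m + 1)]

def qd_vectors(m, t):
    # coupled recurrences of Propositions 2 and 4: q_m(t) and d_m(t) = q_m'(t)
    q, d = [0.0], [0.0]
    for n in range(1, m + 1):
        qn, dn = [0.0] * (n + 1), [0.0] * (n + 1)
        qn[0] = 0.5 * (1 - t) * q[0] - 1.0 / n
        dn[0] = 0.5 * (1 - t) * d[0] - 0.5 * q[0]
        for k in range(1, n):
            qn[k] = 0.5 * (1 - t) * q[k] + 0.5 * (1 + t) * q[k - 1]
            dn[k] = (0.5 * (1 - t) * d[k] - 0.5 * q[k]
                     + 0.5 * (1 + t) * d[k - 1] + 0.5 * q[k - 1])
        qn[n] = 0.5 * (1 + t) * q[n - 1] + 1.0 / n
        dn[n] = 0.5 * (1 + t) * d[n - 1] + 0.5 * q[n - 1]
        q, d = qn, dn
    return q, d

def hilbert_pair_m1(f_vals, t):
    """Return (H_{m,1}(f,t), H^1_{m,1}(f,t)) from the single sample
    vector f, Equations (41) and (52) with s = 1."""
    m = len(f_vals) - 1
    L = math.log((1 - t) / (1 + t))
    q, d = qd_vectors(m, t)
    p, p1 = p_row(m, t), dp_row(m, t)
    w0 = [qk + L * pk for qk, pk in zip(q, p)]
    w1 = [dk + L * p1k - 2 / (1 - t * t) * pk
          for dk, p1k, pk in zip(d, p1, p)]
    return (sum(w * f for w, f in zip(w0, f_vals)),
            sum(w * f for w, f in zip(w1, f_vals)))
```

Both rules are exact on linear densities; e.g., $\mathcal{H}^1(1,t) = -2/(1-t^2)$ and $\mathcal{H}^1(x,t) = \log\frac{1-t}{1+t} - \frac{2t}{1-t^2}$ are reproduced up to rounding.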

About the convergence of the quadrature rules $\mathcal{F}^1_{m,s}$ and $\mathcal{H}^1_{m,s}$, the following theorem estimates the associated errors $\Phi^1_{m,s}$ and $\mathcal{E}^1_{m,s}$ by means of the approximation errors committed when $f$ and $f'$ are approximated by generalized Bernstein polynomials.

**Theorem 6.** *Let* $-1 < t < 1$*. Then for any $f \in C^1$ such that $f' \in DT$, we have*

$$(1-t^2)\,|\mathcal{E}\_{m,s}^1(f,t)| \le \mathcal{C}\left[\|f - B\_{m,s}f\| + \log m\,\|(f - B\_{m,s}f)'\| + \int\_0^{\frac{1}{m}} \frac{\omega\_\varphi^r(f',u)}{u}\, du\right] \tag{53}$$

*with $r < m$ and $\mathcal{C} \ne \mathcal{C}(m, f, t)$.*

*The same estimate also applies to $\Phi^1_{m,s}(f,t)$, which for functions in $C^2$ also satisfies*

$$|\Phi^1\_{m,s}(f,t)| \le \mathcal{C}\, \|(f - B\_{m,s}f)''\|, \qquad \forall f \in C^2, \tag{54}$$

*with $\mathcal{C} \ne \mathcal{C}(m, f, t)$.*

Thanks to this theorem, by Theorems 1 and 2 we can easily get estimates of the quadrature errors $\mathcal{E}^1_{m,s}$ and $\Phi^1_{m,s}$ based on several moduli of smoothness of $f$ and $f'$. For brevity we omit the details and only state the following result, which follows by using (9) and (11) in the estimates of Theorems 1 and 2, which in turn are used in Theorem 6.

**Corollary 2.** *Let $-1 < t < 1$ and $s \in \mathbb{N}$. For all functions $f \in C^{k+1}$, with $1 \le k \le 2s$, and for sufficiently large $m \in \mathbb{N}$, we have*

$$(1 - t^2)\,|\mathcal{E}\_{m,s}^1(f, t)| \le \frac{\mathcal{C}}{m^{k/2}} \log m, \qquad \mathcal{C} \ne \mathcal{C}(m, t).$$

*The same estimate holds for $\Phi^1_{m,s}(f,t)$, which also satisfies*

$$|\Phi\_{m,s}^1(f,t)| \le \frac{\mathcal{C}}{m^{k/2}}, \qquad \mathcal{C} \ne \mathcal{C}(m,t), \qquad \forall f \in \mathcal{C}^{k+2}, \ 1 \le k \le 2s.$$

#### **4. Numerical Details and Some Experiments**

First, we recall some details given in [19] about the computation of the matrix $C_{m,s}$ in (21). We start from the matrix $A$ defined in (22). It is constructed row by row by making use of the triangular scheme (15), so that each row requires about $m^2$ long operations. On the other hand, since $A$ is centrosymmetric, i.e., $A = JAJ$, where $J$ is the counter-identity matrix of order $m+1$ ($J_{i,j} = \delta_{i,m-j}$, $i,j \in N_0^m$, $\delta_{h,k}$ being the Kronecker delta), it is enough to compute only the first $(m+1)/2$ or $(m+2)/2$ rows, according as $m$ is odd or even, respectively. Therefore, the construction of $A$ requires about $m^3/2$ long operations. Furthermore, since the product of two centrosymmetric matrices can be performed in about $m^3/4$ long operations [29], the matrix $C_{m,s}$ in (21) can be constructed in about $(s-2)\,m^3/4$ long operations, instead of $(s-2)\,m^3$, i.e., with a saving of about 75%. A more significant reduction is achieved when the parameter $s = 2^p$, $p \in \mathbb{N}$, $p \ge 1$. Indeed, by using ([30], (14))

$$C\_{m,2^{p}} = C\_{m,2^{p-1}} + \left(I - A\right)^{2^{p-1}} C\_{m,2^{p-1}}, \tag{55}$$

the matrix $C_{m,s}$ can be determined by $2(\log_2 s - 1)$ products of centrosymmetric matrices, therefore requiring about $\frac{m^3}{2}(\log_2 s - 1)$ long operations. For instance, for $s = 256$, if we use Equation (21), the $255$ products of centrosymmetric matrices require about $255\, m^3/4 \sim 63.7\, m^3$ long operations. On the contrary, if we use (55), only about $3.5\, m^3$ long operations are needed.
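The agreement between the doubling formula (55) and the plain sum (21) is easy to verify numerically; a NumPy sketch under our own naming:

```python
import numpy as np
from math import comb

def a_matrix(m):
    """A_{i,j} = p_{m,j}(t_i), t_i = 2i/m - 1 (Equation (22)); A is centrosymmetric."""
    t = [2 * i / m - 1 for i in range(m + 1)]
    return np.array([[comb(m, j) * (1 + ti)**j * (1 - ti)**(m - j) / 2**m
                      for j in range(m + 1)] for ti in t])

def c_sum(m, s):
    # C_{m,s} by the plain sum (21): s - 1 matrix products
    I, A = np.eye(m + 1), a_matrix(m)
    C, term = I.copy(), I.copy()
    for _ in range(s - 1):
        term = term @ (I - A)
        C += term
    return C

def c_doubling(m, p):
    # C_{m,2^p} by repeated doubling, Equation (55)
    I, A = np.eye(m + 1), a_matrix(m)
    C, B = I.copy(), I - A            # C_{m,1} = I, B = (I - A)^{2^0}
    for _ in range(p):
        C = C + B @ C                 # C_{m,2^q} = C_{m,2^{q-1}} + (I-A)^{2^{q-1}} C_{m,2^{q-1}}
        B = B @ B                     # square to the next power (I - A)^{2^q}
    return C
```

For $m = 6$ and $s = 2^3 = 8$, both routes produce the same matrix, while for large $s$ the doubling route uses far fewer matrix products.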

Now we propose some numerical tests obtained by approximating $\mathcal{H}(f,t)$ and $\mathcal{H}^1(f,t)$ by means of the quadrature rules $\{\mathcal{F}_{m,s}(f,t)\}_m$ and $\{\mathcal{F}^1_{m,s}(f,t)\}_m$, respectively; namely, for a given $t \in (-1,1)$, we compute

$$\begin{aligned} \mathcal{H}(f,t) &\sim \mathcal{F}\_{m,s}(f,t) + \log\left(\frac{1-t}{1+t}\right)f(t),\\ \mathcal{H}^1(f,t) &\sim \mathcal{F}\_{m,s}^1(f,t) + \log\left(\frac{1-t}{1+t}\right)f'(t) - \frac{2}{1-t^2}f(t). \end{aligned}$$

For any choice of $m$ we consider different values of $s$. In the tables we report the approximate values of the integrals. All computations have been performed in double machine precision ($eps \sim 2.22044 \times 10^{-16}$).
