Article

Quantitative Uniform Approximation by Activated Singular Operators

by George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
Mathematics 2024, 12(14), 2152; https://doi.org/10.3390/math12142152
Submission received: 14 June 2024 / Revised: 3 July 2024 / Accepted: 9 July 2024 / Published: 9 July 2024
(This article belongs to the Special Issue Approximation Theory and Applications)

Abstract
In this article, we study the approximation properties of activated singular integral operators over the real line. We establish their convergence to the unit operator with rates. The kernels here derive from neural network activation functions and their corresponding density functions. The estimates are mostly sharp, and they are pointwise and uniform. The derived inequalities involve the higher order modulus of smoothness.

1. Introduction

The rate of convergence of singular integrals has been studied earlier in [1,2,3,4]. The seminal monograph [5], Ch. 14, was motivated by these works, and it is used here. We consider some activated singular integral operators over $\mathbb{R}$ and study their degree of approximation to the unit operator with rates over smooth functions. We establish related inequalities involving the higher modulus of smoothness with respect to $\| \cdot \|_{\infty}$. The estimates are pointwise and uniform. Most of the time these are optimal, in the sense that the inequalities are attained by basic functions. Our discussed operators are not, in general, positive.
The novelty here is the reverse process, from applied mathematics to pure: our kernels are built from density functions derived from activation functions used in neural network approximation; see [6,7].
For the history of the topic, we mention the monograph [5] of 2012, the first complete source dealing exclusively with the classic theory of the approximation of singular integrals to the identity-unit operator. There, the authors studied quantitatively the basic approximation properties of the general Picard, Gauss–Weierstrass and Poisson–Cauchy singular integral operators over the real line, which are not positive linear operators. In particular, they studied the rate of convergence of these operators to the unit operator, as well as the related simultaneous approximation. This is given via inequalities that use the higher-order modulus of smoothness of the highest-order derivative of the involved function. Some of these inequalities are proven to be sharp. They also studied the global smoothness preservation property of these operators and gave asymptotic expansions of Voronovskaya type for the error of approximation. They continued with the study of related properties of the general fractional Gauss–Weierstrass and Poisson–Cauchy singular integral operators. These properties were studied with respect to the $L_p$ norm, $1 \le p \le \infty$. The case of Lipschitz-type function approximation was studied separately and in detail. Furthermore, they presented the corresponding general approximation theory of general singular integral operators, with many applications to the trigonometric singular integral, which had not received much focus until then. Of great interest, and motivating the author, are the articles [8,9,10,11,12]. Amid the contemporary intense mathematical activity using neural networks to solve differential equations, our current work is expected to play a pivotal role, as the earlier versions of singular integrals did in the classic case.

2. Background

Everything in this section comes from [5], Ch. 14. We study the following smooth general singular integral operators $\Theta_{r,\xi}(f, x)$, defined as follows. Let $\xi > 0$, and let the $\mu_\xi$ be Borel probability measures on $\mathbb{R}$.
For $r \in \mathbb{N}$ and $n \in \mathbb{Z}_+$, we set
$$\alpha_j := \begin{cases} (-1)^{r-j} \binom{r}{j} j^{-n}, & j = 1, \dots, r, \\ 1 - \sum_{j=1}^{r} (-1)^{r-j} \binom{r}{j} j^{-n}, & j = 0; \end{cases} \tag{1}$$
that is, $\sum_{j=0}^{r} \alpha_j = 1$. Let $f : \mathbb{R} \to \mathbb{R}$ be Borel-measurable; for $x \in \mathbb{R}$ we can define the integral
$$\Theta_{r,\xi}(f, x) := \int_{-\infty}^{\infty} \left( \sum_{j=0}^{r} \alpha_j f(x + jt) \right) d\mu_\xi(t). \tag{2}$$
We suppose that $\Theta_{r,\xi}(f; x) \in \mathbb{R}$ for all $x \in \mathbb{R}$. Equivalently,
$$\Theta_{r,\xi}(f, x) = \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} f(x + jt) \, d\mu_\xi(t).$$
We notice that $\Theta_{r,\xi}(c, x) = c$ for any constant $c$, and
$$\Theta_{r,\xi}(f, x) - f(x) = \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} \left( f(x + jt) - f(x) \right) d\mu_\xi(t).$$
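To make the weights concrete, the following is a minimal numerical sketch (assuming Python with NumPy; all names are illustrative) that builds the $\alpha_j$ of (1) and verifies that they sum to 1, which is exactly what makes $\Theta_{r,\xi}$ reproduce constants.

```python
# A minimal numerical sketch (assuming Python 3 with NumPy; names are
# illustrative) of the weights alpha_j in (1). It verifies that
# sum_{j=0}^{r} alpha_j = 1, which is what makes Theta_{r,xi} reproduce
# constants: Theta_{r,xi}(c, x) = c.
import numpy as np
from math import comb

def alphas(r: int, n: int) -> np.ndarray:
    """alpha_0, ..., alpha_r from (1), for r in N and n in Z_+."""
    a = np.zeros(r + 1)
    for j in range(1, r + 1):
        a[j] = (-1) ** (r - j) * comb(r, j) * float(j) ** (-n)
    a[0] = 1.0 - a[1:].sum()   # the definition of alpha_0 forces the sum to be 1
    return a

for r, n in [(1, 0), (2, 1), (3, 2), (4, 3)]:
    a = alphas(r, n)
    print(r, n, a, a.sum())    # the last column always prints 1.0
```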
Let $f \in C^n(\mathbb{R})$, $n \in \mathbb{Z}_+$, with the $r$th modulus of smoothness of $f^{(n)}$ being finite, i.e.,
$$\omega_r\left( f^{(n)}, h \right) := \sup_{|t| \le h} \left\| \Delta_t^r f^{(n)}(x) \right\|_{\infty, x} < \infty, \quad h > 0,$$
where
$$\Delta_t^r f^{(n)}(x) := \sum_{j=0}^{r} (-1)^{r-j} \binom{r}{j} f^{(n)}(x + jt);$$
see [13], p. 44.
We need to introduce
$$\delta_k := \sum_{j=1}^{r} \alpha_j \, j^k, \quad k = 1, \dots, n \in \mathbb{N},$$
and the even function
$$\tilde{G}_n(t) := \int_0^{|t|} \frac{(|t| - w)^{n-1}}{(n-1)!} \, \omega_r\left( f^{(n)}, w \right) dw, \quad n \in \mathbb{N},$$
with
$$\tilde{G}_0(t) := \omega_r(f, |t|), \quad t \in \mathbb{R}.$$
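For intuition, the modulus of smoothness can be estimated numerically. The sketch below (assuming NumPy; the test function, grids and parameters are illustrative, and a grid supremum only approximates the true one) evaluates $\Delta_t^r f(x)$ and $\omega_r(f, h)$.

```python
# A grid-based numerical estimate (a sketch, not the exact supremum) of the
# r-th modulus of smoothness, assuming NumPy. f, r, h and the grids are
# illustrative choices.
import numpy as np
from math import comb

def delta_t_r(f, x, t, r):
    """Delta_t^r f(x) = sum_{j=0}^{r} (-1)^(r-j) * C(r, j) * f(x + j*t)."""
    return sum((-1) ** (r - j) * comb(r, j) * f(x + j * t) for j in range(r + 1))

def omega_r(f, r, h, xs):
    """Approximates omega_r(f, h) = sup_{|t| <= h} sup_x |Delta_t^r f(x)|."""
    ts = np.linspace(-h, h, 41)
    return max(abs(delta_t_r(f, x, t, r)) for t in ts for x in xs)

xs = np.linspace(-10.0, 10.0, 801)
for h in (0.5, 0.25, 0.125):
    print(h, omega_r(np.sin, 2, h, xs))   # roughly h**2 for f = sin, r = 2
```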
We mention the following result.
Theorem 1.
The integrals $c_{k,\xi} := \int_{-\infty}^{\infty} t^k \, d\mu_\xi(t)$, $k = 1, \dots, n$, are assumed to be finite. Then,
$$\left| \Theta_{r,\xi}(f; x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k c_{k,\xi} \right| \le \int_{-\infty}^{\infty} \tilde{G}_n(t) \, d\mu_\xi(t). \tag{10}$$
Corollary 1.
Assume $\omega_r(f, \xi) < \infty$, $\xi > 0$. Then, it holds for $n = 0$ that
$$\left| \Theta_{r,\xi}(f; x) - f(x) \right| \le \int_{-\infty}^{\infty} \omega_r(f, |t|) \, d\mu_\xi(t). \tag{11}$$
Inequality (10) is sharp.
Theorem 2.
Inequality (10) at $x = 0$ is attained by $f(x) = x^{r+n}$, $r, n \in \mathbb{N}$ with $r + n$ even.
Corollary 2.
Inequality (11) is sharp; that is, it is attained at $x = 0$ by $f(x) = x^r$, $r$ even.
Remark 1.
Regarding inequalities (10) and (11), we have the uniform estimates
$$\left\| \Theta_{r,\xi}(f; x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k c_{k,\xi} \right\|_{\infty, x} \le \int_{-\infty}^{\infty} \tilde{G}_n(t) \, d\mu_\xi(t), \quad n \in \mathbb{N},$$
and
$$\left\| \Theta_{r,\xi}(f; x) - f(x) \right\|_{\infty, x} \le \int_{-\infty}^{\infty} \omega_r(f, |t|) \, d\mu_\xi(t), \quad n = 0.$$
We also need the next result.
Theorem 3.
Let $f \in C^n(\mathbb{R})$, $n \in \mathbb{Z}_+$. Set $c_{k,\xi} := \int_{-\infty}^{\infty} t^k \, d\mu_\xi(t)$, $k = 1, \dots, n$. Assume also that $\omega_r(f^{(n)}, h) < \infty$ for every $h > 0$, and that
$$\int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r d\mu_\xi(t) < \infty.$$
Then,
$$\left\| \Theta_{r,\xi}(f; x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k c_{k,\xi} \right\|_{\infty, x} \le \frac{\omega_r(f^{(n)}, \xi)}{n!} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r d\mu_\xi(t). \tag{15}$$
When $n = 0$, the sum on the left-hand side of (15) collapses.

3. On Activation Functions

3.1. About Richards’s Curve

Here, we follow [7], Chapter 1.
A Richards’s curve is
φ x = 1 1 + e μ x ; x R , μ > 0 ,
which is strictly increasing on R , and it is a sigmoid function; in particular, this is a generalized logistic function. And it is an activation function in neural networks; see [7], Chapter 1.
It is
lim x + φ x = 1 and lim x φ x = 0 .
We consider the function
G x = 1 2 φ x + 1 φ x 1 , x R ,
which is G ( x ) > 0 , where all x R .
It holds that
$$\varphi(0) = \frac{1}{2}, \qquad \varphi(-x) = 1 - \varphi(x),$$
and
$$G(-x) = G(x), \quad \forall x \in \mathbb{R}.$$
We also obtain
$$\lim_{x \to +\infty} G(x) = \lim_{x \to -\infty} G(x) = 0,$$
and $G$ is a bell-shaped symmetric function with maximum
$$G(0) = \frac{e^{\mu} - 1}{2\left( e^{\mu} + 1 \right)}.$$
Theorem 4.
It holds that
$$\sum_{i=-\infty}^{\infty} G(x - i) = 1, \quad \forall x \in \mathbb{R}.$$
Theorem 5.
It holds that
$$\int_{-\infty}^{\infty} G(x) \, dx = 1.$$
Therefore, G is a density function.
Remark 2.
Recall that
$$G(x) = \frac{1}{2} \left( \varphi(x+1) - \varphi(x-1) \right), \quad x \in \mathbb{R}.$$
(i) Let $x \ge 1$; that is, $0 \le x - 1 < x + 1$. Applying the mean value theorem, we obtain
$$G(x) = \frac{1}{2} \cdot 2 \cdot \varphi'(\eta) = \varphi'(\eta) = \frac{\mu e^{-\mu \eta}}{\left( 1 + e^{-\mu \eta} \right)^2}, \quad \mu > 0,$$
where $0 \le x - 1 < \eta < x + 1$.
Notice that
$$G(x) < \mu e^{-\mu \eta} < \mu e^{-\mu (x - 1)}, \quad \forall x \ge 1. \tag{28}$$
(ii) Now, let $x \le -1$; that is, $x - 1 < x + 1 \le 0$. Applying the mean value theorem again, we obtain
$$G(x) = \frac{1}{2} \cdot 2 \cdot \varphi'(\eta) = \varphi'(\eta) = \frac{\mu e^{-\mu \eta}}{\left( 1 + e^{-\mu \eta} \right)^2},$$
where $x - 1 < \eta < x + 1 \le 0$.
Hence, we derive that
$$G(x) < \frac{\mu e^{-\mu \eta}}{\left( e^{-\mu \eta} \right)^2} = \mu e^{\mu \eta} < \mu e^{\mu (x + 1)}, \quad \forall x \le -1.$$
Consequently, we have proved that
$$G(x) < \mu e^{-\mu (|x| - 1)}, \quad \forall x \in (-\infty, -1] \cup [1, +\infty) = \mathbb{R} \setminus (-1, 1).$$
Let $0 < \xi \le 1$; it holds that
$$G\left( \frac{x}{\xi} \right) < \mu e^{-\mu \left( \frac{|x|}{\xi} - 1 \right)}, \quad \text{for } x \ge \xi \text{ or } x \le -\xi.$$
Clearly, by Theorem 5, we have
$$\frac{1}{\xi} \int_{-\infty}^{\infty} G\left( \frac{x}{\xi} \right) dx = 1.$$
Therefore, $\frac{1}{\xi} G\left( \frac{x}{\xi} \right)$ is a density function, and we let $d\mu_\xi(x) := \frac{1}{\xi} G\left( \frac{x}{\xi} \right) dx$; that is, $\mu_\xi$ is a Borel probability measure.
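As a quick numerical illustration (a sketch assuming NumPy and SciPy; $\mu$ and $\xi$ are sample values), one can check that the scaled kernel integrates to 1 and that $G$ obeys the decay bound off $(-1, 1)$:

```python
# Numerical check (a sketch; mu and xi are sample values) that the scaled
# Richards kernel is a probability density and satisfies the decay bound
# G(x) < mu * exp(-mu * (|x| - 1)) off (-1, 1). Assumes NumPy and SciPy.
import numpy as np
from scipy.integrate import quad

mu = 2.0
phi = lambda x: 1.0 / (1.0 + np.exp(-mu * x))        # Richards's curve
G = lambda x: 0.5 * (phi(x + 1.0) - phi(x - 1.0))    # the density kernel

xi = 0.3
f = lambda x: G(x / xi) / xi
# split at 0 so the quadrature resolves the kernel's peak
total = quad(f, -np.inf, 0.0)[0] + quad(f, 0.0, np.inf)[0]
print(total)                                          # ~1.0

xs = np.linspace(1.0, 10.0, 50)
print(np.all(G(xs) < mu * np.exp(-mu * (xs - 1.0))))  # right tail: True
print(np.all(G(-xs) < mu * np.exp(-mu * (xs - 1.0)))) # left tail (G even): True
```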
We give the following important result.
Theorem 6.
Let $0 < \xi \le 1$, and
$$c^*_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k \, G\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $c^*_{k,\xi}$ are finite and $c^*_{k,\xi} \to 0$ as $\xi \to 0$.
Proof. 
We can write
$$c^*_{k,\xi} = \frac{1}{\xi} \int_{-\infty}^{\infty} x^k \, G\left( \frac{x}{\xi} \right) dx = \frac{1}{\xi} \left[ \int_{-\infty}^{-\xi} x^k \, G\left( \frac{x}{\xi} \right) dx + \int_{-\xi}^{\xi} x^k \, G\left( \frac{x}{\xi} \right) dx + \int_{\xi}^{\infty} x^k \, G\left( \frac{x}{\xi} \right) dx \right].$$
We notice that
$$\left| \int_{-\xi}^{\xi} x^k \, G\left( \frac{x}{\xi} \right) dx \right| \le \int_{-\xi}^{\xi} |x|^k \, G\left( \frac{x}{\xi} \right) dx \le \xi^k \int_{-\xi}^{\xi} G\left( \frac{x}{\xi} \right) dx \le \xi^{k+1} \int_{-\infty}^{\infty} G\left( \frac{x}{\xi} \right) \frac{dx}{\xi} = \xi^{k+1} < \infty. \tag{36}$$
Next, we have
$$\int_{\xi}^{\infty} x^k \, G\left( \frac{x}{\xi} \right) dx = \xi^{k+1} \int_{\xi}^{\infty} \left( \frac{x}{\xi} \right)^k G\left( \frac{x}{\xi} \right) \frac{dx}{\xi}$$
(setting $y := \frac{x}{\xi}$; as $\xi \le x < \infty$, we have $1 \le y < \infty$)
$$= \xi^{k+1} \int_1^{\infty} y^k \, G(y) \, dy \overset{(28)}{\le} \mu \, \xi^{k+1} \int_1^{\infty} y^k e^{-\mu(y-1)} \, dy.$$
Recall the gamma function $\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t} \, dt$, $z > 0$. We continue with
$$\mu \, \xi^{k+1} \int_1^{\infty} y^k e^{-\mu(y-1)} \, dy \le \mu \, \xi^{k+1} e^{\mu} \int_0^{\infty} y^k e^{-\mu y} \, dy = \xi^{k+1} e^{\mu} \frac{\mu}{\mu^{k+1}} \int_0^{\infty} (\mu y)^k e^{-\mu y} \, d(\mu y) = \mu^{-k} e^{\mu} \xi^{k+1} \int_0^{\infty} z^k e^{-z} \, dz = \mu^{-k} e^{\mu} \xi^{k+1} \, \Gamma(k+1).$$
We have established that
$$\int_{\xi}^{\infty} x^k \, G\left( \frac{x}{\xi} \right) dx \le \mu^{-k} e^{\mu} \, \Gamma(k+1) \, \xi^{k+1} < \infty. \tag{39}$$
Finally, we observe that
$$\left| \int_{-\infty}^{-\xi} x^k \, G\left( \frac{x}{\xi} \right) dx \right| = \xi^{k+1} \left| \int_{-\infty}^{-\xi} \left( \frac{x}{\xi} \right)^k G\left( \frac{x}{\xi} \right) \frac{dx}{\xi} \right|$$
(setting $z := \frac{x}{\xi}$; as $-\infty < x \le -\xi$, we have $-\infty < z \le -1$)
$$= \xi^{k+1} \left| \int_{-\infty}^{-1} z^k \, G(z) \, dz \right| \le \xi^{k+1} \int_{-\infty}^{-1} |z|^k \, G(z) \, dz$$
(setting $y := -z$; as $-\infty < z \le -1$, we have $1 \le y < \infty$, and $G$ is even)
$$= \xi^{k+1} \int_1^{\infty} y^k \, G(y) \, dy \overset{(28)}{\le} \xi^{k+1} \mu \int_1^{\infty} y^k e^{-\mu(y-1)} \, dy \le \xi^{k+1} \mu \, e^{\mu} \int_0^{\infty} y^k e^{-\mu y} \, dy = \xi^{k+1} e^{\mu} \mu^{-k} \, \Gamma(k+1).$$
Therefore, we have proved that
$$\left| \int_{-\infty}^{-\xi} x^k \, G\left( \frac{x}{\xi} \right) dx \right| \le \mu^{-k} e^{\mu} \, \Gamma(k+1) \, \xi^{k+1} < \infty. \tag{43}$$
Combining (36), (39) and (43), we derive
$$\left| c^*_{k,\xi} \right| \le \frac{1}{\xi} \left[ \xi^{k+1} + 2 \mu^{-k} e^{\mu} \, \Gamma(k+1) \, \xi^{k+1} \right] = \left( 1 + 2 \mu^{-k} e^{\mu} k! \right) \xi^k < \infty, \tag{44}$$
and $c^*_{k,\xi} \to 0$ as $\xi \to 0$; $k = 1, \dots, n$. □
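The behavior just proved can be observed numerically; the sketch below (assuming NumPy and SciPy; $\mu$, $k$ and the $\xi$ values are sample choices) compares $c^*_{k,\xi}$ with the bound (44):

```python
# Numerical sketch of Theorem 6: |c*_{k,xi}| stays below
# (1 + 2 * mu^(-k) * e^mu * k!) * xi^k and tends to 0 as xi -> 0.
# Assumes NumPy/SciPy; mu, k and the xi values are sample choices.
import numpy as np
from math import exp, factorial
from scipy.integrate import quad

mu = 2.0
phi = lambda x: 1.0 / (1.0 + np.exp(-mu * x))
G = lambda x: 0.5 * (phi(x + 1.0) - phi(x - 1.0))

def c_star(k, xi):
    f = lambda x: x ** k * G(x / xi) / xi
    # split at 0 so the quadrature resolves the sharp peak of the kernel
    return quad(f, -np.inf, 0.0)[0] + quad(f, 0.0, np.inf)[0]

k = 2
for xi in (1.0, 0.5, 0.1, 0.05):
    bound = (1.0 + 2.0 * mu ** (-k) * exp(mu) * factorial(k)) * xi ** k
    print(xi, c_star(k, xi), bound)   # moment -> 0, always below the bound
```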
Next, we give the following.
Theorem 7.
It holds that
$$\int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r d\mu_\xi(t) < \infty; \quad r, n \in \mathbb{N},$$
for
$$d\mu_\xi(x) = \frac{1}{\xi} G\left( \frac{x}{\xi} \right) dx, \quad 0 < \xi \le 1.$$
Moreover, this integral converges to zero as $\xi \to 0$.
Proof. 
We observe that, by the elementary convexity inequality $(1 + u)^r \le 2^{r-1}(1 + u^r)$, $u \ge 0$,
$$\int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r d\mu_\xi(t) \le 2^{r-1} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|^r}{\xi^r} \right) d\mu_\xi(t) = 2^{r-1} \left[ \int_{-\infty}^{\infty} |t|^n \, d\mu_\xi(t) + \frac{1}{\xi^r} \int_{-\infty}^{\infty} |t|^{n+r} \, d\mu_\xi(t) \right].$$
Therefore, we have
$$\frac{1}{\xi} \int_{-\infty}^{\infty} |x|^n \left( 1 + \frac{|x|}{\xi} \right)^r G\left( \frac{x}{\xi} \right) dx \le 2^{r-1} \left[ \frac{1}{\xi} \int_{-\infty}^{\infty} |x|^n \, G\left( \frac{x}{\xi} \right) dx + \frac{1}{\xi^r} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |x|^{n+r} \, G\left( \frac{x}{\xi} \right) dx \right]$$
$$\overset{(44)}{\le} 2^{r-1} \left[ \left( 1 + 2 \mu^{-n} e^{\mu} n! \right) \xi^n + \frac{1}{\xi^r} \left( 1 + 2 \mu^{-(n+r)} e^{\mu} (n+r)! \right) \xi^{n+r} \right] = 2^{r-1} \left[ \left( 1 + 2 \mu^{-n} e^{\mu} n! \right) + \left( 1 + 2 \mu^{-(n+r)} e^{\mu} (n+r)! \right) \right] \xi^n < \infty,$$
and it converges to zero as $\xi \to 0$. □

3.2. About the q-Deformed and $\lambda$-Parametrized Hyperbolic Tangent Function $g_{q,\lambda}$

We consider the activation function $g_{q,\lambda}$ and study its related properties; all the basics come from [7], Ch. 17.
Let the activation function be
$$g_{q,\lambda}(x) = \frac{e^{\lambda x} - q e^{-\lambda x}}{e^{\lambda x} + q e^{-\lambda x}}, \quad \lambda, q > 0, \; x \in \mathbb{R}.$$
It satisfies
$$g_{q,\lambda}(0) = \frac{1 - q}{1 + q},$$
and
$$g_{q,\lambda}(-x) = -g_{\frac{1}{q},\lambda}(x), \quad \forall x \in \mathbb{R},$$
with
$$g_{q,\lambda}(+\infty) = 1, \qquad g_{q,\lambda}(-\infty) = -1.$$
We consider the function
$$M_{q,\lambda}(x) := \frac{1}{4} \left( g_{q,\lambda}(x+1) - g_{q,\lambda}(x-1) \right) > 0,$$
$\forall x \in \mathbb{R}$; $q, \lambda > 0$. We have $M_{q,\lambda}(\pm\infty) = 0$, so the $x$-axis is a horizontal asymptote.
It holds that
$$M_{q,\lambda}(-x) = M_{\frac{1}{q},\lambda}(x), \quad \forall x \in \mathbb{R}; \; q, \lambda > 0,$$
and
$$M_{\frac{1}{q},\lambda}(-x) = M_{q,\lambda}(x), \quad \forall x \in \mathbb{R}.$$
The maximum of $M_{q,\lambda}$ is
$$M_{q,\lambda}\left( \frac{\ln q}{2\lambda} \right) = \frac{\tanh \lambda}{2}, \quad \lambda > 0.$$
Theorem 8.
We find that
$$\sum_{i=-\infty}^{\infty} M_{q,\lambda}(x - i) = 1, \quad \forall x \in \mathbb{R}; \; \lambda, q > 0.$$
Theorem 9.
It holds that
$$\int_{-\infty}^{\infty} M_{q,\lambda}(x) \, dx = 1, \quad \lambda, q > 0.$$
Therefore, $M_{q,\lambda}$ is a density function on $\mathbb{R}$; $\lambda, q > 0$.
Remark 3.
(i) Let $x \ge 1$; that is, $0 \le x - 1 < x + 1$. By the mean value theorem, we obtain
$$M_{q,\lambda}(x) = \frac{1}{4} \left( g_{q,\lambda}(x+1) - g_{q,\lambda}(x-1) \right) = \frac{1}{4} \cdot 2 \cdot \frac{4 q \lambda e^{2\lambda\eta}}{\left( e^{2\lambda\eta} + q \right)^2} = \frac{2 q \lambda e^{2\lambda\eta}}{\left( e^{2\lambda\eta} + q \right)^2},$$
for some $0 \le x - 1 < \eta < x + 1$; $\lambda, q > 0$.
But $e^{2\lambda\eta} < e^{2\lambda\eta} + q$, so
$$M_{q,\lambda}(x) < \frac{2 q \lambda \left( e^{2\lambda\eta} + q \right)}{\left( e^{2\lambda\eta} + q \right)^2} = \frac{2 q \lambda}{e^{2\lambda\eta} + q} < \frac{2 q \lambda}{e^{2\lambda(x-1)} + q} < 2 q \lambda \, e^{-2\lambda(x-1)},$$
$\forall x \ge 1$.
That is,
$$M_{q,\lambda}(x) < 2 q \lambda \, e^{-2\lambda(x-1)}, \quad \forall x \ge 1.$$
Set $\mu := 2\lambda$; then,
$$M_{q,\lambda}(x) < q \mu \, e^{-\mu(x-1)}, \quad \forall x \ge 1.$$
(ii) Now, let $x \le -1$; that is, $x - 1 < x + 1 \le 0$. By the symmetry property $M_{q,\lambda}(x) = M_{\frac{1}{q},\lambda}(-x)$, with $-x \ge 1$, the part (i) bound applied to $M_{\frac{1}{q},\lambda}$ gives
$$M_{q,\lambda}(x) = M_{\frac{1}{q},\lambda}(-x) < \frac{\mu}{q} \, e^{-\mu(-x-1)} = \frac{\mu}{q} \, e^{-\mu(|x|-1)}, \quad \forall x \le -1.$$
Consequently, we have proved that
$$M_{q,\lambda}(x) < \mu \max\left\{ q, \frac{1}{q} \right\} e^{-\mu(|x|-1)}, \quad \forall x \in (-\infty, -1] \cup [1, +\infty) = \mathbb{R} \setminus (-1, 1).$$
Let $0 < \xi \le 1$; it holds that
$$M_{q,\lambda}\left( \frac{x}{\xi} \right) < \mu \max\left\{ q, \frac{1}{q} \right\} e^{-\mu\left( \frac{|x|}{\xi} - 1 \right)}, \quad \text{for } x \ge \xi \text{ or } x \le -\xi.$$
According to Theorem 9, we find that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} M_{q,\lambda}\left( \frac{x}{\xi} \right) dx = 1.$$
Therefore, $\frac{1}{\xi} M_{q,\lambda}\left( \frac{x}{\xi} \right)$ is a density function, and we let
$$d\mu_\xi(x) := \frac{1}{\xi} M_{q,\lambda}\left( \frac{x}{\xi} \right) dx;$$
that is, $\mu_\xi$ is a Borel probability measure.
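As a numerical cross-check (a sketch assuming NumPy and SciPy; $q$ and $\lambda$ are sample parameters), one can confirm Theorem 9 as well as the stated location and value of the maximum of $M_{q,\lambda}$:

```python
# Numerical sketch: M_{q,lambda} integrates to 1 and attains its maximum
# tanh(lambda)/2 at x = ln(q) / (2 * lambda). Assumes NumPy/SciPy; q and
# lam are sample parameters.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

q, lam = 1.5, 0.8

def g(x):
    ep, em = np.exp(lam * x), np.exp(-lam * x)
    return (ep - q * em) / (ep + q * em)

M = lambda x: 0.25 * (g(x + 1.0) - g(x - 1.0))

print(quad(M, -np.inf, np.inf)[0])             # ~1.0 (Theorem 9)

res = minimize_scalar(lambda x: -M(x), bounds=(-5.0, 5.0), method="bounded")
print(res.x, np.log(q) / (2.0 * lam))          # argmax vs ln(q)/(2*lam)
print(-res.fun, np.tanh(lam) / 2.0)            # max value vs tanh(lam)/2
```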
We give the following.
Theorem 10.
Let $0 < \xi \le 1$, and
$$\bar{c}_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $\bar{c}_{k,\xi}$ are finite and $\bar{c}_{k,\xi} \to 0$ as $\xi \to 0$.
Proof. 
We can write
$$\bar{c}_{k,\xi} = \frac{1}{\xi} \int_{-\infty}^{\infty} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx = \frac{1}{\xi} \left[ \int_{-\infty}^{-\xi} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx + \int_{-\xi}^{\xi} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx + \int_{\xi}^{\infty} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx \right].$$
We notice that
$$\left| \int_{-\xi}^{\xi} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx \right| \le \int_{-\xi}^{\xi} |x|^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx \le \xi^k \int_{-\xi}^{\xi} M_{q,\lambda}\left( \frac{x}{\xi} \right) dx \le \xi^{k+1} \int_{-\infty}^{\infty} M_{q,\lambda}\left( \frac{x}{\xi} \right) \frac{dx}{\xi} = \xi^{k+1} < \infty.$$
Next, we have
$$\int_{\xi}^{\infty} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx = \xi^{k+1} \int_{\xi}^{\infty} \left( \frac{x}{\xi} \right)^k M_{q,\lambda}\left( \frac{x}{\xi} \right) \frac{dx}{\xi}$$
(setting $y := \frac{x}{\xi}$; as $\xi \le x < \infty$, we have $1 \le y < \infty$)
$$= \xi^{k+1} \int_1^{\infty} y^k \, M_{q,\lambda}(y) \, dy \le q \mu \, \xi^{k+1} \int_1^{\infty} y^k e^{-\mu(y-1)} \, dy \le q \mu \, \xi^{k+1} e^{\mu} \int_0^{\infty} y^k e^{-\mu y} \, dy = q \, \xi^{k+1} e^{\mu} \mu^{-k} \int_0^{\infty} (\mu y)^k e^{-\mu y} \, d(\mu y) = q \mu^{-k} e^{\mu} \xi^{k+1} \, \Gamma(k+1),$$
by the part (i) bound of Remark 3.
We have established that
$$\int_{\xi}^{\infty} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx \le q \mu^{-k} e^{\mu} \, \Gamma(k+1) \, \xi^{k+1} < \infty.$$
Finally, we observe that
$$\left| \int_{-\infty}^{-\xi} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx \right| = \xi^{k+1} \left| \int_{-\infty}^{-\xi} \left( \frac{x}{\xi} \right)^k M_{q,\lambda}\left( \frac{x}{\xi} \right) \frac{dx}{\xi} \right| \le \xi^{k+1} \int_{-\infty}^{-1} |z|^k \, M_{q,\lambda}(z) \, dz$$
(setting $z := \frac{x}{\xi}$; as $-\infty < x \le -\xi$, we have $-\infty < z \le -1$; then setting $y := -z$ and using the symmetry $M_{q,\lambda}(-y) = M_{\frac{1}{q},\lambda}(y)$)
$$= \xi^{k+1} \int_1^{\infty} y^k \, M_{\frac{1}{q},\lambda}(y) \, dy \le \frac{\mu}{q} \, \xi^{k+1} \int_1^{\infty} y^k e^{-\mu(y-1)} \, dy \le \frac{\mu}{q} \, \xi^{k+1} e^{\mu} \int_0^{\infty} y^k e^{-\mu y} \, dy = \frac{1}{q} \, \xi^{k+1} e^{\mu} \mu^{-k} \, \Gamma(k+1),$$
by the part (i) bound of Remark 3 applied to $M_{\frac{1}{q},\lambda}$.
Therefore, we have proved that
$$\left| \int_{-\infty}^{-\xi} x^k \, M_{q,\lambda}\left( \frac{x}{\xi} \right) dx \right| \le \frac{1}{q} \mu^{-k} e^{\mu} \, \Gamma(k+1) \, \xi^{k+1} < \infty.$$
Combining the three estimates above, we obtain
$$\left| \bar{c}_{k,\xi} \right| \le \frac{1}{\xi} \left[ \xi^{k+1} + \left( q + \frac{1}{q} \right) \mu^{-k} e^{\mu} \, \Gamma(k+1) \, \xi^{k+1} \right] = \left[ 1 + \left( q + \frac{1}{q} \right) \mu^{-k} e^{\mu} k! \right] \xi^k < \infty,$$
and $\bar{c}_{k,\xi} \to 0$ as $\xi \to 0$; $k = 1, \dots, n$. □
Furthermore, we present the following result.
Theorem 11.
It holds that
$$I_{M,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r M_{q,\lambda}\left( \frac{t}{\xi} \right) dt < \infty,$$
where $\lambda, q > 0$; $r, n \in \mathbb{N}$; $0 < \xi \le 1$.
Moreover, $I_{M,\xi} \to 0$ as $\xi \to 0$.
Proof. 
We find that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r M_{q,\lambda}\left( \frac{t}{\xi} \right) dt \le \frac{2^{r-1}}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|^r}{\xi^r} \right) M_{q,\lambda}\left( \frac{t}{\xi} \right) dt$$
$$= 2^{r-1} \left[ \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \, M_{q,\lambda}\left( \frac{t}{\xi} \right) dt + \frac{1}{\xi^r} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^{n+r} \, M_{q,\lambda}\left( \frac{t}{\xi} \right) dt \right]$$
(by the estimates in the proof of Theorem 10)
$$\le 2^{r-1} \left[ \left( 1 + \left( q + \frac{1}{q} \right) \mu^{-n} e^{\mu} n! \right) \xi^n + \frac{1}{\xi^r} \left( 1 + \left( q + \frac{1}{q} \right) \mu^{-(n+r)} e^{\mu} (n+r)! \right) \xi^{n+r} \right]$$
$$= 2^{r-1} \left[ \left( 1 + \left( q + \frac{1}{q} \right) \mu^{-n} e^{\mu} n! \right) + \left( 1 + \left( q + \frac{1}{q} \right) \mu^{-(n+r)} e^{\mu} (n+r)! \right) \right] \xi^n < \infty,$$
and it converges to zero as $\xi \to 0$. □

3.3. About the Gudermannian Generated Activation Function

Here, we follow [6], Ch. 2.
Let the related normalized generator sigmoid function be
$$f(x) := \frac{8}{\pi} \int_0^x \frac{dt}{e^t + e^{-t}}, \quad x \in \mathbb{R},$$
and the neural network activation function be
$$\psi(x) := \frac{1}{4} \left( f(x+1) - f(x-1) \right) > 0, \quad x \in \mathbb{R}.$$
We mention the following.
Theorem 12.
It holds that
$$\int_{-\infty}^{\infty} \psi(x) \, dx = 1.$$
Therefore, $\psi(x)$ is a density function.
According to [6], p. 49, we have
$$\psi(x) < \frac{2}{\pi \cosh(x-1)}, \quad \forall x \ge 1.$$
But
$$\frac{1}{\cosh(x-1)} = \frac{2}{e^{x-1} + e^{-(x-1)}} < \frac{2}{e^{x-1}} = 2 e^{-(x-1)}, \quad \forall x \in \mathbb{R}.$$
Therefore, since $\psi$ is even,
$$\psi(x) < \frac{4}{\pi} e^{-(|x|-1)} = \frac{4e}{\pi} e^{-|x|}, \quad \forall |x| \ge 1.$$
So the following,
$$d\mu_\xi(x) = \frac{1}{\xi} \psi\left( \frac{x}{\xi} \right) dx, \quad 0 < \xi \le 1,$$
is the related Borel probability measure.
We give the following results; the proofs, being similar to those of Theorems 6 and 7, are omitted.
Theorem 13.
Let $0 < \xi \le 1$, and
$$\gamma_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k \, \psi\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $\gamma_{k,\xi}$ are finite and $\gamma_{k,\xi} \to 0$ as $\xi \to 0$.
Theorem 14.
It holds that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r \psi\left( \frac{t}{\xi} \right) dt < \infty,$$
where $r, n \in \mathbb{N}$; $0 < \xi \le 1$.
Moreover, this integral converges to zero as $\xi \to 0$.

3.4. About the q-Deformed and $\lambda$-Parametrized Logistic-Type Activation Function

Here, everything comes from [7], Ch. 15.
The activation function is now
$$\varphi_{q,\lambda}(x) := \frac{1}{1 + q e^{-\lambda x}}, \quad x \in \mathbb{R},$$
where $q, \lambda > 0$.
The density function here will be
$$G_{q,\lambda}(x) := \frac{1}{2} \left( \varphi_{q,\lambda}(x+1) - \varphi_{q,\lambda}(x-1) \right) > 0, \quad x \in \mathbb{R}.$$
We mention the following.
Theorem 15.
It holds that
$$\int_{-\infty}^{\infty} G_{q,\lambda}(x) \, dx = 1.$$
According to [7], p. 373, we find
$$G_{q,\lambda}(x) < q \lambda \, e^{-\lambda(x-1)}, \quad \forall x \ge 1.$$
So the following,
$$d\mu_\xi(x) = \frac{1}{\xi} G_{q,\lambda}\left( \frac{x}{\xi} \right) dx, \quad 0 < \xi \le 1,$$
is the related Borel probability measure.
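A brief numerical check (a sketch assuming NumPy and SciPy; $q$ and $\lambda$ are sample parameters) of Theorem 15 and of the quoted tail bound:

```python
# Numerical sketch: the q-deformed logistic kernel G_{q,lambda} integrates
# to 1 and satisfies G_{q,lambda}(x) < q * lam * exp(-lam * (x - 1)) for
# x >= 1. Assumes NumPy/SciPy; q and lam are sample parameters.
import numpy as np
from scipy.integrate import quad

q, lam = 2.0, 1.5
phi_q = lambda x: 1.0 / (1.0 + q * np.exp(-lam * x))
Gq = lambda x: 0.5 * (phi_q(x + 1.0) - phi_q(x - 1.0))

print(quad(Gq, -np.inf, np.inf)[0])                          # ~1.0
xs = np.linspace(1.0, 12.0, 60)
print(np.all(Gq(xs) < q * lam * np.exp(-lam * (xs - 1.0))))  # True
```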
We give the following results; the proofs, being similar to those of Theorems 10 and 11, are omitted.
Theorem 16.
Let
$$\bar{\delta}_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k \, G_{q,\lambda}\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $\bar{\delta}_{k,\xi}$ are finite and $\bar{\delta}_{k,\xi} \to 0$ as $\xi \to 0$.
Theorem 17.
It holds that
$$I_{G_{q,\lambda},\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r G_{q,\lambda}\left( \frac{t}{\xi} \right) dt < \infty,$$
where $\lambda, q > 0$; $r, n \in \mathbb{N}$; $0 < \xi \le 1$.
Moreover, $I_{G_{q,\lambda},\xi} \to 0$ as $\xi \to 0$.

3.5. About the q-Deformed and $\beta$-Parametrized Half-Hyperbolic Tangent Function $\varphi_{q,\beta}$

Here, everything comes from [7], Ch. 19.
The activation function is now
$$\varphi_{q,\beta}(x) := \frac{1 - q e^{-\beta x}}{1 + q e^{-\beta x}}, \quad x \in \mathbb{R},$$
where $q, \beta > 0$.
The corresponding density function will be
$$\Phi_{q,\beta}(x) := \frac{1}{4} \left( \varphi_{q,\beta}(x+1) - \varphi_{q,\beta}(x-1) \right) > 0, \quad x \in \mathbb{R}.$$
Theorem 18.
$$\int_{-\infty}^{\infty} \Phi_{q,\beta}(x) \, dx = 1.$$
According to [7], p. 481, we find that
$$\Phi_{q,\beta}(x) < \beta q \, e^{-\beta(x-1)}, \quad \forall x \ge 1.$$
Thus, the following,
$$d\mu_\xi(x) = \frac{1}{\xi} \Phi_{q,\beta}\left( \frac{x}{\xi} \right) dx, \quad 0 < \xi \le 1,$$
is the related Borel probability measure.
We state the following results; the proofs, being similar to those of Theorems 10 and 11, are omitted.
Theorem 19.
Let
$$\varepsilon_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k \, \Phi_{q,\beta}\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $\varepsilon_{k,\xi}$ are finite and $\varepsilon_{k,\xi} \to 0$ as $\xi \to 0$.
Theorem 20.
It holds that
$$I_{\Phi_{q,\beta},\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r \Phi_{q,\beta}\left( \frac{t}{\xi} \right) dt < \infty,$$
where $q, \beta > 0$; $r, n \in \mathbb{N}$; $0 < \xi \le 1$.
Moreover, $I_{\Phi_{q,\beta},\xi} \to 0$ as $\xi \to 0$.
In Section 2, we generally assumed that $\Theta_{r,\xi}(f, x) \in \mathbb{R}$ for all $x \in \mathbb{R}$. The next result shows that this assumption is easily satisfied.
Theorem 21.
Let $f : \mathbb{R} \to \mathbb{R}$ be a Lipschitz function; that is, there exists $K > 0$ such that $|f(x) - f(y)| \le K |x - y|$ for all $x, y \in \mathbb{R}$. Here, $\mu_\xi$ is a general Borel probability measure on $\mathbb{R}$ such that $\int_{-\infty}^{\infty} |t| \, d\mu_\xi(t)$ is finite. Then, $\Theta_{r,\xi}(f, x)$, defined by (2), is finite; i.e., $\Theta_{r,\xi}(f; x) \in \mathbb{R}$, $\forall x \in \mathbb{R}$.
Proof. 
We find that
$$\Theta_{r,\xi}(f, x) - f(x) = \sum_{j=1}^{r} \alpha_j \int_{-\infty}^{\infty} \left( f(x + jt) - f(x) \right) d\mu_\xi(t).$$
Thus,
$$\left| \Theta_{r,\xi}(f, x) - f(x) \right| \le \sum_{j=1}^{r} |\alpha_j| \int_{-\infty}^{\infty} \left| f(x + jt) - f(x) \right| d\mu_\xi(t) \le \left( \sum_{j=1}^{r} |\alpha_j| \, j \right) K \int_{-\infty}^{\infty} |t| \, d\mu_\xi(t) =: F_f < \infty.$$
That is,
$$\left| \Theta_{r,\xi}(f, x) - f(x) \right| \le F_f < \infty.$$
Notice that
$$\Theta_{r,\xi}(f, x) = \left( \Theta_{r,\xi}(f, x) - f(x) \right) + f(x),$$
and
$$\left| \Theta_{r,\xi}(f, x) \right| \le F_f + |f(x)| < \infty,$$
proving the claim. □

4. Main Results

Here, we describe the pointwise and uniform approximation properties of the following activated singular operators, which are special cases of $\Theta_{r,\xi}(f, x)$; see Section 2. Their definitions are based on Section 3. Essentially, we apply the results listed in Section 2.
Definition 1.
Let $f : \mathbb{R} \to \mathbb{R}$ be Borel-measurable and $\alpha_j$ as in (1), $x \in \mathbb{R}$, $0 < \xi \le 1$.
We define
(1)
$$\Theta_{1,r,\xi}(f, x) = \frac{1}{\xi} \int_{-\infty}^{\infty} \left( \sum_{j=0}^{r} \alpha_j f(x + jt) \right) G\left( \frac{t}{\xi} \right) dt,$$
(2)
$$\Theta_{2,r,\xi}(f, x) = \frac{1}{\xi} \int_{-\infty}^{\infty} \left( \sum_{j=0}^{r} \alpha_j f(x + jt) \right) M_{q,\lambda}\left( \frac{t}{\xi} \right) dt, \quad q, \lambda > 0,$$
(3)
$$\Theta_{3,r,\xi}(f, x) = \frac{1}{\xi} \int_{-\infty}^{\infty} \left( \sum_{j=0}^{r} \alpha_j f(x + jt) \right) \psi\left( \frac{t}{\xi} \right) dt,$$
(4)
$$\Theta_{4,r,\xi}(f, x) = \frac{1}{\xi} \int_{-\infty}^{\infty} \left( \sum_{j=0}^{r} \alpha_j f(x + jt) \right) G_{q,\lambda}\left( \frac{t}{\xi} \right) dt, \quad q, \lambda > 0,$$
and
(5)
$$\Theta_{5,r,\xi}(f, x) = \frac{1}{\xi} \int_{-\infty}^{\infty} \left( \sum_{j=0}^{r} \alpha_j f(x + jt) \right) \Phi_{q,\beta}\left( \frac{t}{\xi} \right) dt, \quad q, \beta > 0.$$
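Before stating the results, we include a minimal computational sketch (assuming NumPy and SciPy; the parameters, test function and sample points are illustrative) of $\Theta_{1,r,\xi}$ evaluated by quadrature; the other four operators differ only in the kernel. The uniform error visibly shrinks as $\xi \to 0$, in line with the theorems below.

```python
# Numerical sketch of Theta_{1,r,xi} from Definition 1(1); the remaining
# operators just swap the kernel G for M_{q,lambda}, psi, G_{q,lambda} or
# Phi_{q,beta}. Assumes NumPy/SciPy; parameters are illustrative.
import numpy as np
from math import comb
from scipy.integrate import quad

mu = 2.0
phi = lambda x: 1.0 / (1.0 + np.exp(-mu * x))
G = lambda x: 0.5 * (phi(x + 1.0) - phi(x - 1.0))

def alphas(r, n):
    a = np.zeros(r + 1)
    for j in range(1, r + 1):
        a[j] = (-1) ** (r - j) * comb(r, j) * float(j) ** (-n)
    a[0] = 1.0 - a[1:].sum()
    return a

def theta1(f, x, r=2, n=1, xi=0.1):
    """Theta_{1,r,xi}(f, x) = (1/xi) * int sum_j alpha_j f(x + j*t) G(t/xi) dt."""
    a = alphas(r, n)
    integrand = lambda t: sum(a[j] * f(x + j * t) for j in range(r + 1)) * G(t / xi) / xi
    # split at 0 so the quadrature resolves the kernel's peak
    return (quad(integrand, -np.inf, 0.0, limit=200)[0]
            + quad(integrand, 0.0, np.inf, limit=200)[0])

for xi in (0.5, 0.1, 0.02):
    err = max(abs(theta1(np.sin, x, xi=xi) - np.sin(x)) for x in np.linspace(-3, 3, 7))
    print(xi, err)    # the uniform error shrinks as xi -> 0
```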
We give the following results grouped by operator.
Theorem 22.
It holds that
$$\left| \Theta_{1,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k c^*_{k,\xi} \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, G\left( \frac{t}{\xi} \right) dt.$$
The last inequality, at $x = 0$, is attained by $f(x) = x^{r+n}$, $r, n \in \mathbb{N}$ with $r + n$ even.
Proof. 
Theorems 1 and 6. □
Corollary 3.
Assume $\omega_r(f, \xi) < \infty$, $0 < \xi \le 1$. Then,
$$\left| \Theta_{1,r,\xi}(f, x) - f(x) \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, G\left( \frac{t}{\xi} \right) dt,$$
which is attained at $x = 0$ by $f(x) = x^r$, where $r$ is even.
Remark 4.
We also have the uniform estimates
$$\left\| \Theta_{1,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k c^*_{k,\xi} \right\|_{\infty, x} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, G\left( \frac{t}{\xi} \right) dt,$$
and
$$\left\| \Theta_{1,r,\xi}(f) - f \right\|_{\infty} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, G\left( \frac{t}{\xi} \right) dt.$$
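The $n = 0$ estimate of Corollary 3 can also be tested numerically. The self-contained sketch below (assuming NumPy and SciPy; grids, parameters and the test function are illustrative, the modulus is a grid estimate, and the integral is truncated where the kernel mass is negligible) compares both sides of the inequality:

```python
# Numerical sanity check of Corollary 3 (n = 0): the pointwise error of
# Theta_{1,r,xi} stays below (1/xi) * int omega_r(f, |t|) G(t/xi) dt.
# Assumes NumPy/SciPy; grids and parameters are illustrative, and omega_r
# is a grid estimate, so both sides are approximate.
import numpy as np
from math import comb
from scipy.integrate import quad

mu, r, xi, f = 2.0, 2, 0.2, np.sin
phi = lambda x: 1.0 / (1.0 + np.exp(-mu * x))
G = lambda x: 0.5 * (phi(x + 1.0) - phi(x - 1.0))

a = np.zeros(r + 1)
for j in range(1, r + 1):
    a[j] = (-1) ** (r - j) * comb(r, j)          # n = 0, so j^(-n) = 1
a[0] = 1.0 - a[1:].sum()

def theta1(x):
    integrand = lambda t: sum(a[j] * f(x + j * t) for j in range(r + 1)) * G(t / xi) / xi
    return quad(integrand, -np.inf, 0.0)[0] + quad(integrand, 0.0, np.inf)[0]

def omega(h, xs=np.linspace(-6.0, 6.0, 121)):
    """Grid estimate of omega_r(f, h)."""
    ts = np.linspace(-h, h, 21)
    return max(abs(sum((-1) ** (r - j) * comb(r, j) * f(x + j * t)
                       for j in range(r + 1))) for t in ts for x in xs)

# kernel mass outside [-3, 3] is negligible for xi = 0.2, so truncate there
rhs = quad(lambda t: omega(abs(t)) * G(t / xi) / xi, -3.0, 3.0)[0]
lhs = max(abs(theta1(x) - f(x)) for x in np.linspace(-2.0, 2.0, 9))
print(lhs, rhs)    # the pointwise error should sit below the bound
```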
And the following holds.
Theorem 23.
Let $f \in C^n(\mathbb{R})$, $n \in \mathbb{Z}_+$. Assume also that $\omega_r(f^{(n)}, h) < \infty$, $\forall h > 0$. Then,
$$\left\| \Theta_{1,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k c^*_{k,\xi} \right\|_{\infty, x} \le \frac{\omega_r(f^{(n)}, \xi)}{n!} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r G\left( \frac{t}{\xi} \right) dt.$$
When $n = 0$, the sum on the left-hand side collapses.
Proof. 
Theorems 3 and 7. □
We continue similarly.
Theorem 24.
It holds that
$$\left| \Theta_{2,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \bar{c}_{k,\xi} \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, M_{q,\lambda}\left( \frac{t}{\xi} \right) dt,$$
$q, \lambda > 0$.
The last inequality, at $x = 0$, is attained by $f(x) = x^{r+n}$, $r, n \in \mathbb{N}$ with $r + n$ even.
Proof. 
Theorems 1 and 10. □
Corollary 4.
Assume that $\omega_r(f, \xi) < \infty$, $0 < \xi \le 1$. Then,
$$\left| \Theta_{2,r,\xi}(f, x) - f(x) \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, M_{q,\lambda}\left( \frac{t}{\xi} \right) dt,$$
which is attained at $x = 0$ by $f(x) = x^r$, where $r$ is even.
Remark 5.
We also have the uniform estimates
$$\left\| \Theta_{2,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \bar{c}_{k,\xi} \right\|_{\infty, x} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, M_{q,\lambda}\left( \frac{t}{\xi} \right) dt,$$
and
$$\left\| \Theta_{2,r,\xi}(f) - f \right\|_{\infty} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, M_{q,\lambda}\left( \frac{t}{\xi} \right) dt.$$
And the following holds.
Theorem 25.
Let $f \in C^n(\mathbb{R})$, $n \in \mathbb{Z}_+$. Assume also that $\omega_r(f^{(n)}, h) < \infty$, $\forall h > 0$. Then,
$$\left\| \Theta_{2,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \bar{c}_{k,\xi} \right\|_{\infty, x} \le \frac{\omega_r(f^{(n)}, \xi)}{n!} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r M_{q,\lambda}\left( \frac{t}{\xi} \right) dt.$$
When $n = 0$, the sum on the left-hand side collapses.
Proof. 
Theorems 3 and 11. □
We continue similarly.
Theorem 26.
It holds that
$$\left| \Theta_{3,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \gamma_{k,\xi} \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, \psi\left( \frac{t}{\xi} \right) dt.$$
The last inequality, at $x = 0$, is attained by $f(x) = x^{r+n}$, $r, n \in \mathbb{N}$ with $r + n$ even.
Proof. 
Theorems 1 and 13. □
Corollary 5.
Assume that $\omega_r(f, \xi) < \infty$, $0 < \xi \le 1$. Then,
$$\left| \Theta_{3,r,\xi}(f, x) - f(x) \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, \psi\left( \frac{t}{\xi} \right) dt,$$
which is attained at $x = 0$ by $f(x) = x^r$, where $r$ is even.
Remark 6.
We also have the uniform estimates
$$\left\| \Theta_{3,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \gamma_{k,\xi} \right\|_{\infty, x} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, \psi\left( \frac{t}{\xi} \right) dt,$$
and
$$\left\| \Theta_{3,r,\xi}(f) - f \right\|_{\infty} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, \psi\left( \frac{t}{\xi} \right) dt.$$
And the following holds.
Theorem 27.
Let $f \in C^n(\mathbb{R})$, $n \in \mathbb{Z}_+$. Assume also that $\omega_r(f^{(n)}, h) < \infty$, $\forall h > 0$. Then,
$$\left\| \Theta_{3,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \gamma_{k,\xi} \right\|_{\infty, x} \le \frac{\omega_r(f^{(n)}, \xi)}{n!} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r \psi\left( \frac{t}{\xi} \right) dt.$$
When $n = 0$, the sum on the left-hand side collapses.
Proof. 
Theorems 3 and 14. □
We continue in this fashion.
Theorem 28.
It holds that
$$\left| \Theta_{4,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \bar{\delta}_{k,\xi} \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, G_{q,\lambda}\left( \frac{t}{\xi} \right) dt,$$
$q, \lambda > 0$.
The last inequality, at $x = 0$, is attained by $f(x) = x^{r+n}$, $r, n \in \mathbb{N}$ with $r + n$ even.
Proof. 
Theorems 1 and 16. □
Corollary 6.
Assume that $\omega_r(f, \xi) < \infty$, $0 < \xi \le 1$. Then,
$$\left| \Theta_{4,r,\xi}(f, x) - f(x) \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, G_{q,\lambda}\left( \frac{t}{\xi} \right) dt,$$
which is attained at $x = 0$ by $f(x) = x^r$, where $r$ is even.
Remark 7.
We also have the uniform estimates
$$\left\| \Theta_{4,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \bar{\delta}_{k,\xi} \right\|_{\infty, x} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, G_{q,\lambda}\left( \frac{t}{\xi} \right) dt,$$
and
$$\left\| \Theta_{4,r,\xi}(f) - f \right\|_{\infty} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, G_{q,\lambda}\left( \frac{t}{\xi} \right) dt.$$
And the following holds.
Theorem 29.
Let $f \in C^n(\mathbb{R})$, $n \in \mathbb{Z}_+$. Assume also that $\omega_r(f^{(n)}, h) < \infty$, $\forall h > 0$. Then,
$$\left\| \Theta_{4,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \bar{\delta}_{k,\xi} \right\|_{\infty, x} \le \frac{\omega_r(f^{(n)}, \xi)}{n!} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r G_{q,\lambda}\left( \frac{t}{\xi} \right) dt.$$
When $n = 0$, the sum on the left-hand side collapses.
Proof. 
Theorems 3 and 17. □
The last group of results follows.
Theorem 30.
It holds that
$$\left| \Theta_{5,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \varepsilon_{k,\xi} \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, \Phi_{q,\beta}\left( \frac{t}{\xi} \right) dt,$$
$q, \beta > 0$.
The last inequality, at $x = 0$, is attained by $f(x) = x^{r+n}$, $r, n \in \mathbb{N}$ with $r + n$ even.
Proof. 
Theorems 1 and 19. □
Corollary 7.
Assume that $\omega_r(f, \xi) < \infty$, $0 < \xi \le 1$. Then,
$$\left| \Theta_{5,r,\xi}(f, x) - f(x) \right| \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, \Phi_{q,\beta}\left( \frac{t}{\xi} \right) dt,$$
which is attained at $x = 0$ by $f(x) = x^r$, where $r$ is even.
Remark 8.
We also have the uniform estimates
$$\left\| \Theta_{5,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \varepsilon_{k,\xi} \right\|_{\infty, x} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \tilde{G}_n(t) \, \Phi_{q,\beta}\left( \frac{t}{\xi} \right) dt,$$
and
$$\left\| \Theta_{5,r,\xi}(f) - f \right\|_{\infty} \le \frac{1}{\xi} \int_{-\infty}^{\infty} \omega_r(f, |t|) \, \Phi_{q,\beta}\left( \frac{t}{\xi} \right) dt.$$
And the following holds.
Theorem 31.
Let $f \in C^n(\mathbb{R})$, $n \in \mathbb{Z}_+$. Assume also that $\omega_r(f^{(n)}, h) < \infty$, $\forall h > 0$. Then,
$$\left\| \Theta_{5,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \varepsilon_{k,\xi} \right\|_{\infty, x} \le \frac{\omega_r(f^{(n)}, \xi)}{n!} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^r \Phi_{q,\beta}\left( \frac{t}{\xi} \right) dt.$$
When $n = 0$, the sum on the left-hand side collapses.
Proof. 
Theorems 3 and 20. □

5. Conclusions

Here, we presented the new idea of moving from the main tools of neural networks, the activation functions, to singular integral approximation. This is a rare case of applied mathematics feeding back into pure mathematics.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Anastassiou, G.A.; Gal, S. Convergence of generalized singular integrals to the unit, univariate case. Math. Inequal. Appl. 2000, 3, 511–518.
2. Gal, S.G. Remark on the degree of approximation of continuous functions by singular integrals. Math. Nachr. 1993, 164, 197–199.
3. Gal, S.G. Degree of approximation of continuous functions by some singular integrals. Rev. d'Anal. Numér. Théor. l'Approx. 1998, 27, 251–261.
4. Mohapatra, R.N.; Rodriguez, R.S. On the rate of convergence of singular integrals for Hölder continuous functions. Math. Nachr. 1990, 149, 117–124.
5. Anastassiou, G.; Mezei, R. Approximation by Singular Integrals; Cambridge Scientific Publishers: Cambridge, UK, 2012.
6. Anastassiou, G.A. Banach Space Valued Neural Network; Springer: Heidelberg, Germany; New York, NY, USA, 2023.
7. Anastassiou, G.A. Parametrized, Deformed and General Neural Networks; Springer: Heidelberg, Germany; New York, NY, USA, 2023.
8. Aral, A. On a generalized Gauss Weierstrass singular integral. Fasc. Math. 2005, 35, 23–33.
9. Aral, A. Pointwise approximation by the generalization of Picard and Gauss-Weierstrass singular integrals. J. Concr. Appl. Math. 2008, 6, 327–339.
10. Aral, A. On generalized Picard integral operators. In Advances in Summability and Approximation Theory; Springer: Singapore, 2018; pp. 157–168.
11. Aral, A.; Deniz, E.; Erbay, H. The Picard and Gauss-Weierstrass singular integrals in (p, q)-calculus. Bull. Malays. Math. Sci. Soc. 2020, 43, 1569–1583.
12. Aral, A.; Gal, S.G. q-generalizations of the Picard and Gauss-Weierstrass singular integrals. Taiwan. J. Math. 2008, 12, 2501–2515.
13. DeVore, R.A.; Lorentz, G.G. Constructive Approximation; Springer: Berlin, Germany; New York, NY, USA, 1993; Volume 303.