Article

Degree of Lp Approximation Using Activated Singular Integrals

by
George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
Symmetry 2024, 16(8), 1022; https://doi.org/10.3390/sym16081022
Submission received: 24 June 2024 / Revised: 10 July 2024 / Accepted: 22 July 2024 / Published: 10 August 2024
(This article belongs to the Special Issue Nonlinear Analysis and Its Applications in Symmetry II)

Abstract:
In this article we present the $L_p$, $p \ge 1$, approximation properties of activated singular integral operators over the real line. We establish their approximation to the unit operator with rates. The kernels here come from neural network activation functions, and we employ the related density functions. The derived inequalities involve the high-order $L_p$ modulus of smoothness.

1. Introduction

The approximation properties of singular integrals were established earlier in [1,2,3,4]. The classic monograph [5], Ch. 15, inspires us and is the driving force of this paper. Here we study some activated singular integral operators over $\mathbb{R}$ and determine the degree of their $L_p$, $p \ge 1$, approximation to the unit operator with rates, by the use of smooth functions. We derive related inequalities involving the high-order $L_p$, $p \ge 1$, modulus of smoothness. The operators under study are not in general positive. The surprising fact here is the reverse flow from applied mathematics to theoretical mathematics: our kernels are derived from density functions coming from activation functions used in neural network approximation; see [6,7]. The articles [8,9,10,11,12] are also of great interest and motivated the author. Given the recent intense mathematical activity on solving differential equations numerically by neural networks, our current work is expected to play a pivotal role, much as the earlier versions of singular integrals did in the classical setting.
Regarding the history of the topic, we refer to the monograph [5] from 2012, which was the first comprehensive work to address, in its entirety, the traditional theory of approximation of the identity-unit operator by singular integral operators. It presented the fundamental approximation features of the generic Picard, Gauss-Weierstrass, Poisson-Cauchy, and trigonometric singular integral operators over the real line; these are not positive linear operators. It specifically investigated the rate at which these operators converge to the unit operator, together with the associated simultaneous approximation. This is quantified, via inequalities, by the high-order modulus of smoothness of the high-order derivative of the engaged function. Some of these inequalities were shown to be sharp; in fact, they are attained.

2. Essential Background

Everything in this section comes from [5], Ch. 15. In the following we mention and deal with the smooth general singular integral operators $\Theta_{r,\xi}(f, x)$, defined as follows.
For $r \in \mathbb{N}$ and $n \in \mathbb{Z}_+$, we set
$$\alpha_j := \begin{cases} (-1)^{r-j} \binom{r}{j} j^{-n}, & j = 1, \dots, r, \\[4pt] 1 - \sum_{j=1}^{r} (-1)^{r-j} \binom{r}{j} j^{-n}, & j = 0, \end{cases}$$
that is, $\sum_{j=0}^{r} \alpha_j = 1$. Let $\xi > 0$, and let $\mu_\xi$ be Borel probability measures on $\mathbb{R}$.
Let $f \in C^n(\mathbb{R})$ with $f^{(n)} \in L_p(\mathbb{R})$, $1 \le p < \infty$; we define for $x \in \mathbb{R}$, $\xi > 0$ the integral
$$\Theta_{r,\xi}(f, x) := \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} f(x + jt)\, d\mu_\xi(t).$$
The $\Theta_{r,\xi}$ operators are not in general positive operators; see [5].
We notice that $\Theta_{r,\xi}(c, x) = c$, $c$ a constant, and
$$\Theta_{r,\xi}(f, x) - f(x) = \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} \left( f(x + jt) - f(x) \right) d\mu_\xi(t).$$
We need the $r$th $L_p$-modulus of smoothness
$$\omega_r(f^{(n)}, h)_p := \sup_{|t| \le h} \left\| \Delta_t^r f^{(n)}(x) \right\|_{p,x}, \quad h > 0,$$
where
$$\Delta_t^r f^{(n)}(x) := \sum_{j=0}^{r} (-1)^{r-j} \binom{r}{j} f^{(n)}(x + jt);$$
see [13], p. 44. Here, we have $\omega_r(f^{(n)}, h)_p < \infty$, $h > 0$.
We need to introduce
$$\delta_k := \sum_{j=0}^{r} \alpha_j j^k, \quad k = 1, \dots, n \in \mathbb{N}.$$
Call
$$\tau(w, x) := \sum_{j=0}^{r} \alpha_j j^n f^{(n)}(x + jw) - \delta_n f^{(n)}(x).$$
Notice also that
$$\sum_{j=1}^{r} (-1)^{r-j} \binom{r}{j} = -(-1)^r \binom{r}{0}.$$
According to [5], we get
$$\tau(w, x) = \Delta_w^r f^{(n)}(x).$$
Thus,
$$\left\| \tau(w, x) \right\|_{p,x} \le \omega_r(f^{(n)}, |w|)_p, \quad w \in \mathbb{R}.$$
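Before continuing, the combinatorics above can be made concrete. The following is a minimal numerical sketch (ours, not from [5]); it computes the $\alpha_j$, checks $\sum_{j=0}^{r} \alpha_j = 1$, evaluates the $\delta_k$, and implements the finite difference $\Delta_t^r f$, with illustrative values of $r$ and $n$.

```python
# Illustrative check of the coefficients alpha_j, delta_k and Delta_t^r f.
from math import comb

def alphas(r: int, n: int) -> list[float]:
    """alpha_j = (-1)^(r-j) C(r,j) j^(-n) for j = 1..r; alpha_0 completes the sum to 1."""
    a = [0.0] * (r + 1)
    for j in range(1, r + 1):
        a[j] = (-1) ** (r - j) * comb(r, j) * j ** (-n)
    a[0] = 1.0 - sum(a[1:])
    return a

def deltas(r: int, n: int) -> list[float]:
    """delta_k = sum_j alpha_j j^k, for k = 1..n."""
    a = alphas(r, n)
    return [sum(a[j] * j ** k for j in range(1, r + 1)) for k in range(1, n + 1)]

def delta_t_r(f, x: float, t: float, r: int) -> float:
    """Delta_t^r f(x) = sum_{j=0}^r (-1)^(r-j) C(r,j) f(x + j t)."""
    return sum((-1) ** (r - j) * comb(r, j) * f(x + j * t) for j in range(r + 1))

r, n = 3, 2
a = alphas(r, n)
assert abs(sum(a) - 1.0) < 1e-12          # sum_{j=0}^r alpha_j = 1
print("alpha:", a)
print("delta:", deltas(r, n))
# The r-th difference annihilates polynomials of degree < r:
print(delta_t_r(lambda x: x ** 2, 1.0, 0.1, r))   # ~ 0 for r = 3
```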
Using Taylor’s formula, one has
$$\sum_{j=0}^{r} \alpha_j \left( f(x + jt) - f(x) \right) = \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k t^k + R_n(0, t, x),$$
where
$$R_n(0, t, x) := \int_0^t \frac{(t - w)^{n-1}}{(n-1)!}\, \tau(w, x)\, dw, \quad n \in \mathbb{N}.$$
Assume
$$c_{k,\xi} := \int_{-\infty}^{\infty} t^k\, d\mu_\xi(t) \in \mathbb{R}, \quad k = 1, \dots, n.$$
Using the above terminology, we derive
$$\Delta(x) := \Theta_{r,\xi}(f; x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k c_{k,\xi} = R_n^*(x),$$
where
$$R_n^*(x) := \int_{-\infty}^{\infty} R_n(0, t, x)\, d\mu_\xi(t), \quad n \in \mathbb{N}.$$
We mention the first result.
Theorem 1
([5]). Let $p, q > 1$ be such that $\frac{1}{p} + \frac{1}{q} = 1$, $n \in \mathbb{N}$, and the rest as above. Furthermore, assume that
$$M_\xi := \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, d\mu_\xi(t) < \infty.$$
Then,
$$\| \Delta(x) \|_p \le \frac{1}{(n-1)! \left( q(n-1) + 1 \right)^{1/q} (rp+1)^{1/p}} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, d\mu_\xi(t) \right)^{1/p} \xi^{1/p}\, \omega_r(f^{(n)}, \xi)_p.$$
If $M_\xi \le \bar{\lambda}$ for all $\xi > 0$, where $\bar{\lambda} > 0$, then as $\xi \to 0$ we get that $\| \Delta(x) \|_p \to 0$.
The counterpart of Theorem 1 follows in case of p = 1 .
Theorem 2
([5]). Let $f \in C^n(\mathbb{R})$ and $f^{(n)} \in L_1(\mathbb{R})$, $n \in \mathbb{N}$. Assume that
$$\int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, d\mu_\xi(t) < \infty.$$
Then,
$$\| \Delta(x) \|_1 \le \frac{1}{(r+1)(n-1)!} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, d\mu_\xi(t) \right) \xi\, \omega_r(f^{(n)}, \xi)_1.$$
Additionally, assume that
$$\int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, d\mu_\xi(t) \le \bar{\lambda}, \quad \bar{\lambda} > 0,$$
for all $\xi > 0$. Hence, as $\xi \to 0$, we obtain $\| \Delta(x) \|_1 \to 0$.
The case n = 0 follows.
Proposition 1.
Let $p, q > 1$ be such that $\frac{1}{p} + \frac{1}{q} = 1$, and the rest as above. Assume that
$$\rho_\xi := \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} d\mu_\xi(t) < \infty.$$
Then,
$$\| \Theta_{r,\xi} f - f \|_p \le \omega_r(f, \xi)_p \left( \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} d\mu_\xi(t) \right)^{1/p}.$$
Additionally, assume that $\rho_\xi \le \bar{\lambda}$, $\bar{\lambda} > 0$, for all $\xi > 0$; then, as $\xi \to 0$, we obtain $\Theta_{r,\xi} \to$ unit operator $I$ in the $L_p$ norm, $p > 1$.
We finally need
Proposition 2.
Assume
$$\int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} d\mu_\xi(t) < \infty.$$
Then,
$$\| \Theta_{r,\xi} f - f \|_1 \le \omega_r(f, \xi)_1 \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} d\mu_\xi(t).$$
Additionally, assuming that
$$\int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} d\mu_\xi(t) \le \bar{\lambda}, \quad \bar{\lambda} > 0,$$
for all $\xi > 0$, we obtain as $\xi \to 0$ that $\Theta_{r,\xi} \to I$ in the $L_1$ norm.
We will apply the above theory to our activated singular integral operators; see Section 5.

3. Basics of Activation Functions

Here everything comes from [14].

3.1. On Richards’s Curve

Here, we follow [7], Chapter 1.
A Richards curve is
$$\varphi(x) = \frac{1}{1 + e^{-\mu x}}, \quad x \in \mathbb{R},\ \mu > 0,$$
which is strictly increasing on $\mathbb{R}$ and is a sigmoid function; in particular, it is a generalized logistic function. It is also an activation function in neural networks; see [7], Chapter 1.
It holds that
$$\lim_{x \to +\infty} \varphi(x) = 1 \quad \text{and} \quad \lim_{x \to -\infty} \varphi(x) = 0.$$
We consider the function
$$G(x) = \frac{1}{2} \left( \varphi(x + 1) - \varphi(x - 1) \right), \quad x \in \mathbb{R},$$
for which $G(x) > 0$ for all $x \in \mathbb{R}$.
It holds that
$$\varphi(0) = \frac{1}{2}, \qquad \varphi(-x) = 1 - \varphi(x),$$
and
$$G(-x) = G(x), \quad x \in \mathbb{R}.$$
We also have
$$G(0) = \frac{e^{\mu} - 1}{2 \left( e^{\mu} + 1 \right)}.$$
Moreover,
$$\lim_{x \to +\infty} G(x) = \lim_{x \to -\infty} G(x) = 0,$$
and $G$ is a bell-shaped symmetric function with maximum $G(0) = \frac{e^{\mu} - 1}{2(e^{\mu} + 1)}$.
Theorem 3.
It holds that
$$\sum_{i=-\infty}^{\infty} G(x - i) = 1, \quad \forall\, x \in \mathbb{R}.$$
Theorem 4.
It holds that
$$\int_{-\infty}^{\infty} G(x)\, dx = 1.$$
So, G is a density function.
We make
Remark 1.
So, we have
$$G(x) = \frac{1}{2} \left( \varphi(x + 1) - \varphi(x - 1) \right), \quad x \in \mathbb{R}.$$
(i) Let $x \ge 1$; that is, $0 \le x - 1 < x + 1$. Applying the mean value theorem, we get
$$G(x) = \frac{1}{2} \cdot 2 \cdot \varphi'(\eta) = \varphi'(\eta) = \frac{\mu e^{-\mu \eta}}{\left( 1 + e^{-\mu \eta} \right)^2}, \quad \mu > 0,$$
where $0 \le x - 1 < \eta < x + 1$.
Notice that
$$G(x) < \mu e^{-\mu \eta} < \mu e^{-\mu(x-1)}, \quad \forall\, x \ge 1.$$
(ii) Now, let $x \le -1$; that is, $x - 1 < x + 1 \le 0$. Applying again the mean value theorem, we get
$$G(x) = \frac{1}{2} \cdot 2 \cdot \varphi'(\eta) = \varphi'(\eta) = \frac{\mu e^{\mu \eta}}{\left( 1 + e^{\mu \eta} \right)^2},$$
where $x - 1 < \eta < x + 1 \le 0$.
Hence, we derive that
$$G(x) < \mu e^{\mu \eta} < \mu e^{\mu(x+1)}, \quad \forall\, x \le -1.$$
Consequently, we have proved that
$$G(x) < \mu e^{-\mu(|x| - 1)}, \quad \forall\, x \in (-\infty, -1] \cup [1, +\infty) = \mathbb{R} \setminus (-1, 1).$$
Let $0 < \xi \le 1$; it holds that
$$G\!\left( \frac{x}{\xi} \right) < \mu e^{-\mu \left( \frac{|x|}{\xi} - 1 \right)}, \quad \text{for } x \ge \xi \text{ or } x \le -\xi.$$
Clearly, by Theorem 4, we have that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} G\!\left( \frac{x}{\xi} \right) dx = 1,$$
so that $\frac{1}{\xi} G\!\left( \frac{x}{\xi} \right)$ is a density function. We set $d\mu_\xi(x) := \frac{1}{\xi} G\!\left( \frac{x}{\xi} \right) dx$; that is, $\mu_\xi$ is a Borel probability measure.
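As a quick sanity check, the following sketch (ours; the value $\mu = 1.5$ is an illustrative assumption) verifies Theorem 4 by quadrature, spot-checks the tail bound $G(x) < \mu e^{-\mu(|x|-1)}$, and illustrates the moment decay $c^*_{k,\xi} \to 0$ established in Theorem 5 below.

```python
# Numerical check of the density property of G, its tail bound, and moment decay.
import numpy as np
from scipy.integrate import quad

mu = 1.5  # illustrative choice of the Richards parameter

def G(x: float) -> float:
    phi = lambda u: 1.0 / (1.0 + np.exp(-mu * u))   # Richards curve
    return 0.5 * (phi(x + 1.0) - phi(x - 1.0))

print("total mass:", quad(G, -np.inf, np.inf)[0])   # ~ 1.0 (Theorem 4)

for x in (1.0, 2.5, 7.0):                            # tail bound outside (-1, 1)
    assert G(x) < mu * np.exp(-mu * (abs(x) - 1.0))

k = 2
for xi in (0.5, 0.1, 0.02):
    # truncate to |x| <= 60 xi: the kernel (1/xi) G(x/xi) decays like exp(-mu |x|/xi)
    c_k = quad(lambda x: x**k * G(x / xi) / xi, -60 * xi, 60 * xi)[0]
    print(f"xi = {xi}: c*_{{{k},xi}} = {c_k:.3e}")   # O(xi^k) -> 0
```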
We give the following essential result.
Theorem 5.
Let $0 < \xi \le 1$, and
$$c^*_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k\, G\!\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $c^*_{k,\xi}$ are finite and $c^*_{k,\xi} \to 0$ as $\xi \to 0$.
In fact, it holds that
$$|c^*_{k,\xi}| \le \left( 1 + \frac{2 e^{\mu}\, k!}{\mu^k} \right) \xi^k < \infty,$$
for $k = 1, \dots, n$.
Next we present
Theorem 6.
It holds that
$$\int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^{r} d\mu_\xi(t) < \infty; \quad r, n \in \mathbb{N},$$
for
$$d\mu_\xi(x) = \frac{1}{\xi}\, G\!\left( \frac{x}{\xi} \right) dx, \quad 0 < \xi \le 1.$$
Also, this integral converges to zero as $\xi \to 0$.
In fact, it holds that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} |x|^n \left( 1 + \frac{|x|}{\xi} \right)^{r} G\!\left( \frac{x}{\xi} \right) dx \le 2^{r-1} \left[ \left( 1 + \frac{2 e^{\mu}\, n!}{\mu^n} \right) + \left( 1 + \frac{2 e^{\mu}\, (n+r)!}{\mu^{n+r}} \right) \right] \xi^n < \infty.$$

3.2. On the q-Deformed and λ-Parametrized Hyperbolic Tangent Function $g_{q,\lambda}$

We consider the activation function $g_{q,\lambda}$ and study its related properties; all of the basics come from [7], Ch. 17.
Let the activation function be
$$g_{q,\lambda}(x) = \frac{e^{\lambda x} - q e^{-\lambda x}}{e^{\lambda x} + q e^{-\lambda x}}, \quad \lambda, q > 0,\ x \in \mathbb{R}.$$
It holds that
$$g_{q,\lambda}(0) = \frac{1 - q}{1 + q},$$
and
$$g_{q,\lambda}(-x) = -g_{\frac{1}{q},\lambda}(x), \quad x \in \mathbb{R},$$
with
$$g_{q,\lambda}(+\infty) = 1, \qquad g_{q,\lambda}(-\infty) = -1.$$
We consider the function
$$M_{q,\lambda}(x) := \frac{1}{4} \left( g_{q,\lambda}(x + 1) - g_{q,\lambda}(x - 1) \right) > 0,$$
for all $x \in \mathbb{R}$; $q, \lambda > 0$. We have $M_{q,\lambda}(\pm\infty) = 0$, so that the $x$-axis is a horizontal asymptote.
It holds that
$$M_{q,\lambda}(-x) = M_{\frac{1}{q},\lambda}(x), \quad \forall\, x \in \mathbb{R};\ q, \lambda > 0,$$
and
$$M_{\frac{1}{q},\lambda}(-x) = M_{q,\lambda}(x), \quad \forall\, x \in \mathbb{R}.$$
The maximum of $M_{q,\lambda}$ is
$$M_{q,\lambda}\!\left( \frac{\ln q}{2\lambda} \right) = \frac{\tanh \lambda}{2}, \quad \lambda > 0.$$
Theorem 7.
We have that
$$\sum_{i=-\infty}^{\infty} M_{q,\lambda}(x - i) = 1, \quad \forall\, x \in \mathbb{R};\ \lambda, q > 0.$$
Theorem 8.
It holds that
$$\int_{-\infty}^{\infty} M_{q,\lambda}(x)\, dx = 1, \quad \lambda, q > 0.$$
So, $M_{q,\lambda}$ is a density function on $\mathbb{R}$; $\lambda, q > 0$.
Remark 2.
(i) Let $x \ge 1$; that is, $0 \le x - 1 < x + 1$. By the mean value theorem we obtain
$$M_{q,\lambda}(x) = \frac{1}{4} \left( g_{q,\lambda}(x + 1) - g_{q,\lambda}(x - 1) \right) = \frac{1}{4} \cdot 2 \cdot \frac{4 q \lambda\, e^{2\lambda \eta}}{\left( e^{2\lambda \eta} + q \right)^2} = \frac{2 q \lambda\, e^{2\lambda \eta}}{\left( e^{2\lambda \eta} + q \right)^2},$$
for some $0 \le x - 1 < \eta < x + 1$; $\lambda, q > 0$.
But $e^{2\lambda \eta} < e^{2\lambda \eta} + q$, so that
$$M_{q,\lambda}(x) < \frac{2 q \lambda \left( e^{2\lambda \eta} + q \right)}{\left( e^{2\lambda \eta} + q \right)^2} = \frac{2 q \lambda}{e^{2\lambda \eta} + q} < \frac{2 q \lambda}{e^{2\lambda(x-1)} + q} < 2 q \lambda\, e^{-2\lambda(x-1)}, \quad \forall\, x \ge 1.$$
That is,
$$M_{q,\lambda}(x) < 2 q \lambda\, e^{-2\lambda(x-1)}, \quad \forall\, x \ge 1.$$
Set $\mu := 2\lambda$; then,
$$M_{q,\lambda}(x) < q \mu\, e^{-\mu(x-1)}, \quad \forall\, x \ge 1.$$
(ii) Let now $x \le -1$; that is, $x - 1 < x + 1 \le 0$. Again, by the mean value theorem,
$$M_{q,\lambda}(x) = \frac{2 q \lambda\, e^{2\lambda \eta}}{\left( e^{2\lambda \eta} + q \right)^2},$$
for some $x - 1 < \eta < x + 1 \le 0$; $\lambda, q > 0$.
We have
$$e^{2\lambda(x-1)} < e^{2\lambda \eta} < e^{2\lambda(x+1)},$$
and, since $e^{2\lambda \eta} + q > q$,
$$\frac{e^{2\lambda \eta}}{\left( e^{2\lambda \eta} + q \right)^2} < \frac{e^{2\lambda \eta}}{q^2}.$$
Therefore, it holds that
$$M_{q,\lambda}(x) < \frac{2 \lambda}{q}\, e^{2\lambda \eta} < \frac{2 \lambda}{q}\, e^{2\lambda(x+1)}, \quad \forall\, x \le -1.$$
Set $\mu := 2\lambda$; then,
$$M_{q,\lambda}(x) < \frac{\mu}{q}\, e^{\mu(x+1)}, \quad \forall\, x \le -1.$$
We have proved that
$$M_{q,\lambda}(x) < \max\!\left\{ q, \frac{1}{q} \right\} \mu\, e^{-\mu(|x| - 1)},$$
for all $x \in (-\infty, -1] \cup [1, +\infty) = \mathbb{R} \setminus (-1, 1)$.
Let $0 < \xi \le 1$; it holds that
$$M_{q,\lambda}\!\left( \frac{x}{\xi} \right) < \max\!\left\{ q, \frac{1}{q} \right\} \mu\, e^{-\mu \left( \frac{|x|}{\xi} - 1 \right)}, \quad \text{for } x \ge \xi \text{ or } x \le -\xi.$$
By Theorem 8, we have
$$\frac{1}{\xi} \int_{-\infty}^{\infty} M_{q,\lambda}\!\left( \frac{x}{\xi} \right) dx = 1,$$
so that $\frac{1}{\xi} M_{q,\lambda}\!\left( \frac{x}{\xi} \right)$ is a density function, and we set
$$d\mu_\xi(x) := \frac{1}{\xi} M_{q,\lambda}\!\left( \frac{x}{\xi} \right) dx;$$
that is, $\mu_\xi$ is a Borel probability measure.
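The identities above are easy to confirm numerically; in this sketch (ours, with illustrative parameters $q = 2$, $\lambda = 0.7$) we check the symmetry $M_{q,\lambda}(-x) = M_{\frac{1}{q},\lambda}(x)$, the maximum value $\frac{\tanh \lambda}{2}$ at $x = \frac{\ln q}{2\lambda}$, and the unit total mass of Theorem 8.

```python
# Numerical confirmation of the M_{q,lambda} identities and Theorem 8.
import numpy as np
from scipy.integrate import quad

def g(x, q, lam):   # q-deformed, lambda-parametrized hyperbolic tangent
    return (np.exp(lam * x) - q * np.exp(-lam * x)) / (np.exp(lam * x) + q * np.exp(-lam * x))

def M(x, q, lam):
    return 0.25 * (g(x + 1, q, lam) - g(x - 1, q, lam))

q, lam = 2.0, 0.7                                            # illustrative parameters
assert np.isclose(M(-1.3, q, lam), M(1.3, 1 / q, lam))       # symmetry
x0 = np.log(q) / (2 * lam)
assert np.isclose(M(x0, q, lam), np.tanh(lam) / 2)           # maximum value
print("total mass:", quad(lambda x: M(x, q, lam), -np.inf, np.inf)[0])  # ~ 1.0
```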
We give
Theorem 9.
Let
$$\bar{c}_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k\, M_{q,\lambda}\!\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $\bar{c}_{k,\xi}$ are finite and $\bar{c}_{k,\xi} \to 0$ as $\xi \to 0$.
In fact, it holds that
$$|\bar{c}_{k,\xi}| \le \left( 1 + \left( q + \frac{1}{q} \right) \frac{e^{\mu}\, k!}{\mu^k} \right) \xi^k < \infty, \quad k = 1, \dots, n.$$
It also follows
Theorem 10.
It holds that ($\lambda, q > 0$; $r, n \in \mathbb{N}$; $0 < \xi \le 1$)
$$\frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^{r} M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le 2^{r-1} \left[ \left( 1 + \left( q + \frac{1}{q} \right) \frac{e^{\mu}\, n!}{\mu^n} \right) + \left( 1 + \left( q + \frac{1}{q} \right) \frac{e^{\mu}\, (n+r)!}{\mu^{n+r}} \right) \right] \xi^n < \infty,$$
and it converges to zero as $\xi \to 0$.

3.3. On the Gudermannian Generated Activation Function

Here, we follow [6], Ch. 2.
Let the related normalized generator sigmoid function be
$$f(x) := \frac{8}{\pi} \int_0^x \frac{1}{e^t + e^{-t}}\, dt, \quad x \in \mathbb{R},$$
and the neural network activation function be
$$\psi(x) := \frac{1}{4} \left( f(x + 1) - f(x - 1) \right) > 0, \quad x \in \mathbb{R}.$$
We mention
Theorem 11.
It holds that
$$\int_{-\infty}^{\infty} \psi(x)\, dx = 1,$$
so that $\psi(x)$ is a density function.
By [6], p. 49, we have that
$$\psi(x) < \frac{2}{\pi \cosh(x - 1)}, \quad \forall\, x \ge 1.$$
But
$$\frac{1}{\cosh(x - 1)} = \frac{2}{e^{x-1} + e^{-(x-1)}} < \frac{2}{e^{x-1}} = 2 e^{-(x-1)}, \quad \forall\, x \in \mathbb{R}.$$
Therefore, it holds that
$$\psi(x) < \frac{4}{\pi}\, e^{-(x-1)} = \frac{4e}{\pi}\, e^{-x}, \quad \forall\, x \ge 1.$$
So, here,
$$d\mu_\xi(x) = \frac{1}{\xi}\, \psi\!\left( \frac{x}{\xi} \right) dx, \quad 0 < \xi \le 1,$$
is the related Borel probability measure.
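Since $f$ is itself defined through an integral, evaluating $\psi$ requires quadrature; the following sketch (ours, with the constant $\frac{8}{\pi}$ taken from the definition above) does so and spot-checks the decay bounds just quoted.

```python
# Numerical evaluation of the Gudermannian-generated psi and its tail bounds.
import numpy as np
from scipy.integrate import quad

def f(x: float) -> float:
    return (8 / np.pi) * quad(lambda t: 1 / (np.exp(t) + np.exp(-t)), 0, x)[0]

def psi(x: float) -> float:
    return 0.25 * (f(x + 1) - f(x - 1))

for x in (1.0, 3.0, 6.0):
    assert psi(x) < 2 / (np.pi * np.cosh(x - 1))        # bound from [6], p. 49
    assert psi(x) < (4 / np.pi) * np.exp(-(x - 1))      # exponential tail
assert np.isclose(psi(-2.0), psi(2.0))                  # psi is even (f is odd)
print(psi(0.0), psi(2.0))
```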
We give the following results; their proofs, being similar to those of Theorems 5 and 6, are omitted.
Theorem 12.
Let $0 < \xi \le 1$, and
$$\gamma_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k\, \psi\!\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $\gamma_{k,\xi}$ are finite and $\gamma_{k,\xi} \to 0$ as $\xi \to 0$.
Theorem 13.
It holds that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^{r} \psi\!\left( \frac{t}{\xi} \right) dt < \infty;$$
$r, n \in \mathbb{N}$; $0 < \xi \le 1$.
Also, this integral converges to zero as $\xi \to 0$.

3.4. On the q-Deformed and λ-Parametrized Logistic-Type Activation Function

Everything here comes from [7], Ch. 15.
The activation function now is
$$\varphi_{q,\lambda}(x) := \frac{1}{1 + q e^{-\lambda x}}, \quad x \in \mathbb{R},$$
where q , λ > 0 .
The density function here will be
$$G_{q,\lambda}(x) := \frac{1}{2} \left( \varphi_{q,\lambda}(x + 1) - \varphi_{q,\lambda}(x - 1) \right) > 0, \quad x \in \mathbb{R}.$$
We mention
Theorem 14.
It holds that
$$\int_{-\infty}^{\infty} G_{q,\lambda}(x)\, dx = 1.$$
By [7], p. 373, we have
$$G_{q,\lambda}(x) < q \lambda\, e^{-\lambda(x-1)}, \quad \forall\, x \ge 1.$$
So, here,
$$d\mu_\xi(x) = \frac{1}{\xi}\, G_{q,\lambda}\!\left( \frac{x}{\xi} \right) dx, \quad 0 < \xi \le 1,$$
is the related Borel probability measure.
We give the following results; their proofs, being similar to those of Theorems 9 and 10, are omitted.
Theorem 15.
Let
$$\bar{\delta}_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k\, G_{q,\lambda}\!\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $\bar{\delta}_{k,\xi}$ are finite and $\bar{\delta}_{k,\xi} \to 0$ as $\xi \to 0$.
Theorem 16.
It holds that
$$I_{G_{q,\lambda}}(\xi) := \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^{r} G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt < \infty,$$
where $\lambda, q > 0$; $r, n \in \mathbb{N}$; $0 < \xi \le 1$.
Also, $I_{G_{q,\lambda}}(\xi) \to 0$ as $\xi \to 0$.

3.5. On the q-Deformed and β-Parametrized Half Hyperbolic Tangent Function $\varphi_{q,\beta}$

Everything here comes from [7], Ch. 19.
The activation function now is
$$\varphi_{q,\beta}(t) := \frac{1 - q e^{-\beta t}}{1 + q e^{-\beta t}}, \quad t \in \mathbb{R},$$
where q , β > 0 .
The corresponding density function will be
$$\Phi_{q,\beta}(x) := \frac{1}{4} \left( \varphi_{q,\beta}(x + 1) - \varphi_{q,\beta}(x - 1) \right) > 0, \quad x \in \mathbb{R}.$$
It holds
Theorem 17.
$$\int_{-\infty}^{\infty} \Phi_{q,\beta}(x)\, dx = 1.$$
By [7], p. 481, we have that
$$\Phi_{q,\beta}(x) < q \beta\, e^{-\beta(x-1)}, \quad \forall\, x \ge 1.$$
Thus, here,
$$d\mu_\xi(x) = \frac{1}{\xi}\, \Phi_{q,\beta}\!\left( \frac{x}{\xi} \right) dx, \quad 0 < \xi \le 1,$$
is the related Borel probability measure.
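A brief combined check (ours; the parameter values are illustrative) confirms numerically that both remaining kernels, $G_{q,\lambda}$ of Section 3.4 and $\Phi_{q,\beta}$ above, are densities, i.e., Theorems 14 and 17.

```python
# Numerical check that G_{q,lambda} and Phi_{q,beta} both have unit mass.
import numpy as np
from scipy.integrate import quad

q, lam, beta = 1.8, 1.2, 0.9                        # illustrative parameters
phi_log  = lambda x: 1 / (1 + q * np.exp(-lam * x))                 # logistic type
phi_half = lambda t: (1 - q * np.exp(-beta * t)) / (1 + q * np.exp(-beta * t))
G_ql   = lambda x: 0.50 * (phi_log(x + 1) - phi_log(x - 1))
Phi_qb = lambda x: 0.25 * (phi_half(x + 1) - phi_half(x - 1))

print("G_{q,lambda} mass:", quad(G_ql, -np.inf, np.inf)[0])    # ~ 1.0 (Theorem 14)
print("Phi_{q,beta} mass:", quad(Phi_qb, -np.inf, np.inf)[0])  # ~ 1.0 (Theorem 17)
```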
We state the following results; their proofs, being similar to those of Theorems 9 and 10, are omitted.
Theorem 18.
Let
$$\varepsilon_{k,\xi} := \frac{1}{\xi} \int_{-\infty}^{\infty} x^k\, \Phi_{q,\beta}\!\left( \frac{x}{\xi} \right) dx, \quad k = 1, \dots, n \in \mathbb{N}.$$
Then, the $\varepsilon_{k,\xi}$ are finite, and $\varepsilon_{k,\xi} \to 0$ as $\xi \to 0$.
Theorem 19.
It holds that
$$I_{\Phi_{q,\beta}}(\xi) := \frac{1}{\xi} \int_{-\infty}^{\infty} |t|^n \left( 1 + \frac{|t|}{\xi} \right)^{r} \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt < \infty,$$
where $q, \beta > 0$; $r, n \in \mathbb{N}$; $0 < \xi \le 1$.
Also, $I_{\Phi_{q,\beta}}(\xi) \to 0$ as $\xi \to 0$.

4. More on Activation Probability Measures

We present
Theorem 20.
Let $p > 1$, $r \in \mathbb{N}$, $0 < \xi \le 1$, $n \in \mathbb{N}$, $l := \max\{r, n\}$, $\lceil \cdot \rceil$ the ceiling function, and $h := \lceil 2lp \rceil + 1$. It holds that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, G\!\left( \frac{t}{\xi} \right) dt \le 2^h \left( 1 + \left( 1 + \frac{2 e^{\mu}\, h!}{\mu^h} \right) \right) < +\infty.$$
Proof. 
We have, in general,
$$M_\xi := \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, d\mu_\xi(t) \le \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} + 1 \right) (1 + |t|)^{np-1}\, d\mu_\xi(t)$$
$$\le 2 \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} (1 + |t|)^{np-1}\, d\mu_\xi(t) \le 2 \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} \left( 1 + \frac{|t|}{\xi} \right)^{np-1} d\mu_\xi(t)$$
(with $l := \max\{r, n\}$, so that $(r + n)p \le 2lp \le h$)
$$\le 2 \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{h} d\mu_\xi(t) \le 2 \cdot 2^{h-1} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|^h}{\xi^h} \right) d\mu_\xi(t) = 2^h \left( 1 + \frac{1}{\xi^h} \int_{-\infty}^{\infty} |t|^h\, d\mu_\xi(t) \right)$$
(setting $d\mu_\xi(x) = \frac{1}{\xi} G\!\left( \frac{x}{\xi} \right) dx$)
$$= 2^h \left( 1 + \frac{1}{\xi^h} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |x|^h\, G\!\left( \frac{x}{\xi} \right) dx \right) \le 2^h \left( 1 + \frac{1}{\xi^h} \left( 1 + \frac{2 e^{\mu}\, h!}{\mu^h} \right) \xi^h \right) = 2^h \left( 1 + \left( 1 + \frac{2 e^{\mu}\, h!}{\mu^h} \right) \right) < +\infty,$$
by the moment estimate of Theorem 5 applied with $k = h$. □
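The chain of elementary estimates above is easy to test numerically. The following sketch (ours, with sample parameter values) computes the left-hand integral of Theorem 20 by quadrature and confirms it stays below the closed-form bound.

```python
# Numerical sanity check of the Theorem 20 bound for sample parameters.
import numpy as np
from math import ceil, factorial
from scipy.integrate import quad

mu, r, n, p, xi = 1.5, 2, 2, 1.5, 0.3               # illustrative values
l = max(r, n)
h = ceil(2 * l * p) + 1

def G(x):
    phi = lambda u: 1 / (1 + np.exp(-mu * u))
    return 0.5 * (phi(x + 1) - phi(x - 1))

lhs = quad(lambda t: ((1 + abs(t) / xi) ** (r * p + 1) - 1)
                     * abs(t) ** (n * p - 1) * G(t / xi) / xi,
           -np.inf, np.inf)[0]
bound = 2 ** h * (1 + (1 + 2 * np.exp(mu) * factorial(h) / mu ** h))
print(f"lhs = {lhs:.4f} <= bound = {bound:.4f}:", lhs <= bound)
```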
We continue with
Theorem 21.
Let $r, n \in \mathbb{N}$, $0 < \xi \le 1$. It holds that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, G\!\left( \frac{t}{\xi} \right) dt \le 2^{r+n} \left( 1 + \left( 1 + \frac{2 e^{\mu}\, (r+n)!}{\mu^{r+n}} \right) \right) < +\infty.$$
Proof. 
We have that
$$\int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, d\mu_\xi(t) \le \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} + 1 \right) (1 + |t|)^{n-1}\, d\mu_\xi(t)$$
$$\le 2 \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r+1} \left( 1 + \frac{|t|}{\xi} \right)^{n-1} d\mu_\xi(t) = 2 \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r+n} d\mu_\xi(t)$$
$$\le 2 \cdot 2^{r+n-1} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|^{r+n}}{\xi^{r+n}} \right) d\mu_\xi(t) = 2^{r+n} \left( 1 + \frac{1}{\xi^{r+n}} \int_{-\infty}^{\infty} |t|^{r+n}\, d\mu_\xi(t) \right)$$
(at $d\mu_\xi(x) = \frac{1}{\xi} G\!\left( \frac{x}{\xi} \right) dx$)
$$= 2^{r+n} \left( 1 + \frac{1}{\xi^{r+n}} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |x|^{r+n}\, G\!\left( \frac{x}{\xi} \right) dx \right) \le 2^{r+n} \left( 1 + \frac{1}{\xi^{r+n}} \left( 1 + \frac{2 e^{\mu}\, (r+n)!}{\mu^{r+n}} \right) \xi^{r+n} \right)$$
$$= 2^{r+n} \left( 1 + \left( 1 + \frac{2 e^{\mu}\, (r+n)!}{\mu^{r+n}} \right) \right) < +\infty,$$
again by the moment estimate of Theorem 5 with $k = r + n$. □
We continue with
Proposition 3.
Let $r \in \mathbb{N}$. It holds that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} G\!\left( \frac{t}{\xi} \right) dt \le 2^{r-1} \left( 1 + \left( 1 + \frac{2 e^{\mu}\, r!}{\mu^r} \right) \right) < +\infty.$$
Proof. 
We have that
$$\int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} d\mu_\xi(t) \le 2^{r-1} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|^r}{\xi^r} \right) d\mu_\xi(t) = 2^{r-1} \left( 1 + \frac{1}{\xi^r} \int_{-\infty}^{\infty} |t|^r\, d\mu_\xi(t) \right)$$
(at $d\mu_\xi(x) = \frac{1}{\xi} G\!\left( \frac{x}{\xi} \right) dx$)
$$= 2^{r-1} \left( 1 + \frac{1}{\xi^r} \cdot \frac{1}{\xi} \int_{-\infty}^{\infty} |x|^r\, G\!\left( \frac{x}{\xi} \right) dx \right) \le 2^{r-1} \left( 1 + \frac{1}{\xi^r} \left( 1 + \frac{2 e^{\mu}\, r!}{\mu^r} \right) \xi^r \right) = 2^{r-1} \left( 1 + \left( 1 + \frac{2 e^{\mu}\, r!}{\mu^r} \right) \right) < +\infty,$$
by the moment estimate of Theorem 5 with $k = r$. □
Proposition 4.
Let $r \in \mathbb{N}$, $p > 1$, and $m := \lceil rp \rceil \in \mathbb{N}$. Then,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} G\!\left( \frac{t}{\xi} \right) dt \le 2^{m-1} \left( 1 + \left( 1 + \frac{2 e^{\mu}\, m!}{\mu^m} \right) \right) < +\infty.$$
Proof. 
We have that
$$\int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} d\mu_\xi(t) \le \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{\lceil rp \rceil} d\mu_\xi(t)$$
(call $m := \lceil rp \rceil \in \mathbb{N}$; at $d\mu_\xi(x) = \frac{1}{\xi} G\!\left( \frac{x}{\xi} \right) dx$)
$$= \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|x|}{\xi} \right)^{m} G\!\left( \frac{x}{\xi} \right) dx < +\infty,$$
as in the proof of Proposition 3. □
We continue with the following results.
Theorem 22.
All as in Theorem 20. Then,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le 2^h \left( 1 + \left( 1 + \left( q + \frac{1}{q} \right) \frac{e^{\mu}\, h!}{\mu^h} \right) \right) < +\infty,$$
where $q, \lambda > 0$.
Proof. 
Similar to the proof of Theorem 20, now using the moment estimate of Theorem 9. □
Theorem 23.
Let $r, n \in \mathbb{N}$, $0 < \xi \le 1$. Then,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le 2^{r+n} \left( 1 + \left( 1 + \left( q + \frac{1}{q} \right) \frac{e^{\mu}\, (r+n)!}{\mu^{r+n}} \right) \right) < +\infty.$$
Proof. 
Similar to the proof of Theorem 21, now using Theorem 9. □
Proposition 5.
Let $r \in \mathbb{N}$. It holds that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le 2^{r-1} \left( 1 + \left( 1 + \left( q + \frac{1}{q} \right) \frac{e^{\mu}\, r!}{\mu^r} \right) \right) < +\infty.$$
Proof. 
Similar to the proof of Proposition 3, now using Theorem 9. □
Proposition 6.
Let $r \in \mathbb{N}$, $p > 1$, and $m := \lceil rp \rceil$. Then,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le 2^{m-1} \left( 1 + \left( 1 + \left( q + \frac{1}{q} \right) \frac{e^{\mu}\, m!}{\mu^m} \right) \right) < +\infty.$$
Proof. 
Similar to the proof of Proposition 4, now using Theorem 9. □
We continue with more related results.
Theorem 24.
Let $p > 1$, $r \in \mathbb{N}$, $0 < \xi \le 1$, $n \in \mathbb{N}$. Then, there exists $\lambda_1 > 0$ such that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, \psi\!\left( \frac{t}{\xi} \right) dt \le \lambda_1.$$
Proof. 
Similar to Theorem 20. □
Theorem 25.
Let $r, n \in \mathbb{N}$, $0 < \xi \le 1$. Then, there exists $\lambda_2 > 0$ such that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, \psi\!\left( \frac{t}{\xi} \right) dt \le \lambda_2.$$
Proof. 
Similar to Theorem 21. □
Proposition 7.
Let $r \in \mathbb{N}$. Then, for some $\lambda_3 > 0$,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} \psi\!\left( \frac{t}{\xi} \right) dt \le \lambda_3.$$
Proof. 
As in Proposition 3. □
Proposition 8.
Let $r \in \mathbb{N}$, $p > 1$. Then, for some $\lambda_4 > 0$,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} \psi\!\left( \frac{t}{\xi} \right) dt \le \lambda_4.$$
Proof. 
As in Proposition 4. □
More needed results follow.
Theorem 26.
Let $p > 1$, $r \in \mathbb{N}$, $0 < \xi \le 1$, $n \in \mathbb{N}$; $q, \lambda > 0$. Then, there exists $\rho_1 > 0$ such that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le \rho_1.$$
Proof. 
Similar to Theorem 22. □
Theorem 27.
Let $r, n \in \mathbb{N}$, $0 < \xi \le 1$. Then, there exists $\rho_2 > 0$ such that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le \rho_2.$$
Proof. 
Similar to Theorem 23. □
Proposition 9.
Let $r \in \mathbb{N}$. Then, for some $\rho_3 > 0$,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le \rho_3.$$
Proof. 
As in Proposition 5. □
Proposition 10.
Let $r \in \mathbb{N}$, $p > 1$. Then, for some $\rho_4 > 0$,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \le \rho_4.$$
Proof. 
As in Proposition 6. □
Furthermore, we have the following.
Theorem 28.
Let $p > 1$, $r \in \mathbb{N}$, $0 < \xi \le 1$, $n \in \mathbb{N}$; $q, \beta > 0$. Then, there exists $\psi_1 > 0$ such that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt \le \psi_1.$$
Proof. 
Similar to Theorem 22. □
Theorem 29.
Let $r, n \in \mathbb{N}$, $0 < \xi \le 1$. Then, there exists $\psi_2 > 0$ such that
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt \le \psi_2.$$
Proof. 
Similar to Theorem 23. □
Proposition 11.
Let $r \in \mathbb{N}$. Then, for some $\psi_3 > 0$,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt \le \psi_3.$$
Proof. 
As in Proposition 5. □
Proposition 12.
Let $r \in \mathbb{N}$, $p > 1$. Then, for some $\psi_4 > 0$,
$$\frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt \le \psi_4.$$
Proof. 
As in Proposition 6. □

5. Main Results

Here, we describe the $L_p$, $p \ge 1$, approximation properties of the following activated singular integral operators, which are special cases of $\Theta_{r,\xi}(f, x)$ of Section 2. Their definitions are based on Section 3 and Section 4. Basically, we apply the results listed in Section 2.
Definition 1.
Let $f : \mathbb{R} \to \mathbb{R}$ be a Borel measurable function, let the $\alpha_j$ be as in Section 2, and let $x \in \mathbb{R}$, $0 < \xi \le 1$.
We call
(1)
$$\Theta_{1,r,\xi}(f, x) = \frac{1}{\xi} \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} f(x + jt)\, G\!\left( \frac{t}{\xi} \right) dt,$$
(2)
$$\Theta_{2,r,\xi}(f, x) = \frac{1}{\xi} \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} f(x + jt)\, M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt, \quad q, \lambda > 0,$$
(3)
$$\Theta_{3,r,\xi}(f, x) = \frac{1}{\xi} \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} f(x + jt)\, \psi\!\left( \frac{t}{\xi} \right) dt,$$
(4)
$$\Theta_{4,r,\xi}(f, x) = \frac{1}{\xi} \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} f(x + jt)\, G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt, \quad q, \lambda > 0,$$
and
(5)
$$\Theta_{5,r,\xi}(f, x) = \frac{1}{\xi} \sum_{j=0}^{r} \alpha_j \int_{-\infty}^{\infty} f(x + jt)\, \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt, \quad q, \beta > 0.$$
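To make Definition 1 concrete, here is a minimal sketch (ours, not code from the paper) of $\Theta_{1,r,\xi}$ with the Richards-curve kernel $G$ and an assumed $\mu = 1.5$; the $t$-integral is truncated to $|t| \le 50\xi$, which is harmless given the exponential tail of $G$. Run on $f = \sin$, the grid error shrinks as $\xi \to 0$, illustrating the convergence $\Theta_{1,r,\xi} \to I$ stated below.

```python
# Sketch of the activated operator Theta_{1,r,xi} of Definition 1, item (1).
import numpy as np
from math import comb
from scipy.integrate import quad

mu = 1.5  # illustrative Richards parameter

def G(x):
    phi = lambda u: 1 / (1 + np.exp(-mu * u))
    return 0.5 * (phi(x + 1) - phi(x - 1))

def alphas(r, n):
    a = [0.0] * (r + 1)
    for j in range(1, r + 1):
        a[j] = (-1) ** (r - j) * comb(r, j) * j ** (-n)
    a[0] = 1.0 - sum(a[1:])
    return a

def theta1(f, x, r, n, xi):
    """Theta_{1,r,xi}(f, x) = (1/xi) sum_j alpha_j int f(x + j t) G(t/xi) dt."""
    a = alphas(r, n)
    # truncate to |t| <= 50 xi: the kernel decays like exp(-mu |t| / xi)
    return sum(aj * quad(lambda t: f(x + j * t) * G(t / xi) / xi,
                         -50 * xi, 50 * xi)[0]
               for j, aj in enumerate(a))

f = np.sin
for xi in (0.5, 0.1, 0.02):
    errs = [abs(theta1(f, x, r=2, n=2, xi=xi) - f(x))
            for x in np.linspace(-3.0, 3.0, 25)]
    print(f"xi = {xi}: max grid error = {max(errs):.2e}")   # decreases with xi
```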
We give the following results, grouped by operator.
Theorem 30.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$, and $n \in \mathbb{N}$.
Call
$$\Delta_1(x) := \Theta_{1,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k c^*_{k,\xi}.$$
Then,
$$\| \Delta_1(x) \|_p \le \frac{1}{(n-1)! \left( q(n-1) + 1 \right)^{1/q} (rp+1)^{1/p}} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, G\!\left( \frac{t}{\xi} \right) dt \right)^{1/p} \omega_r(f^{(n)}, \xi)_p,$$
and $\| \Delta_1(x) \|_p \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 1, 5, and 20. □
Theorem 31.
Let $f \in C^n(\mathbb{R})$ with $f^{(n)} \in L_1(\mathbb{R})$, $n \in \mathbb{N}$. Then,
$$\| \Delta_1(x) \|_1 \le \frac{1}{(r+1)(n-1)!} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, G\!\left( \frac{t}{\xi} \right) dt \right) \omega_r(f^{(n)}, \xi)_1,$$
and $\| \Delta_1(x) \|_1 \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 2, 5 and 21. □
Proposition 13.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$. Then,
$$\| \Theta_{1,r,\xi} f - f \|_p \le \omega_r(f, \xi)_p \left( \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} G\!\left( \frac{t}{\xi} \right) dt \right)^{1/p},$$
and $\Theta_{1,r,\xi} \to I$ in the $L_p$ norm, $p > 1$, as $\xi \to 0$.
Proof. 
By Propositions 1 and 4, and Theorem 5. □
Proposition 14.
It holds that
$$\| \Theta_{1,r,\xi} f - f \|_1 \le \omega_r(f, \xi)_1\, \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} G\!\left( \frac{t}{\xi} \right) dt,$$
and $\Theta_{1,r,\xi} \to I$ in the $L_1$ norm, as $\xi \to 0$.
Proof. 
By Propositions 2 and 3, and Theorem 5. □
We continue with the set of results for the operator $\Theta_{2,r,\xi}$; $q, \lambda > 0$.
Theorem 32.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$, and $n \in \mathbb{N}$.
Call
$$\Delta_2(x) := \Theta_{2,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \bar{c}_{k,\xi}.$$
Then,
$$\| \Delta_2(x) \|_p \le \frac{1}{(n-1)! \left( q(n-1) + 1 \right)^{1/q} (rp+1)^{1/p}} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \right)^{1/p} \omega_r(f^{(n)}, \xi)_p,$$
and $\| \Delta_2(x) \|_p \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 1, 9, and 22. □
Theorem 33.
Let $f \in C^n(\mathbb{R})$ with $f^{(n)} \in L_1(\mathbb{R})$, $n \in \mathbb{N}$. Then,
$$\| \Delta_2(x) \|_1 \le \frac{1}{(r+1)(n-1)!} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \right) \omega_r(f^{(n)}, \xi)_1,$$
and $\| \Delta_2(x) \|_1 \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 2, 9, and 23. □
Proposition 15.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$. Then,
$$\| \Theta_{2,r,\xi} f - f \|_p \le \omega_r(f, \xi)_p \left( \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \right)^{1/p},$$
and $\Theta_{2,r,\xi} \to I$ in the $L_p$ norm, $p > 1$, as $\xi \to 0$.
Proof. 
By Propositions 1 and 6, and Theorem 9. □
Proposition 16.
It holds that
$$\| \Theta_{2,r,\xi} f - f \|_1 \le \omega_r(f, \xi)_1\, \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} M_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt,$$
and $\Theta_{2,r,\xi} \to I$ in the $L_1$ norm, as $\xi \to 0$.
Proof. 
By Propositions 2 and 5, and Theorem 9. □
We continue with the set of results for the operator $\Theta_{3,r,\xi}$.
Theorem 34.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$, and $n \in \mathbb{N}$.
Call
$$\Delta_3(x) := \Theta_{3,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \gamma_{k,\xi}.$$
Then,
$$\| \Delta_3(x) \|_p \le \frac{1}{(n-1)! \left( q(n-1) + 1 \right)^{1/q} (rp+1)^{1/p}} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, \psi\!\left( \frac{t}{\xi} \right) dt \right)^{1/p} \omega_r(f^{(n)}, \xi)_p,$$
and $\| \Delta_3(x) \|_p \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 1, 12, and 24. □
Theorem 35.
Let $f \in C^n(\mathbb{R})$ with $f^{(n)} \in L_1(\mathbb{R})$, $n \in \mathbb{N}$. Then,
$$\| \Delta_3(x) \|_1 \le \frac{1}{(r+1)(n-1)!} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, \psi\!\left( \frac{t}{\xi} \right) dt \right) \omega_r(f^{(n)}, \xi)_1,$$
and $\| \Delta_3(x) \|_1 \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 2, 12, and 25. □
Proposition 17.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$. Then,
$$\| \Theta_{3,r,\xi} f - f \|_p \le \omega_r(f, \xi)_p \left( \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} \psi\!\left( \frac{t}{\xi} \right) dt \right)^{1/p},$$
and $\Theta_{3,r,\xi} \to I$ in the $L_p$ norm, $p > 1$, as $\xi \to 0$.
Proof. 
By Propositions 1 and 8, and Theorem 12. □
Proposition 18.
It holds that
$$\| \Theta_{3,r,\xi} f - f \|_1 \le \omega_r(f, \xi)_1\, \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} \psi\!\left( \frac{t}{\xi} \right) dt,$$
and $\Theta_{3,r,\xi} \to I$ in the $L_1$ norm, as $\xi \to 0$.
Proof. 
By Propositions 2 and 7, and Theorem 12. □
We continue with the set of results for the operator $\Theta_{4,r,\xi}$; $q, \lambda > 0$.
Theorem 36.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$, and $n \in \mathbb{N}$.
Call
$$\Delta_4(x) := \Theta_{4,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \bar{\delta}_{k,\xi}.$$
Then,
$$\| \Delta_4(x) \|_p \le \frac{1}{(n-1)! \left( q(n-1) + 1 \right)^{1/q} (rp+1)^{1/p}} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \right)^{1/p} \omega_r(f^{(n)}, \xi)_p,$$
and $\| \Delta_4(x) \|_p \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 1, 15, and 26. □
Theorem 37.
Let $f \in C^n(\mathbb{R})$ with $f^{(n)} \in L_1(\mathbb{R})$, $n \in \mathbb{N}$. Then,
$$\| \Delta_4(x) \|_1 \le \frac{1}{(r+1)(n-1)!} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \right) \omega_r(f^{(n)}, \xi)_1,$$
and $\| \Delta_4(x) \|_1 \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 2, 15, and 27. □
Proposition 19.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$. Then,
$$\| \Theta_{4,r,\xi} f - f \|_p \le \omega_r(f, \xi)_p \left( \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt \right)^{1/p},$$
and $\Theta_{4,r,\xi} \to I$ in the $L_p$ norm, $p > 1$, as $\xi \to 0$.
Proof. 
By Propositions 1 and 10, and Theorem 15. □
Proposition 20.
It holds that
$$\| \Theta_{4,r,\xi} f - f \|_1 \le \omega_r(f, \xi)_1\, \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} G_{q,\lambda}\!\left( \frac{t}{\xi} \right) dt,$$
and $\Theta_{4,r,\xi} \to I$ in the $L_1$ norm, as $\xi \to 0$.
Proof. 
By Propositions 2 and 9, and Theorem 15. □
We finish with the results for the operator $\Theta_{5,r,\xi}$; $q, \beta > 0$.
Theorem 38.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$, and $n \in \mathbb{N}$.
Call
$$\Delta_5(x) := \Theta_{5,r,\xi}(f, x) - f(x) - \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} \delta_k \varepsilon_{k,\xi}.$$
Then,
$$\| \Delta_5(x) \|_p \le \frac{1}{(n-1)! \left( q(n-1) + 1 \right)^{1/q} (rp+1)^{1/p}} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{rp+1} - 1 \right) |t|^{np-1}\, \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt \right)^{1/p} \omega_r(f^{(n)}, \xi)_p,$$
and $\| \Delta_5(x) \|_p \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 1, 18, and 28. □
Theorem 39.
Let $f \in C^n(\mathbb{R})$ with $f^{(n)} \in L_1(\mathbb{R})$, $n \in \mathbb{N}$. Then,
$$\| \Delta_5(x) \|_1 \le \frac{1}{(r+1)(n-1)!} \left( \int_{-\infty}^{\infty} \left( \left( 1 + \frac{|t|}{\xi} \right)^{r+1} - 1 \right) |t|^{n-1}\, \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt \right) \omega_r(f^{(n)}, \xi)_1,$$
and $\| \Delta_5(x) \|_1 \to 0$ as $\xi \to 0$.
Proof. 
By Theorems 2, 18, and 29. □
Proposition 21.
Let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$. Then,
$$\| \Theta_{5,r,\xi} f - f \|_p \le \omega_r(f, \xi)_p \left( \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{rp} \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt \right)^{1/p},$$
and $\Theta_{5,r,\xi} \to I$ in the $L_p$ norm, $p > 1$, as $\xi \to 0$.
Proof. 
By Propositions 1 and 12, and Theorem 18. □
Proposition 22.
It holds that
$$\| \Theta_{5,r,\xi} f - f \|_1 \le \omega_r(f, \xi)_1\, \frac{1}{\xi} \int_{-\infty}^{\infty} \left( 1 + \frac{|t|}{\xi} \right)^{r} \Phi_{q,\beta}\!\left( \frac{t}{\xi} \right) dt,$$
and $\Theta_{5,r,\xi} \to I$ in the $L_1$ norm, as $\xi \to 0$.
Proof. 
By Propositions 2 and 11, and Theorem 18. □

6. Conclusions

Here, we presented the new idea of passing from the main tools of neural networks, the activation functions, to the approximation theory of singular integrals. This is the rare case of applied mathematics feeding back into theoretical mathematics.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Anastassiou, G.A.; Gal, S. Convergence of generalized singular integrals to the unit, univariate case. Math. Inequalities Appl. 2000, 3, 511–518. [Google Scholar] [CrossRef]
  2. Gal, S.G. Remark on the degree of approximation of continuous functions by singular integrals. Math. Nachrichten 1993, 164, 197–199. [Google Scholar] [CrossRef]
  3. Gal, S.G. Degree of approximation of continuous functions by some singular integrals. Rev. Anal. Numér. Théor. Approx. 1998, 27, 251–261. [Google Scholar]
  4. Mohapatra, R.N.; Rodriguez, R.S. On the rate of convergence of singular integrals for Hölder continuous functions. Math. Nachrichten 1990, 149, 117–124. [Google Scholar] [CrossRef]
  5. Anastassiou, G.; Mezei, R. Approximation by Singular Integrals; Cambridge Scientific Publishers: Cambridge, UK, 2012. [Google Scholar]
  6. Anastassiou, G.A. Banach Space Valued Neural Network; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2023. [Google Scholar]
  7. Anastassiou, G.A. Parametrized, Deformed and General Neural Networks; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2023. [Google Scholar]
  8. Aral, A. On a generalized Gauss Weierstrass singular integral. Fasc. Math. 2005, 35, 23–33. [Google Scholar]
  9. Aral, A. Pointwise approximation by the generalization of Picard and Gauss-Weierstrass singular integrals. J. Concr. Appl. Math. 2008, 6, 327–339. [Google Scholar]
  10. Aral, A. On generalized Picard integral operators. In Advances in Summability and Approximation Theory; Springer: Singapore, 2018; pp. 157–168. [Google Scholar]
  11. Aral, A.; Deniz, E.; Erbay, H. The Picard and Gauss-Weiertrass singular integrals in (p,q)-calculus. Bull. Malays. Math. Sci. Soc. 2020, 43, 1569–1583. [Google Scholar] [CrossRef]
  12. Aral, A.; Gal, S.G. q-generalizations of the Picard and Gauss-Weierstrass singular integrals. Taiwan. J. Math. 2008, 12, 2501–2515. [Google Scholar] [CrossRef]
  13. DeVore, R.A.; Lorentz, G.G. Constructive Approximation; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1993; Volume 303. [Google Scholar]
  14. Anastassiou, G.A. Quantitative Uniform Approximation by Activated Singular Operators. Mathematics 2024, 12, 2152. [Google Scholar] [CrossRef]