Article

Smooth Logistic Real and Complex, Ordinary and Fractional Neural Network Approximations over Infinite Domains

by
George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
Axioms 2024, 13(7), 462; https://doi.org/10.3390/axioms13070462
Submission received: 6 June 2024 / Revised: 28 June 2024 / Accepted: 1 July 2024 / Published: 9 July 2024
(This article belongs to the Special Issue New Developments in Geometric Function Theory, 3rd Edition)

Abstract

In this work, we study univariate quantitative smooth approximations, both real and complex and both ordinary and fractional, for various classes of functions. The approximators presented here are neural network operators activated by the Richards curve, a parametrized form of the logistic sigmoid function. All domains used are the whole real line. The neural network operators employed are of quasi-interpolation type: basic ones, Kantorovich-type ones, and quadrature-type ones. We provide pointwise and uniform approximations with rates. We finish with applications.

1. Introduction

The author of [1,2] (see Sections 2–5 there) was the first to establish neural network approximation of continuous functions with rates, via very specific neural network operators of Cardaliaguet–Euvrard and "squashing" types, using the modulus of continuity of the engaged function or of its high-order derivative and producing very tight Jackson-type inequalities. He treats both the univariate and multivariate cases. The "bell-shaped" and "squashing" functions of these operators are assumed to have compact support.
The author, inspired by [3], continued his studies on neural network approximation by introducing and using suitable quasi-interpolation operators of sigmoidal and hyperbolic tangent types, which resulted in [4,5], again treating both the univariate and multivariate cases. He also studied the corresponding fractional cases [5].
A parametrized activation function kills far fewer neurons than the original one.
Therefore, here, the author obtains parametrized Richards-curve-activated neural network approximations of differentiable functions from $\mathbb{R}$ into $\mathbb{R}$, going beyond functions on bounded domains.
We present real and complex, ordinary and fractional, quasi-interpolation quantitative approximations. The fractional case is studied extensively because of the applications of fractional calculus in the interpretation of many natural phenomena and in engineering. We derive Jackson-type inequalities that are close to sharp.
Real feed-forward neural networks (FNNs) with one hidden layer, the ones we use here, are mathematically expressed by:
$$N_n(x) = \sum_{j=0}^{n} c_j\,\sigma\!\left(\langle \alpha_j, x \rangle + b_j\right), \qquad x \in \mathbb{R}^s,\ s \in \mathbb{N},$$
where, for $0 \le j \le n$, $b_j \in \mathbb{R}$ are the thresholds, $\alpha_j \in \mathbb{R}^s$ are the connection weights, $c_j \in \mathbb{R}$ are the coefficients, $\langle \alpha_j, x \rangle$ is the inner product of $\alpha_j$ and $x$, and $\sigma$ is the activation function of the network. For more information about neural networks in general, see [6,7,8]. Due to their efficiency, neural network approximations are widely used in many areas, such as differential equations, numerical analysis, statistics, and AI.
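The network form above can be sketched in a few lines of Python. The logistic choice of $\sigma$, the random weights, and all sizes below are illustrative, not taken from the paper:

```python
import numpy as np

# One-hidden-layer feed-forward network
#   N_n(x) = sum_{j=0}^{n} c_j * sigma(<a_j, x> + b_j),
# sketched with a (mu-parametrized) logistic sigmoid as sigma.

def logistic(t, mu=1.0):
    return 1.0 / (1.0 + np.exp(-mu * t))

def fnn(x, weights, thresholds, coeffs, mu=1.0):
    """Evaluate N_n(x) for x in R^s; weights has shape (n+1, s)."""
    z = weights @ x + thresholds          # inner products plus thresholds
    return float(coeffs @ logistic(z, mu))

rng = np.random.default_rng(0)
s, n = 3, 10                              # input dimension, n+1 = 11 neurons
A = rng.normal(size=(n + 1, s))           # connection weights alpha_j
b = rng.normal(size=n + 1)                # thresholds b_j
c = rng.normal(size=n + 1)                # coefficients c_j
x = rng.normal(size=s)
y = fnn(x, A, b, c)
```

Since $\sigma$ takes values in $(0,1)$, the output is always bounded by $\sum_j |c_j|$, which is one reason such networks are convenient approximators.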

2. Preliminaries

The following come from [9], pp. 2–6.
The Richards curve is as follows:
$$\varphi(x) = \frac{1}{1 + e^{-\mu x}}, \qquad x \in \mathbb{R}, \ \text{with parameter } \mu > 0,$$
which is of great interest when $0 < \mu \le 1$.
The function $\varphi(x)$ is increasing on $\mathbb{R}$ and is a sigmoid function; specifically, it is a generalized logistic function [10].
It should be noted that
$$\lim_{x \to +\infty} \varphi(x) = 1 \quad \text{and} \quad \lim_{x \to -\infty} \varphi(x) = 0.$$
We consider the following activation function:
$$G(x) = \tfrac{1}{2}\left(\varphi(x+1) - \varphi(x-1)\right), \qquad x \in \mathbb{R},$$
which satisfies $G(x) > 0$ for all $x \in \mathbb{R}$.
Function φ has many applications in epidemiology and especially in COVID-19 modeling of infection trajectories [11].
We can see that
$$\varphi(0) = \tfrac{1}{2} \quad \text{and} \quad \varphi(-x) = 1 - \varphi(x), \qquad x \in \mathbb{R}.$$
We notice that
$$G(-x) = \tfrac{1}{2}\left(\varphi(-x+1) - \varphi(-x-1)\right) = \tfrac{1}{2}\left[\left(1 - \varphi(x-1)\right) - \left(1 - \varphi(x+1)\right)\right] = \tfrac{1}{2}\left(\varphi(x+1) - \varphi(x-1)\right) = G(x), \qquad x \in \mathbb{R}.$$
Therefore, $G$ is an even function.
We can observe that
$$G(0) = \tfrac{1}{2}\left(\varphi(1) - \varphi(-1)\right) = \tfrac{1}{2}\left(\varphi(1) - \left(1 - \varphi(1)\right)\right) = \tfrac{1}{2}\left(2\varphi(1) - 1\right) = \tfrac{1}{2}\left(\frac{2}{1 + e^{-\mu}} - 1\right) = \tfrac{1}{2}\,\frac{1 - e^{-\mu}}{1 + e^{-\mu}} = \tfrac{1}{2}\,\frac{e^{\mu} - 1}{e^{\mu} + 1},$$
that is,
$$G(0) = \frac{e^{\mu} - 1}{2\left(e^{\mu} + 1\right)}.$$
Let $x \ge 1$. We can observe that
$$G'(x) = \tfrac{1}{2}\left(\varphi'(x+1) - \varphi'(x-1)\right) = \tfrac{1}{2}\left(\frac{\mu\,e^{-\mu(x+1)}}{\left(1 + e^{-\mu(x+1)}\right)^{2}} - \frac{\mu\,e^{-\mu(x-1)}}{\left(1 + e^{-\mu(x-1)}\right)^{2}}\right)$$
$$= \frac{\mu}{2}\left(\frac{1}{e^{\mu(x+1)} + e^{-\mu(x+1)} + 2} - \frac{1}{e^{\mu(x-1)} + e^{-\mu(x-1)} + 2}\right) = \frac{\mu}{4}\left(\frac{1}{\cosh(\mu(x+1)) + 1} - \frac{1}{\cosh(\mu(x-1)) + 1}\right)$$
$$= \frac{\mu}{4}\,\frac{\cosh(\mu(x-1)) - \cosh(\mu(x+1))}{\left(\cosh(\mu(x+1)) + 1\right)\left(\cosh(\mu(x-1)) + 1\right)} < 0, \qquad x \ge 1.$$
Therefore, for $x \ge 1$, $G'(x) < 0$ and $G$ decreases.
Let $0 < x < 1$; then $1 - x > 0$ and $0 < 1 - x < 1 + x$, so that $\cosh(\mu(x-1)) = \cosh(\mu(1-x)) < \cosh(\mu(x+1))$; hence $G'(x) < 0$ and $G$ decreases over $0 < x < 1$ as well.
Thus, $G$ decreases on $(0, +\infty)$.
Clearly, $G$ increases on $(-\infty, 0)$, and $G'(0) = 0$.
We can observe that
$$\lim_{x \to +\infty} G(x) = \tfrac{1}{2}\left(\varphi(+\infty) - \varphi(+\infty)\right) = 0, \qquad \lim_{x \to -\infty} G(x) = \tfrac{1}{2}\left(\varphi(-\infty) - \varphi(-\infty)\right) = 0.$$
That is, the $x$-axis is the horizontal asymptote of $G$.
In conclusion, $G$ is a symmetric bell-shaped function with maximum
$$G(0) = \frac{e^{\mu} - 1}{2\left(e^{\mu} + 1\right)}.$$
We need to use the following theorems.
Theorem 1.
It holds that
$$\sum_{i=-\infty}^{\infty} G(x - i) = 1, \qquad \forall\, x \in \mathbb{R}.$$
Remark 1.
Because $G$ is even, it holds that
$$\sum_{i=-\infty}^{\infty} G(i - x) = 1, \qquad \forall\, x \in \mathbb{R}.$$
Hence,
$$\sum_{i=-\infty}^{\infty} G(i + x) = 1, \qquad \forall\, x \in \mathbb{R},$$
and
$$\sum_{i=-\infty}^{\infty} G(x + i) = 1, \qquad \forall\, x \in \mathbb{R}.$$
Theorem 2.
It holds that
$$\int_{-\infty}^{\infty} G(x)\,dx = 1.$$
Proof. 
We can observe that
$$\int_{-\infty}^{\infty} G(x)\,dx = \sum_{j=-\infty}^{\infty} \int_{j}^{j+1} G(x)\,dx = \sum_{j=-\infty}^{\infty} \int_{0}^{1} G(x + j)\,dx = \int_{0}^{1} \left(\sum_{j=-\infty}^{\infty} G(x + j)\right) dx = \int_{0}^{1} 1\,dx = 1.$$
So, $G$ is a density function. □
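The properties of $G$ established above (positivity, evenness, the value of the maximum, the partition of unity of Theorem 1, and the density property of Theorem 2) can be checked numerically. A minimal sketch; the value $\mu = 0.7$ and the truncations below are illustrative choices:

```python
import numpy as np

# Sanity checks for G(x) = (phi(x+1) - phi(x-1))/2 with the Richards curve
# phi(x) = 1/(1 + exp(-mu*x)).

mu = 0.7

def phi(x):
    return 1.0 / (1.0 + np.exp(-mu * x))

def G(x):
    return 0.5 * (phi(x + 1) - phi(x - 1))

xs = np.linspace(-5, 5, 201)
assert np.all(G(xs) > 0)                   # G > 0 on R
assert np.allclose(G(xs), G(-xs))          # G is even
assert np.isclose(G(0), (np.e**mu - 1) / (2 * (np.e**mu + 1)))  # max value

i = np.arange(-60, 61)                     # truncated partition of unity
for x in (0.0, 0.3, -1.7):
    assert np.isclose(G(x - i).sum(), 1.0)

t = np.linspace(-40, 40, 80001)            # Riemann sum for the density
total = (t[1] - t[0]) * G(t).sum()
```

The partition of unity is exact because the series telescopes: the partial sum over $|i| \le M$ equals $\tfrac12[\varphi(x+M+1) + \varphi(x+M) - \varphi(x-M) - \varphi(x-M-1)] \to 1$.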
Remark 2.
We have
$$G(x) = \tfrac{1}{2}\left(\varphi(x+1) - \varphi(x-1)\right), \qquad x \in \mathbb{R}.$$
Let $x \ge 1$, so that $0 \le x - 1 < x + 1$. Applying the mean value theorem, we obtain
$$G(x) = \tfrac{1}{2} \cdot 2 \cdot \varphi'(\xi) = \varphi'(\xi) = \frac{\mu\,e^{-\mu \xi}}{\left(1 + e^{-\mu \xi}\right)^{2}},$$
where $0 \le x - 1 < \xi < x + 1$.
Notice that
$$G(x) < \mu\,e^{-\mu \xi} < \mu\,e^{-\mu(x-1)}, \qquad x \ge 1.$$
We need the following definitions.
Definition 1.
In this article, we study the smooth approximation properties of the following quasi-interpolation neural network operators, acting on $f \in C_B(\mathbb{R})$ (the continuous and bounded functions):
(i) The basic ones:
$$B_n(f, x) := \sum_{k=-\infty}^{\infty} f\!\left(\frac{k}{n}\right) G(nx - k), \qquad x \in \mathbb{R},\ n \in \mathbb{N};$$
(ii) the Kantorovich-type ones:
$$C_n(f, x) := \sum_{k=-\infty}^{\infty} \left(n \int_{\frac{k}{n}}^{\frac{k+1}{n}} f(t)\,dt\right) G(nx - k), \qquad x \in \mathbb{R},\ n \in \mathbb{N};$$
(iii) the quadrature-type ones: let $\theta \in \mathbb{N}$, $w_r \ge 0$ with $\sum_{r=0}^{\theta} w_r = 1$, $k \in \mathbb{Z}$, and set
$$\delta_{nk}(f) := \sum_{r=0}^{\theta} w_r\, f\!\left(\frac{k}{n} + \frac{r}{n\theta}\right);$$
then
$$D_n(f, x) := \sum_{k=-\infty}^{\infty} \delta_{nk}(f)\, G(nx - k), \qquad x \in \mathbb{R},\ n \in \mathbb{N}.$$
We will be using the first modulus of continuity
$$\omega_1(f, \delta) := \sup_{\substack{x, y \in \mathbb{R} \\ |x - y| \le \delta}} |f(x) - f(y)|, \qquad \delta > 0,$$
where $f \in C(\mathbb{R})$ is bounded and/or uniformly continuous.
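The three operators of Definition 1 can be sketched numerically by truncating each series to $|nx - k| \le K$, which is harmless since $G$ decays exponentially (Remark 2). Everything below ($\mu$, $n$, $K$, the quadrature weights $w_r$, the test function) is an illustrative choice, not the paper's:

```python
import numpy as np

mu = 0.7

def G(x):
    phi = lambda t: 1.0 / (1.0 + np.exp(-mu * t))
    return 0.5 * (phi(x + 1) - phi(x - 1))

def B_n(f, x, n, K=80):
    # basic quasi-interpolation operator, truncated series
    k = np.arange(int(n * x) - K, int(n * x) + K + 1)
    return float(np.sum(f(k / n) * G(n * x - k)))

def C_n(f, x, n, K=80, m=32):
    # Kantorovich type: n * int_{k/n}^{(k+1)/n} f, via midpoint rule
    k = np.arange(int(n * x) - K, int(n * x) + K + 1)
    t = (np.arange(m) + 0.5) / (m * n)          # midpoints in [0, 1/n)
    avg = np.array([f(kk / n + t).mean() for kk in k])
    return float(np.sum(avg * G(n * x - k)))

def D_n(f, x, n, w=(0.5, 0.25, 0.25), K=80):
    # quadrature type with weights w_0..w_theta summing to 1
    theta = len(w) - 1
    k = np.arange(int(n * x) - K, int(n * x) + K + 1)
    d = sum(w[r] * f(k / n + r / (n * theta)) for r in range(theta + 1))
    return float(np.sum(d * G(n * x - k)))
```

For example, with $f = \sin$, $x = 0.4$, and $n = 64$, all three operators reproduce $f(x)$ to within a couple of hundredths, in line with the rates established below.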
We are motivated by the following result.
Theorem 3 
([9], p. 13). Let $f \in C_B(\mathbb{R})$, $0 < \beta < 1$, $\mu > 0$, $n \in \mathbb{N}$ with $n^{1-\beta} > 2$, $x \in \mathbb{R}$. Then,
(i) 
$$\left|B_n(f, x) - f(x)\right| \le \omega_1\!\left(f, \frac{1}{n^{\beta}}\right) + 4\,\|f\|_{\infty}\, e^{-\mu\left(n^{1-\beta} - 2\right)} =: \gamma,$$
and
(ii) 
$$\left\|B_n f - f\right\|_{\infty} \le \gamma.$$
For $f \in C_{uB}(\mathbb{R})$ (the uniformly continuous and bounded functions), we obtain $\lim_{n \to \infty} B_n f = f$, both pointwise and uniformly.

3. Main Results

Here, we study the approximation properties of the neural network operators $B_n$, $C_n$, and $D_n$ for differentiable functions.
Theorem 4.
Let $0 < \beta < 1$, $n \in \mathbb{N}$ with $n^{1-\beta} > 2$, $\mu > 0$, $N \in \mathbb{N}$, $f \in C^N(\mathbb{R})$ with $f^{(i)} \in C_B(\mathbb{R})$, $i = 0, 1, \ldots, N$; $x \in \mathbb{R}$. Then,
(i) 
$$\left|B_n(f, x) - f(x) - \sum_{j=1}^{N} \frac{f^{(j)}(x)}{j!}\, B_n\!\left((\cdot - x)^{j}\right)(x)\right| \le \frac{\omega_1\!\left(f^{(N)}, \frac{1}{n^{\beta}}\right)}{n^{\beta N}\, N!} + \frac{\left\|f^{(N)}\right\|_{\infty}\, e^{\mu}\, 2^{N+3}}{n^{N}\, \mu^{N}}\, e^{-\frac{\mu}{2}\left(n^{1-\beta} - 1\right)} =: M,$$
(ii) if all $f^{(j)}(x) = 0$, $j = 1, \ldots, N$, we have
$$\left|B_n(f, x) - f(x)\right| \le M,$$
at the high speed $n^{-\beta(N+1)}$,
(iii) 
$$\left|B_n(f, x) - f(x)\right| \le \sum_{j=1}^{N} \frac{\left|f^{(j)}(x)\right|}{j!} \left(\frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\, 2^{j+2}\, j!}{\mu^{j}}\, e^{-\frac{\mu}{2}\left(n^{1-\beta} - 1\right)}\right) + M,$$
and
(iv) 
$$\left\|B_n f - f\right\|_{\infty} \le \sum_{j=1}^{N} \frac{\left\|f^{(j)}\right\|_{\infty}}{j!} \left(\frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\, 2^{j+2}\, j!}{\mu^{j}}\, e^{-\frac{\mu}{2}\left(n^{1-\beta} - 1\right)}\right) + M.$$
Proof. 
Using Taylor's theorem, we have, for $x \in \mathbb{R}$,
$$f\!\left(\tfrac{k}{n}\right) = \sum_{j=0}^{N} \frac{f^{(j)}(x)}{j!}\left(\tfrac{k}{n}-x\right)^{j} + \int_{x}^{\frac{k}{n}}\left(f^{(N)}(t)-f^{(N)}(x)\right)\frac{\left(\frac{k}{n}-t\right)^{N-1}}{(N-1)!}\,dt.$$
Multiplying by $G(nx-k)$ and summing over $k \in \mathbb{Z}$, we obtain
$$B_n(f,x) = \sum_{k=-\infty}^{\infty} f\!\left(\tfrac{k}{n}\right)G(nx-k) = \sum_{j=0}^{N}\frac{f^{(j)}(x)}{j!}\,B_n\!\left((\cdot-x)^{j}\right)(x) + R,$$
where
$$R := \sum_{k=-\infty}^{\infty} G(nx-k)\int_{x}^{\frac{k}{n}}\left(f^{(N)}(t)-f^{(N)}(x)\right)\frac{\left(\frac{k}{n}-t\right)^{N-1}}{(N-1)!}\,dt.$$
(I) Let $\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}$.
(i) In the case $\frac{k}{n} \ge x$,
$$\left|\int_{x}^{\frac{k}{n}}\left(f^{(N)}(t)-f^{(N)}(x)\right)\frac{\left(\frac{k}{n}-t\right)^{N-1}}{(N-1)!}\,dt\right| \le \omega_1\!\left(f^{(N)},\tfrac{1}{n^{\beta}}\right)\frac{\left(\frac{k}{n}-x\right)^{N}}{N!} \le \frac{\omega_1\!\left(f^{(N)},\frac{1}{n^{\beta}}\right)}{N!\,n^{\beta N}}.$$
(ii) In the case $\frac{k}{n} < x$, writing $\int_{x}^{\frac{k}{n}} = -\int_{\frac{k}{n}}^{x}$ gives, in the same way,
$$\left|\int_{\frac{k}{n}}^{x}\left(f^{(N)}(x)-f^{(N)}(t)\right)\frac{\left(t-\frac{k}{n}\right)^{N-1}}{(N-1)!}\,dt\right| \le \omega_1\!\left(f^{(N)},\tfrac{1}{n^{\beta}}\right)\frac{\left(x-\frac{k}{n}\right)^{N}}{N!} \le \frac{\omega_1\!\left(f^{(N)},\frac{1}{n^{\beta}}\right)}{N!\,n^{\beta N}}.$$
Since $\sum_{k} G(nx-k) = 1$, we conclude that
$$\left|R\right|_{\left\{\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}\right\}} \le \frac{\omega_1\!\left(f^{(N)},\frac{1}{n^{\beta}}\right)}{N!\,n^{\beta N}}.$$
(II) Let $\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}$, i.e., $|nx-k| \ge n^{1-\beta}$. In either case ($\frac{k}{n} \ge x$ or $\frac{k}{n} < x$),
$$\left|\int_{x}^{\frac{k}{n}}\left(f^{(N)}(t)-f^{(N)}(x)\right)\frac{\left(\frac{k}{n}-t\right)^{N-1}}{(N-1)!}\,dt\right| \le 2\left\|f^{(N)}\right\|_{\infty}\frac{\left|\frac{k}{n}-x\right|^{N}}{N!},$$
so that
$$\left|R\right|_{\left\{\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}\right\}} \le \frac{2\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\sum_{k:\,|nx-k| \ge n^{1-\beta}} G(nx-k)\,|nx-k|^{N}.$$
Next, we treat this last sum. By Remark 2, $G(nx-k) < \mu\,e^{\mu}\,e^{-\mu|nx-k|}$, hence
$$\sum_{k:\,|nx-k| \ge n^{1-\beta}} G(nx-k)\,|nx-k|^{N} \le \mu\,e^{\mu}\sum_{k:\,|nx-k| \ge n^{1-\beta}} e^{-\mu|nx-k|}\,|nx-k|^{N} =: (*).$$
Notice that
$$e^{\frac{\mu|nx-k|}{2}} = \sum_{\lambda=0}^{\infty}\frac{\left(\frac{\mu|nx-k|}{2}\right)^{\lambda}}{\lambda!} \ge \frac{\left(\frac{\mu|nx-k|}{2}\right)^{N}}{N!}, \qquad \text{so that} \qquad |nx-k|^{N} \le \frac{2^{N}\,N!}{\mu^{N}}\,e^{\frac{\mu|nx-k|}{2}}.$$
Hence, it holds that
$$(*) \le \frac{e^{\mu}\,2^{N}\,N!}{\mu^{N-1}}\sum_{k:\,|nx-k| \ge n^{1-\beta}} e^{-\frac{\mu}{2}|nx-k|} \le \frac{2\,e^{\mu}\,2^{N}\,N!}{\mu^{N-1}}\int_{n^{1-\beta}-1}^{\infty} e^{-\frac{\mu}{2}x}\,dx = \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}.$$
Thus,
$$\left|R\right|_{\left\{\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}\right\}} \le \frac{2\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\cdot\frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)} = \frac{\left\|f^{(N)}\right\|_{\infty}\,e^{\mu}\,2^{N+3}}{n^{N}\,\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)},$$
and we derive
$$|R| \le \frac{\omega_1\!\left(f^{(N)},\frac{1}{n^{\beta}}\right)}{n^{\beta N}\,N!} + \frac{\left\|f^{(N)}\right\|_{\infty}\,e^{\mu}\,2^{N+3}}{n^{N}\,\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)} = M.$$
Finally, we estimate, for $j = 1, \ldots, N$,
$$\left|B_n\!\left((\cdot-x)^{j}\right)(x)\right| \le \sum_{k:\,\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}} G(nx-k)\left|\tfrac{k}{n}-x\right|^{j} + \sum_{k:\,\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}} G(nx-k)\left|\tfrac{k}{n}-x\right|^{j}$$
$$\le \frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\sum_{k:\,|nx-k| \ge n^{1-\beta}} G(nx-k)\,|nx-k|^{j} \le \frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}.$$
The theorem is proved. □
Next, we treat the Kantorovich-type operators $C_n$.
Theorem 5.
Let $0 < \beta < 1$, $n \in \mathbb{N}$ with $n^{1-\beta} > 2$, $\mu > 0$, $N \in \mathbb{N}$, $f \in C^N(\mathbb{R})$ with $f^{(i)} \in C_B(\mathbb{R})$, $i = 0, 1, \ldots, N$; $x \in \mathbb{R}$. Then,
(i) 
$$\left|C_n(f,x) - f(x) - \sum_{j=1}^{N}\frac{f^{(j)}(x)}{j!}\,C_n\!\left((\cdot-x)^{j}\right)(x)\right| \le \omega_1\!\left(f^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!}$$
$$+\ \frac{2^{N}\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right) =: \Psi,$$
(ii) if all $f^{(j)}(x) = 0$, $j = 1, \ldots, N$, then
$$\left|C_n(f,x) - f(x)\right| \le \Psi,$$
at the high speed $n^{-\beta(N+1)}$,
(iii) 
$$\left|C_n(f,x) - f(x)\right| \le \sum_{j=1}^{N}\frac{\left|f^{(j)}(x)\right|}{j!}\left[\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)\right] + \Psi,$$
and
(iv) 
$$\left\|C_n f - f\right\|_{\infty} \le \sum_{j=1}^{N}\frac{\left\|f^{(j)}\right\|_{\infty}}{j!}\left[\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)\right] + \Psi.$$
Proof. 
One can write
$$C_n(f,x) = \sum_{k=-\infty}^{\infty}\left(n\int_{0}^{\frac{1}{n}} f\!\left(t+\tfrac{k}{n}\right)dt\right) G(nx-k).$$
Let now $f \in C^N(\mathbb{R})$ with $f^{(i)} \in C_B(\mathbb{R})$, $i = 0, 1, \ldots, N$. By Taylor's theorem,
$$f\!\left(t+\tfrac{k}{n}\right) = \sum_{j=0}^{N}\frac{f^{(j)}(x)}{j!}\left(t+\tfrac{k}{n}-x\right)^{j} + \int_{x}^{t+\frac{k}{n}}\left(f^{(N)}(s)-f^{(N)}(x)\right)\frac{\left(t+\frac{k}{n}-s\right)^{N-1}}{(N-1)!}\,ds,$$
so that
$$C_n(f,x) - f(x) - \sum_{j=1}^{N}\frac{f^{(j)}(x)}{j!}\,C_n\!\left((\cdot-x)^{j}\right)(x) = \bar{R} := \sum_{k=-\infty}^{\infty} G(nx-k)\,\lambda_k,$$
where
$$\lambda_k := n\int_{0}^{\frac{1}{n}}\int_{x}^{t+\frac{k}{n}}\left(f^{(N)}(s)-f^{(N)}(x)\right)\frac{\left(t+\frac{k}{n}-s\right)^{N-1}}{(N-1)!}\,ds\,dt, \qquad k \in \mathbb{Z}.$$
(I) Let $\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}$ ($0 < \beta < 1$). Whether $t+\frac{k}{n} \ge x$ or $t+\frac{k}{n} < x$ (for $0 \le t \le \frac{1}{n}$), arguing as in the proof of Theorem 4 we obtain
$$|\lambda_k| \le n\int_{0}^{\frac{1}{n}}\omega_1\!\left(f^{(N)}, \left|t+\tfrac{k}{n}-x\right|\right)\frac{\left|t+\frac{k}{n}-x\right|^{N}}{N!}\,dt \le \omega_1\!\left(f^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!}.$$
Clearly, then,
$$\left|\bar{R}\right|_{\left\{\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}\right\}} \le \omega_1\!\left(f^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!}.$$
(II) Let $\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}$. In both cases,
$$|\lambda_k| \le 2\left\|f^{(N)}\right\|_{\infty}\, n\int_{0}^{\frac{1}{n}}\frac{\left(\left|\frac{k}{n}-x\right|+t\right)^{N}}{N!}\,dt \le \frac{2\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(|nx-k|+1\right)^{N} \le \frac{2^{N}\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(1+|nx-k|^{N}\right), \qquad k \in \mathbb{Z}.$$
Clearly, then,
$$\left|\bar{R}\right|_{\left\{\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}\right\}} \le \frac{2^{N}\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(\sum_{k:\,|nx-k| \ge n^{1-\beta}} G(nx-k) + \sum_{k:\,|nx-k| \ge n^{1-\beta}} G(nx-k)\,|nx-k|^{N}\right)$$
(by [9], p. 6, Theorem 1.4, and the estimate in the proof of Theorem 4)
$$\le \frac{2^{N}\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(2\,e^{-\mu\left(n^{1-\beta}-2\right)} + \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right) = \frac{2^{N}\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right).$$
Adding the two parts proves (i), and hence (ii). Finally, we estimate, for $j = 1, \ldots, N$,
$$\left|C_n\!\left((\cdot-x)^{j}\right)(x)\right| \le \sum_{k=-\infty}^{\infty} G(nx-k)\left(n\int_{0}^{\frac{1}{n}}\left|t+\tfrac{k}{n}-x\right|^{j} dt\right) \le \sum_{k=-\infty}^{\infty} G(nx-k)\left(\left|\tfrac{k}{n}-x\right|+\tfrac{1}{n}\right)^{j}$$
$$\le \left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\sum_{k:\,|nx-k| \ge n^{1-\beta}} G(nx-k)\left(1+|nx-k|^{j}\right) \le \left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right),$$
which proves (iii) and (iv). The theorem is proved. □
Similarly, for the quadrature-type operators $D_n$, the following theorem holds.
Theorem 6.
Let $0 < \beta < 1$, $n \in \mathbb{N}$ with $n^{1-\beta} > 2$, $\mu > 0$, $N \in \mathbb{N}$, $f \in C^N(\mathbb{R})$ with $f^{(i)} \in C_B(\mathbb{R})$, $i = 0, 1, \ldots, N$; $x \in \mathbb{R}$. Then,
(i) 
$$\left|D_n(f,x) - f(x) - \sum_{j=1}^{N}\frac{f^{(j)}(x)}{j!}\,D_n\!\left((\cdot-x)^{j}\right)(x)\right| \le \omega_1\!\left(f^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!}$$
$$+\ \frac{2^{N}\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right) =: \Psi,$$
(ii) if all $f^{(j)}(x) = 0$, $j = 1, \ldots, N$, then
$$\left|D_n(f,x) - f(x)\right| \le \Psi,$$
at the high speed $n^{-\beta(N+1)}$,
(iii) 
$$\left|D_n(f,x) - f(x)\right| \le \sum_{j=1}^{N}\frac{\left|f^{(j)}(x)\right|}{j!}\left[\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)\right] + \Psi,$$
and
(iv) 
$$\left\|D_n f - f\right\|_{\infty} \le \sum_{j=1}^{N}\frac{\left\|f^{(j)}\right\|_{\infty}}{j!}\left[\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)\right] + \Psi.$$
Proof. 
By Taylor's theorem, for each $r = 0, 1, \ldots, \theta$,
$$f\!\left(\tfrac{k}{n}+\tfrac{r}{n\theta}\right) = \sum_{j=0}^{N}\frac{f^{(j)}(x)}{j!}\left(\tfrac{k}{n}+\tfrac{r}{n\theta}-x\right)^{j} + \int_{x}^{\frac{k}{n}+\frac{r}{n\theta}}\left(f^{(N)}(t)-f^{(N)}(x)\right)\frac{\left(\frac{k}{n}+\frac{r}{n\theta}-t\right)^{N-1}}{(N-1)!}\,dt.$$
Multiplying by $w_r$ and summing over $r$, then multiplying by $G(nx-k)$ and summing over $k$, we obtain
$$D_n(f,x) - f(x) - \sum_{j=1}^{N}\frac{f^{(j)}(x)}{j!}\,D_n\!\left((\cdot-x)^{j}\right)(x) = \bar{R}(x) := \sum_{k=-\infty}^{\infty} G(nx-k)\,\gamma_k,$$
where
$$\gamma_k := \sum_{r=0}^{\theta} w_r \int_{x}^{\frac{k}{n}+\frac{r}{n\theta}}\left(f^{(N)}(t)-f^{(N)}(x)\right)\frac{\left(\frac{k}{n}+\frac{r}{n\theta}-t\right)^{N-1}}{(N-1)!}\,dt.$$
(I) Let $\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}$. Since $0 \le \frac{r}{n\theta} \le \frac{1}{n}$, in both cases ($\frac{k}{n}+\frac{r}{n\theta} \ge x$ and $\frac{k}{n}+\frac{r}{n\theta} < x$),
$$|\gamma_k| \le \sum_{r=0}^{\theta} w_r\, \omega_1\!\left(f^{(N)}, \left|\tfrac{k}{n}+\tfrac{r}{n\theta}-x\right|\right)\frac{\left|\frac{k}{n}+\frac{r}{n\theta}-x\right|^{N}}{N!} \le \omega_1\!\left(f^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!},$$
so that
$$\left|\bar{R}(x)\right|_{\left\{\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}\right\}} \le \omega_1\!\left(f^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!}.$$
(II) Let $\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}$. In general, we obtain
$$|\gamma_k| \le 2\left\|f^{(N)}\right\|_{\infty}\frac{\left(\left|\frac{k}{n}-x\right|+\frac{1}{n}\right)^{N}}{N!} = \frac{2\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(|nx-k|+1\right)^{N} \le \frac{2^{N}\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(1+|nx-k|^{N}\right), \qquad k \in \mathbb{Z}.$$
As in the proof of Theorem 5, this gives
$$\left|\bar{R}(x)\right|_{\left\{\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}\right\}} \le \frac{2^{N}\left\|f^{(N)}\right\|_{\infty}}{n^{N}\,N!}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right),$$
and, adding the two parts, (i) follows, hence (ii). Finally, we estimate
$$\left|D_n\!\left((\cdot-x)^{j}\right)(x)\right| \le \sum_{k=-\infty}^{\infty} G(nx-k)\sum_{r=0}^{\theta} w_r\left|\tfrac{k}{n}+\tfrac{r}{n\theta}-x\right|^{j} \le \sum_{k=-\infty}^{\infty} G(nx-k)\left(\left|\tfrac{k}{n}-x\right|+\tfrac{1}{n}\right)^{j}$$
and, as in the proof of Theorem 5,
$$\le \left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right),$$
which proves (iii) and (iv). The theorem is proved. □
We need the following definition.
Definition 2.
A function $f : \mathbb{R} \to \mathbb{R}$ is absolutely continuous over $\mathbb{R}$ if $f|_{[a,b]}$ is absolutely continuous for every $[a,b] \subset \mathbb{R}$. We write $f \in AC^n(\mathbb{R})$, $n \in \mathbb{N}$, if $f^{(n-1)} \in AC(\mathbb{R})$ (the absolutely continuous functions over $\mathbb{R}$).
Definition 3.
Let $\nu \ge 0$, $n = \lceil \nu \rceil$ ($\lceil \cdot \rceil$ is the ceiling of the number), and $f \in AC^n(\mathbb{R})$. We use the left Caputo fractional derivative ([12,13,14], pp. 49–52), defined as the function
$$\left(D_{*a}^{\nu} f\right)(x) = \frac{1}{\Gamma(n-\nu)}\int_{a}^{x} (x-t)^{n-\nu-1} f^{(n)}(t)\,dt,$$
$x \in [a, \infty)$, $a \in \mathbb{R}$, where $\Gamma$ is the gamma function.
Note that $D_{*a}^{\nu} f \in L_1([a,b])$ and $D_{*a}^{\nu} f$ exists a.e. on $[a,b]$, for all $b > a$.
We set $D_{*a}^{0} f(x) = f(x)$, $\forall\, x \in [a, \infty)$.
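The left Caputo derivative of Definition 3 can be sketched by quadrature. The substitution $u = (x-t)^{\,n-\nu}$ used below to tame the kernel's endpoint singularity, and all numerical parameters, are our own choices; we check the result against the classical closed form $D_{*0}^{1/2}\,t = 2\sqrt{t/\pi}$:

```python
import math
import numpy as np

# (D^nu_{*a} f)(x) = 1/Gamma(n - nu) * int_a^x (x - t)^(n-nu-1) f^(n)(t) dt,
# with n = ceil(nu). Substituting u = (x - t)^p, p = n - nu, turns the
# singular kernel into a plain integral: I = (1/p) int_0^{(x-a)^p} g(x - u^(1/p)) du.

def caputo_left(dnf, a, x, nu, m=20000):
    """dnf: the n-th ordinary derivative of f, vectorized; n = ceil(nu)."""
    n = math.ceil(nu)
    p = n - nu
    u = (np.arange(m) + 0.5) / m * (x - a) ** p   # midpoint rule in u
    vals = dnf(x - u ** (1.0 / p))
    return vals.sum() * ((x - a) ** p / m) / (p * math.gamma(p))
```

For $f(t) = t$ we have $f'(t) \equiv 1$ and $D_{*0}^{1/2} t\big|_{t=1} = 2/\sqrt{\pi}$; for $f(t) = t^2$, $D_{*0}^{1/2} t^2 = \Gamma(3)/\Gamma(5/2)\, t^{3/2}$.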
Lemma 1 
(See also [15]). Let $\nu > 0$, $\nu \notin \mathbb{N}$, $n = \lceil \nu \rceil$, $f \in C^{n-1}(\mathbb{R})$ and $f^{(n)} \in L_{\infty}(\mathbb{R})$. Then, $\left(D_{*a}^{\nu} f\right)(a) = 0$ for any $a \in \mathbb{R}$.
Definition 4 
(See also [13,16,17]). Let $f \in AC^m(\mathbb{R})$, $m = \lceil \alpha \rceil$, $\alpha > 0$. The right Caputo fractional derivative of order $\alpha > 0$ is given by
$$\left(D_{b-}^{\alpha} f\right)(x) = \frac{(-1)^m}{\Gamma(m-\alpha)}\int_{x}^{b} (z-x)^{m-\alpha-1} f^{(m)}(z)\,dz,$$
$x \in (-\infty, b]$, $b \in \mathbb{R}$. We set $D_{b-}^{0} f(x) = f(x)$.
Note that $D_{b-}^{\alpha} f \in L_1([a,b])$ and $D_{b-}^{\alpha} f$ exists a.e. on $[a,b]$, for all $a < b$.
Lemma 2 
(See also [15]). Let $f \in C^{m-1}(\mathbb{R})$, $f^{(m)} \in L_{\infty}(\mathbb{R})$, $m = \lceil \alpha \rceil$, $\alpha > 0$. Then, $\left(D_{b-}^{\alpha} f\right)(b) = 0$ for any $b \in \mathbb{R}$.
Convention 1.
We assume that
$$\left(D_{*x_0}^{\alpha} f\right)(x) = 0 \ \text{ for } x < x_0, \qquad \text{and} \qquad \left(D_{x_0-}^{\alpha} f\right)(x) = 0 \ \text{ for } x > x_0.$$
Proposition 2 
(See also [15]). Let $f \in C^n(\mathbb{R})$, $n = \lceil \nu \rceil$, $\nu > 0$. Then, $\left(D_{*a}^{\nu} f\right)(x)$ is continuous in $x \in [a, \infty)$, for any $a \in \mathbb{R}$.
We also have the following.
Proposition 3 
(See also [15]). Let $f \in C^m(\mathbb{R})$, $m = \lceil \alpha \rceil$, $\alpha > 0$. Then, $\left(D_{b-}^{\alpha} f\right)(x)$ is continuous in $x \in (-\infty, b]$, for any $b \in \mathbb{R}$.
We further mention the following.
Proposition 4 
(See also [15]). Let $f \in C^{m-1}(\mathbb{R})$, $f^{(m)} \in L_{\infty}(\mathbb{R})$, $m = \lceil \alpha \rceil$, $\alpha > 0$, and let $x, x_0 \in \mathbb{R}$ with $x \ge x_0$. Then, $\left(D_{*x_0}^{\alpha} f\right)(x)$ is continuous in $x_0$.
Proposition 5 
(See also [15]). Let $f \in C^{m-1}(\mathbb{R})$, $f^{(m)} \in L_{\infty}(\mathbb{R})$, $m = \lceil \alpha \rceil$, $\alpha > 0$, and let $x, x_0 \in \mathbb{R}$ with $x \le x_0$. Then, $\left(D_{x_0-}^{\alpha} f\right)(x)$ is continuous in $x_0$.
Proposition 6 
(See also [15]). Let $f \in C^m(\mathbb{R})$, $m = \lceil \alpha \rceil$, $\alpha > 0$; $x, x_0 \in \mathbb{R}$. Then, $\left(D_{*x_0}^{\alpha} f\right)(x)$ and $\left(D_{x_0-}^{\alpha} f\right)(x)$ are jointly continuous functions in $(x, x_0)$ from $\mathbb{R}^2$ into $\mathbb{R}$.
The fractional results follow.
Theorem 7.
Let $\alpha > 0$, $N = \lceil \alpha \rceil$, $\alpha \notin \mathbb{N}$, $\mu > 0$, $f \in AC^N(\mathbb{R})$, $f^{(N)} \in L_{\infty}(\mathbb{R})$, $0 < \beta < 1$, $x \in \mathbb{R}$, $n \in \mathbb{N}$ with $n^{1-\beta} \ge 3$. Assume also that $\sup_{x \in \mathbb{R}} \left\|D_{x-}^{\alpha} f\right\|_{\infty, (-\infty, x]} < \infty$ and $\sup_{x \in \mathbb{R}} \left\|D_{*x}^{\alpha} f\right\|_{\infty, [x, \infty)} < \infty$. Then,
(I) 
$$\left|B_n(f,x) - f(x) - \sum_{j=1}^{N-1}\frac{f^{(j)}(x)}{j!}\,B_n\!\left((\cdot-x)^{j}\right)(x)\right| \le \frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\left[\omega_1\!\left(D_{x-}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \omega_1\!\left(D_{*x}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{1}{n^{\alpha}\,\Gamma(\alpha+1)}\cdot\frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\left\|D_{x-}^{\alpha} f\right\|_{\infty,(-\infty,x]} + \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,\infty)}\right] =: \theta,$$
(II) given $f^{(j)}(x) = 0$, $j = 1, \ldots, N-1$, we have
$$\left|B_n(f,x) - f(x)\right| \le \theta,$$
(III) 
$$\left|B_n(f,x) - f(x)\right| \le \sum_{j=1}^{N-1}\frac{\left|f^{(j)}(x)\right|}{j!}\left(\frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right) + \theta,$$
and
(IV) additionally assuming $\left\|f^{(j)}\right\|_{\infty} < \infty$, $j = 1, \ldots, N-1$, we obtain
$$\left\|B_n f - f\right\|_{\infty} \le \sum_{j=1}^{N-1}\frac{\left\|f^{(j)}\right\|_{\infty}}{j!}\left(\frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)$$
$$+\ \frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\left[\sup_{x \in \mathbb{R}}\omega_1\!\left(D_{x-}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \sup_{x \in \mathbb{R}}\omega_1\!\left(D_{*x}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{1}{n^{\alpha}\,\Gamma(\alpha+1)}\cdot\frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\sup_{x \in \mathbb{R}}\left\|D_{x-}^{\alpha} f\right\|_{\infty,(-\infty,x]} + \sup_{x \in \mathbb{R}}\left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,\infty)}\right].$$
When $N = 1$, the sum $\sum_{j=1}^{N-1}(\cdot) = 0$.
As we can see, we obtain fractional pointwise and uniform convergence with rates of $B_n \to I$ (the unit operator), as $n \to \infty$.
Proof. 
Let $x \in \mathbb{R}$. By Convention 1, $\left(D_{x-}^{\alpha} f\right)(x) = \left(D_{*x}^{\alpha} f\right)(x) = 0$.
From [12], p. 54, the left Caputo fractional Taylor formula gives
$$f\!\left(\tfrac{k}{n}\right) = \sum_{j=0}^{N-1}\frac{f^{(j)}(x)}{j!}\left(\tfrac{k}{n}-x\right)^{j} + \frac{1}{\Gamma(\alpha)}\int_{x}^{\frac{k}{n}}\left(\tfrac{k}{n}-J\right)^{\alpha-1}\left[\left(D_{*x}^{\alpha} f\right)(J) - \left(D_{*x}^{\alpha} f\right)(x)\right]dJ,$$
for all $x \le \frac{k}{n} < \infty$. Also, from [16], the right Caputo fractional Taylor formula gives
$$f\!\left(\tfrac{k}{n}\right) = \sum_{j=0}^{N-1}\frac{f^{(j)}(x)}{j!}\left(\tfrac{k}{n}-x\right)^{j} + \frac{1}{\Gamma(\alpha)}\int_{\frac{k}{n}}^{x}\left(J-\tfrac{k}{n}\right)^{\alpha-1}\left[\left(D_{x-}^{\alpha} f\right)(J) - \left(D_{x-}^{\alpha} f\right)(x)\right]dJ,$$
for all $-\infty < \frac{k}{n} \le x$.
We multiply by $G(nx-k)$ and sum: over $k \ge \lfloor nx \rfloor + 1$ in the first case and over $k \le \lfloor nx \rfloor$ in the second (note that $\lfloor nx \rfloor \le nx \le \lfloor nx \rfloor + 1$). Adding the two resulting equalities, we obtain
$$B_n(f,x) - f(x) - \sum_{j=1}^{N-1}\frac{f^{(j)}(x)}{j!}\,B_n\!\left((\cdot-x)^{j}\right)(x) = R_n(x) := R_{1n}(x) + R_{2n}(x),$$
with
$$R_{1n}(x) := \sum_{k=\lfloor nx \rfloor + 1}^{\infty}\frac{G(nx-k)}{\Gamma(\alpha)}\int_{x}^{\frac{k}{n}}\left(\tfrac{k}{n}-J\right)^{\alpha-1}\left[\left(D_{*x}^{\alpha} f\right)(J) - \left(D_{*x}^{\alpha} f\right)(x)\right]dJ,$$
$$R_{2n}(x) := \sum_{k=-\infty}^{\lfloor nx \rfloor}\frac{G(nx-k)}{\Gamma(\alpha)}\int_{\frac{k}{n}}^{x}\left(J-\tfrac{k}{n}\right)^{\alpha-1}\left[\left(D_{x-}^{\alpha} f\right)(J) - \left(D_{x-}^{\alpha} f\right)(x)\right]dJ.$$
Consider the generic terms
$$\delta_{1n}(x) := \frac{1}{\Gamma(\alpha)}\int_{x}^{\frac{k}{n}}\left(\tfrac{k}{n}-J\right)^{\alpha-1}\left[\left(D_{*x}^{\alpha} f\right)(J) - \left(D_{*x}^{\alpha} f\right)(x)\right]dJ, \qquad k \ge \lfloor nx \rfloor + 1,$$
$$\delta_{2n}(x) := \frac{1}{\Gamma(\alpha)}\int_{\frac{k}{n}}^{x}\left(J-\tfrac{k}{n}\right)^{\alpha-1}\left[\left(D_{x-}^{\alpha} f\right)(J) - \left(D_{x-}^{\alpha} f\right)(x)\right]dJ, \qquad k \le \lfloor nx \rfloor.$$
When $\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}$, we derive
$$\left|\delta_{1n}(x)\right| \le \omega_1\!\left(D_{*x}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}, \qquad \left|\delta_{2n}(x)\right| \le \omega_1\!\left(D_{x-}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]}\frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)},$$
and, in general,
$$\left|\delta_{1n}(x)\right| \le \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,\infty)}\frac{\left(\frac{k}{n}-x\right)^{\alpha}}{\Gamma(\alpha+1)}, \qquad \left|\delta_{2n}(x)\right| \le \left\|D_{x-}^{\alpha} f\right\|_{\infty,(-\infty,x]}\frac{\left(x-\frac{k}{n}\right)^{\alpha}}{\Gamma(\alpha+1)}.$$
Therefore, splitting each of $R_{1n}(x)$ and $R_{2n}(x)$ over $\left|\frac{k}{n}-x\right| < \frac{1}{n^{\beta}}$ and $\left|\frac{k}{n}-x\right| \ge \frac{1}{n^{\beta}}$, and using, on the tails (where $|nx-k|^{\alpha} \le |nx-k|^{N}$, since $N = \lceil \alpha \rceil$ and $|nx-k| \ge n^{1-\beta} \ge 3 > 1$), the estimate from the proof of Theorem 4,
$$\sum_{k:\,|nx-k| \ge n^{1-\beta}} G(nx-k)\,|nx-k|^{N} \le \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)},$$
we derive
$$\left|R_n(x)\right| \le \frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\left[\omega_1\!\left(D_{x-}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \omega_1\!\left(D_{*x}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{1}{n^{\alpha}\,\Gamma(\alpha+1)}\cdot\frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\left\|D_{x-}^{\alpha} f\right\|_{\infty,(-\infty,x]} + \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,\infty)}\right],$$
proving (I) and (II). As was shown in the proof of Theorem 4,
$$\left|B_n\!\left((\cdot-x)^{j}\right)(x)\right| \le \frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)},$$
which proves (III) and (IV). Finally, since
$$\left(D_{*x}^{\alpha} f\right)(s) = \frac{1}{\Gamma(N-\alpha)}\int_{x}^{s}(s-t)^{N-\alpha-1} f^{(N)}(t)\,dt \ \ (s \ge x), \qquad \left(D_{x-}^{\alpha} f\right)(s) = \frac{(-1)^{N}}{\Gamma(N-\alpha)}\int_{s}^{x}(t-s)^{N-\alpha-1} f^{(N)}(t)\,dt \ \ (s \le x),$$
for $x, s \in \mathbb{R}$, the following holds:
$$\left|\left(D_{*x}^{\alpha} f\right)(s)\right| \le \frac{\left\|f^{(N)}\right\|_{\infty}}{\Gamma(N-\alpha+1)}(s-x)^{N-\alpha}, \ \ s \ge x, \qquad \left|\left(D_{x-}^{\alpha} f\right)(s)\right| \le \frac{\left\|f^{(N)}\right\|_{\infty}}{\Gamma(N-\alpha+1)}(x-s)^{N-\alpha}, \ \ s \le x.$$
Thus, it is reasonable to assume $\sup_{x \in \mathbb{R}}\left\|D_{x-}^{\alpha} f\right\|_{\infty,(-\infty,x]}, \ \sup_{x \in \mathbb{R}}\left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,\infty)} < \infty$; consequently,
$$\sup_{x \in \mathbb{R}}\omega_1\!\left(D_{x-}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]}, \ \sup_{x \in \mathbb{R}}\omega_1\!\left(D_{*x}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)} < \infty.$$
The theorem is now proved. □

4. Applications for N = 1

We obtain the following results:
Corollary 1.
Let $0 < \beta < 1$, $n \in \mathbb{N}$ with $n^{1-\beta} > 2$, $\mu > 0$, $f \in C^1(\mathbb{R})$ with $f, f' \in C_B(\mathbb{R})$; $x \in \mathbb{R}$. Then,
(I) 
$$\left|B_n(f,x) - f(x) - f'(x)\,B_n(\cdot - x)(x)\right| \le \frac{\omega_1\!\left(f', \frac{1}{n^{\beta}}\right)}{n^{\beta}} + \frac{16\left\|f'\right\|_{\infty}\,e^{\mu}}{n\,\mu}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)} =: M_1,$$
(II) if $f'(x) = 0$, we have
$$\left|B_n(f,x) - f(x)\right| \le M_1,$$
at the high speed $n^{-2\beta}$,
(III) 
$$\left|B_n(f,x) - f(x)\right| \le \left|f'(x)\right|\left(\frac{1}{n^{\beta}} + \frac{1}{n}\cdot\frac{8\,e^{\mu}}{\mu}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right) + M_1,$$
and
(IV) 
$$\left\|B_n f - f\right\|_{\infty} \le \left\|f'\right\|_{\infty}\left(\frac{1}{n^{\beta}} + \frac{1}{n}\cdot\frac{8\,e^{\mu}}{\mu}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right) + M_1.$$
Proof. 
Use Theorem 4 for $N = 1$. □
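Corollary 1 (I) can be illustrated numerically. The parameter values below are our own choices satisfying $n^{1-\beta} > 2$, and we use the crude bound $\omega_1(f', \delta) \le \delta$ for $f = \sin$ (its derivative $\cos$ is 1-Lipschitz), which only enlarges the right-hand side:

```python
import numpy as np

# Check |B_n(f,x) - f(x) - f'(x) B_n((.-x))(x)| <= M_1 for f = sin,
# with illustrative mu, beta, n, x, and truncation K.

mu, beta, n, x = 0.7, 0.5, 64, 0.4          # n^(1-beta) = 8 > 2
phi = lambda t: 1.0 / (1.0 + np.exp(-mu * t))
G = lambda t: 0.5 * (phi(t + 1) - phi(t - 1))

K = 200
k = np.arange(int(n * x) - K, int(n * x) + K + 1)
w = G(n * x - k)                            # kernel weights

lhs = abs(np.sum(np.sin(k / n) * w) - np.sin(x)
          - np.cos(x) * np.sum((k / n - x) * w))
delta = n ** (-beta)
rhs = (delta * delta                        # omega_1(f', delta)/n^beta <= delta^2
       + 16 * 1.0 * np.e ** mu / (n * mu)
       * np.exp(-(mu / 2) * (n ** (1 - beta) - 1)))
```

The left-hand side comes out several orders of magnitude below the bound, reflecting that the Jackson-type estimate is worst-case.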
Corollary 2.
Let $0 < \beta < 1$, $n \in \mathbb{N}$ with $n^{1-\beta} > 2$, $\mu > 0$, $f \in C^1(\mathbb{R})$ with $f, f' \in C_B(\mathbb{R})$; $x \in \mathbb{R}$. Then,
(I) 
$$\left|C_n(f,x) - f(x) - f'(x)\,C_n(\cdot - x)(x)\right|, \ \left|D_n(f,x) - f(x) - f'(x)\,D_n(\cdot - x)(x)\right|$$
$$\le \omega_1\!\left(f', \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right) + \frac{2\left\|f'\right\|_{\infty}}{n}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{8\,e^{\mu}}{\mu}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right) =: \psi_1,$$
(II) if $f'(x) = 0$, we have
$$\left|C_n(f,x) - f(x)\right|, \ \left|D_n(f,x) - f(x)\right| \le \psi_1,$$
at the high speed $n^{-2\beta}$,
(III) 
$$\left|C_n(f,x) - f(x)\right|, \ \left|D_n(f,x) - f(x)\right| \le \left|f'(x)\right|\left[\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right) + \frac{1}{n}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{8\,e^{\mu}}{\mu}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)\right] + \psi_1,$$
and
(IV) 
$$\left\|C_n f - f\right\|_{\infty}, \ \left\|D_n f - f\right\|_{\infty} \le \left\|f'\right\|_{\infty}\left[\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right) + \frac{1}{n}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{8\,e^{\mu}}{\mu}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)\right] + \psi_1.$$
Proof. 
Use Theorems 5 and 6 for $N = 1$. □
Corollary 3.
Let $0 < \alpha < 1$, $\mu > 0$, $f \in AC^1(\mathbb{R})$, $f' \in L_{\infty}(\mathbb{R})$, $0 < \beta < 1$, $x \in \mathbb{R}$, $n \in \mathbb{N}$ with $n^{1-\beta} \ge 3$. Assume also that $\sup_{x \in \mathbb{R}}\left\|D_{x-}^{\alpha} f\right\|_{\infty,(-\infty,x]} < \infty$ and $\sup_{x \in \mathbb{R}}\left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,\infty)} < \infty$. Then,
(I) 
$$\left|B_n(f,x) - f(x)\right| \le \frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\left[\omega_1\!\left(D_{x-}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \omega_1\!\left(D_{*x}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{1}{n^{\alpha}\,\Gamma(\alpha+1)}\cdot\frac{8\,e^{\mu}}{\mu}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\left\|D_{x-}^{\alpha} f\right\|_{\infty,(-\infty,x]} + \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,\infty)}\right],$$
and
(II) 
$$\left\|B_n f - f\right\|_{\infty} \le \frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\left[\sup_{x \in \mathbb{R}}\omega_1\!\left(D_{x-}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \sup_{x \in \mathbb{R}}\omega_1\!\left(D_{*x}^{\alpha} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{1}{n^{\alpha}\,\Gamma(\alpha+1)}\cdot\frac{8\,e^{\mu}}{\mu}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\sup_{x \in \mathbb{R}}\left\|D_{x-}^{\alpha} f\right\|_{\infty,(-\infty,x]} + \sup_{x \in \mathbb{R}}\left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,\infty)}\right].$$
Proof. 
Use Theorem 7 for $N = 1$. □
Next is the case of $\alpha = \frac{1}{2}$.
Corollary 4.
Let $\mu > 0$, $f \in AC^1(\mathbb{R})$, $f' \in L_{\infty}(\mathbb{R})$, $0 < \beta < 1$, $x \in \mathbb{R}$, $n \in \mathbb{N}$ with $n^{1-\beta} \ge 3$. Assume that $\sup_{x \in \mathbb{R}}\left\|D_{x-}^{1/2} f\right\|_{\infty,(-\infty,x]} < \infty$ and $\sup_{x \in \mathbb{R}}\left\|D_{*x}^{1/2} f\right\|_{\infty,[x,\infty)} < \infty$. Then,
(I) 
$$\left|B_n(f,x) - f(x)\right| \le \frac{2}{n^{\beta/2}\sqrt{\pi}}\left[\omega_1\!\left(D_{x-}^{1/2} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \omega_1\!\left(D_{*x}^{1/2} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{16\,e^{\mu}}{\mu\sqrt{\pi n}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\left\|D_{x-}^{1/2} f\right\|_{\infty,(-\infty,x]} + \left\|D_{*x}^{1/2} f\right\|_{\infty,[x,\infty)}\right],$$
and
(II) 
$$\left\|B_n f - f\right\|_{\infty} \le \frac{2}{n^{\beta/2}\sqrt{\pi}}\left[\sup_{x \in \mathbb{R}}\omega_1\!\left(D_{x-}^{1/2} f, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \sup_{x \in \mathbb{R}}\omega_1\!\left(D_{*x}^{1/2} f, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{16\,e^{\mu}}{\mu\sqrt{\pi n}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\sup_{x \in \mathbb{R}}\left\|D_{x-}^{1/2} f\right\|_{\infty,(-\infty,x]} + \sup_{x \in \mathbb{R}}\left\|D_{*x}^{1/2} f\right\|_{\infty,[x,\infty)}\right].$$
Proof. 
Use Corollary 3 with $\alpha = \frac{1}{2}$. □
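For the record, the constants of Corollary 4 follow from Corollary 3 by substituting $\alpha = \tfrac{1}{2}$ and using $\Gamma(3/2) = \sqrt{\pi}/2$:

```latex
\frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\Bigg|_{\alpha = \frac{1}{2}}
  = \frac{1}{n^{\beta/2}\,\Gamma(3/2)} = \frac{2}{n^{\beta/2}\sqrt{\pi}},
\qquad
\frac{1}{n^{1/2}\,\Gamma(3/2)}\cdot\frac{8\,e^{\mu}}{\mu}
  = \frac{2}{\sqrt{\pi n}}\cdot\frac{8\,e^{\mu}}{\mu}
  = \frac{16\,e^{\mu}}{\mu\sqrt{\pi n}}.
```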

5. Complex Neural Network Approximation

Remark 3.
Let $f : \mathbb{R} \to \mathbb{C}$ with real and imaginary parts $f_1, f_2$: $f = f_1 + i f_2$, $i = \sqrt{-1}$. Clearly, $f$ is continuous iff $f_1$ and $f_2$ are continuous.
Also,
$$f^{(j)}(x) = f_1^{(j)}(x) + i f_2^{(j)}(x)$$
holds for all $j = 1, \ldots, N$, given that $f_1, f_2 \in C^N(\mathbb{R})$, $N \in \mathbb{N}$.
Here, the following are defined:
$$B_n(f, x) := B_n(f_1, x) + i\, B_n(f_2, x), \quad C_n(f, x) := C_n(f_1, x) + i\, C_n(f_2, x), \quad D_n(f, x) := D_n(f_1, x) + i\, D_n(f_2, x).$$
We observe that
$$\left|B_n(f,x) - f(x)\right| \le \left|B_n(f_1,x) - f_1(x)\right| + \left|B_n(f_2,x) - f_2(x)\right|, \qquad \left\|B_n f - f\right\|_{\infty} \le \left\|B_n f_1 - f_1\right\|_{\infty} + \left\|B_n f_2 - f_2\right\|_{\infty};$$
$$\left|C_n(f,x) - f(x)\right| \le \left|C_n(f_1,x) - f_1(x)\right| + \left|C_n(f_2,x) - f_2(x)\right|, \qquad \left\|C_n f - f\right\|_{\infty} \le \left\|C_n f_1 - f_1\right\|_{\infty} + \left\|C_n f_2 - f_2\right\|_{\infty};$$
and
$$\left|D_n(f,x) - f(x)\right| \le \left|D_n(f_1,x) - f_1(x)\right| + \left|D_n(f_2,x) - f_2(x)\right|, \qquad \left\|D_n f - f\right\|_{\infty} \le \left\|D_n f_1 - f_1\right\|_{\infty} + \left\|D_n f_2 - f_2\right\|_{\infty}.$$
By $C_B(\mathbb{R}, \mathbb{C})$ we denote the space of continuous and bounded functions $f : \mathbb{R} \to \mathbb{C}$. Clearly, $f$ is bounded iff both $f_1, f_2$ are bounded from $\mathbb{R}$ into $\mathbb{R}$, where $f = f_1 + i f_2$.
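The complex-valued operators just defined reduce to two real computations. A brief sketch with illustrative parameters, using $f(t) = e^{it}$ so that $f_1 = \cos$ and $f_2 = \sin$:

```python
import numpy as np

# B_n f = B_n f_1 + i B_n f_2, with the truncated real operator below.
# mu, n, and the truncation K are illustrative choices.

mu = 0.7
phi = lambda t: 1.0 / (1.0 + np.exp(-mu * t))
G = lambda x: 0.5 * (phi(x + 1) - phi(x - 1))

def B_n(f, x, n, K=80):
    k = np.arange(int(n * x) - K, int(n * x) + K + 1)
    return np.sum(f(k / n) * G(n * x - k))

def B_n_complex(f, x, n):
    f1 = lambda t: np.real(f(t))          # real part f_1
    f2 = lambda t: np.imag(f(t))          # imaginary part f_2
    return B_n(f1, x, n) + 1j * B_n(f2, x, n)

f = lambda t: np.exp(1j * t)
approx = B_n_complex(f, 0.5, 128)
```

The triangle inequality of Remark 3 guarantees that the complex error is at most the sum of the two real ones.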
Theorem 8.
Let $f : \mathbb{R} \to \mathbb{C}$ with $f = f_1 + i f_2$. Assume $f_1, f_2 \in C^N(\mathbb{R})$, $N \in \mathbb{N}$, with $f_1^{(i)}, f_2^{(i)} \in C_B(\mathbb{R})$, $i = 0, 1, \ldots, N$; $x \in \mathbb{R}$. Here, $0 < \beta < 1$, $n \in \mathbb{N}$ with $n^{1-\beta} > 2$, $\mu > 0$. Then,
(I) 
$$\left|B_n(f,x) - f(x)\right| \le \sum_{j=1}^{N}\frac{\left|f_1^{(j)}(x)\right| + \left|f_2^{(j)}(x)\right|}{j!}\left(\frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)$$
$$+\ \frac{\omega_1\!\left(f_1^{(N)}, \frac{1}{n^{\beta}}\right) + \omega_1\!\left(f_2^{(N)}, \frac{1}{n^{\beta}}\right)}{n^{\beta N}\,N!} + \left(\left\|f_1^{(N)}\right\|_{\infty} + \left\|f_2^{(N)}\right\|_{\infty}\right)\frac{e^{\mu}\,2^{N+3}}{n^{N}\,\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)},$$
(II) given $f_1^{(j)}(x) = f_2^{(j)}(x) = 0$, $j = 1, \ldots, N$, we have
$$\left|B_n(f,x) - f(x)\right| \le \frac{\omega_1\!\left(f_1^{(N)}, \frac{1}{n^{\beta}}\right) + \omega_1\!\left(f_2^{(N)}, \frac{1}{n^{\beta}}\right)}{n^{\beta N}\,N!} + \left(\left\|f_1^{(N)}\right\|_{\infty} + \left\|f_2^{(N)}\right\|_{\infty}\right)\frac{e^{\mu}\,2^{N+3}}{n^{N}\,\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)},$$
(III) 
$$\left\|B_n f - f\right\|_{\infty} \le \sum_{j=1}^{N}\frac{\left\|f_1^{(j)}\right\|_{\infty} + \left\|f_2^{(j)}\right\|_{\infty}}{j!}\left(\frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)$$
$$+\ \frac{\omega_1\!\left(f_1^{(N)}, \frac{1}{n^{\beta}}\right) + \omega_1\!\left(f_2^{(N)}, \frac{1}{n^{\beta}}\right)}{n^{\beta N}\,N!} + \left(\left\|f_1^{(N)}\right\|_{\infty} + \left\|f_2^{(N)}\right\|_{\infty}\right)\frac{e^{\mu}\,2^{N+3}}{n^{N}\,\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}.$$
Proof. 
By Theorem 4. □
Theorem 9.
All assumptions are as in Theorem 8. Then,
(I) 
$$\left|C_n(f,x) - f(x)\right|, \ \left|D_n(f,x) - f(x)\right| \le \sum_{j=1}^{N}\frac{\left|f_1^{(j)}(x)\right| + \left|f_2^{(j)}(x)\right|}{j!}\left[\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)\right]$$
$$+\ \left[\omega_1\!\left(f_1^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right) + \omega_1\!\left(f_2^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\right]\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!}$$
$$+\ \frac{2^{N}\left(\left\|f_1^{(N)}\right\|_{\infty} + \left\|f_2^{(N)}\right\|_{\infty}\right)}{n^{N}\,N!}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right),$$
(II) given $f_1^{(j)}(x) = f_2^{(j)}(x) = 0$, $j = 1, \ldots, N$, we have
$$\left|C_n(f,x) - f(x)\right|, \ \left|D_n(f,x) - f(x)\right| \le \left[\omega_1\!\left(f_1^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right) + \omega_1\!\left(f_2^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\right]\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!}$$
$$+\ \frac{2^{N}\left(\left\|f_1^{(N)}\right\|_{\infty} + \left\|f_2^{(N)}\right\|_{\infty}\right)}{n^{N}\,N!}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right),$$
(III) 
$$\left\|C_n f - f\right\|_{\infty}, \ \left\|D_n f - f\right\|_{\infty} \le \sum_{j=1}^{N}\frac{\left\|f_1^{(j)}\right\|_{\infty} + \left\|f_2^{(j)}\right\|_{\infty}}{j!}\left[\left(\tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)^{j} + \frac{2^{j-1}}{n^{j}}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)\right]$$
$$+\ \left[\omega_1\!\left(f_1^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right) + \omega_1\!\left(f_2^{(N)}, \tfrac{1}{n}+\tfrac{1}{n^{\beta}}\right)\right]\frac{\left(\frac{1}{n}+\frac{1}{n^{\beta}}\right)^{N}}{N!}$$
$$+\ \frac{2^{N}\left(\left\|f_1^{(N)}\right\|_{\infty} + \left\|f_2^{(N)}\right\|_{\infty}\right)}{n^{N}\,N!}\left(2\,e^{\mu}\,e^{-\mu\left(n^{1-\beta}-1\right)} + \frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right).$$
Proof. 
Use Theorems 5 and 6. □
We finish with the following fractional result.
Theorem 10.
Let $f : \mathbb{R} \to \mathbb{C}$ with $f = f_1 + i f_2$. Assume $f_1, f_2 \in AC^N(\mathbb{R})$, $f_1^{(N)}, f_2^{(N)} \in L_{\infty}(\mathbb{R})$, $N \in \mathbb{N}$. Here, $\alpha > 0$, $N = \lceil \alpha \rceil$, $\alpha \notin \mathbb{N}$, $\mu > 0$, $0 < \beta < 1$, $x \in \mathbb{R}$, $n \in \mathbb{N}$ with $n^{1-\beta} \ge 3$. Suppose also that $\sup_{x \in \mathbb{R}}\left\|D_{x-}^{\alpha} f_1\right\|_{\infty,(-\infty,x]}$, $\sup_{x \in \mathbb{R}}\left\|D_{x-}^{\alpha} f_2\right\|_{\infty,(-\infty,x]}$, $\sup_{x \in \mathbb{R}}\left\|D_{*x}^{\alpha} f_1\right\|_{\infty,[x,\infty)}$, $\sup_{x \in \mathbb{R}}\left\|D_{*x}^{\alpha} f_2\right\|_{\infty,[x,\infty)} < \infty$. Then,
(I) 
$$\left|B_n(f,x) - f(x)\right| \le \sum_{j=1}^{N-1}\frac{\left|f_1^{(j)}(x)\right| + \left|f_2^{(j)}(x)\right|}{j!}\left(\frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)$$
$$+\ \frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\left[\omega_1\!\left(D_{x-}^{\alpha} f_1, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \omega_1\!\left(D_{x-}^{\alpha} f_2, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \omega_1\!\left(D_{*x}^{\alpha} f_1, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)} + \omega_1\!\left(D_{*x}^{\alpha} f_2, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{1}{n^{\alpha}\,\Gamma(\alpha+1)}\cdot\frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\left\|D_{x-}^{\alpha} f_1\right\|_{\infty,(-\infty,x]} + \left\|D_{x-}^{\alpha} f_2\right\|_{\infty,(-\infty,x]} + \left\|D_{*x}^{\alpha} f_1\right\|_{\infty,[x,\infty)} + \left\|D_{*x}^{\alpha} f_2\right\|_{\infty,[x,\infty)}\right],$$
(II) given $f_1^{(j)}(x) = f_2^{(j)}(x) = 0$, $j = 1, \ldots, N-1$, we have
$$\left|B_n(f,x) - f(x)\right| \le \frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\left[\omega_1\!\left(D_{x-}^{\alpha} f_1, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \omega_1\!\left(D_{x-}^{\alpha} f_2, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \omega_1\!\left(D_{*x}^{\alpha} f_1, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)} + \omega_1\!\left(D_{*x}^{\alpha} f_2, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{1}{n^{\alpha}\,\Gamma(\alpha+1)}\cdot\frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\left\|D_{x-}^{\alpha} f_1\right\|_{\infty,(-\infty,x]} + \left\|D_{x-}^{\alpha} f_2\right\|_{\infty,(-\infty,x]} + \left\|D_{*x}^{\alpha} f_1\right\|_{\infty,[x,\infty)} + \left\|D_{*x}^{\alpha} f_2\right\|_{\infty,[x,\infty)}\right],$$
and
(III) 
$$\left\|B_n f - f\right\|_{\infty} \le \sum_{j=1}^{N-1}\frac{\left\|f_1^{(j)}\right\|_{\infty} + \left\|f_2^{(j)}\right\|_{\infty}}{j!}\left(\frac{1}{n^{\beta j}} + \frac{1}{n^{j}}\cdot\frac{e^{\mu}\,2^{j+2}\,j!}{\mu^{j}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\right)$$
$$+\ \frac{1}{n^{\alpha\beta}\,\Gamma(\alpha+1)}\left[\sup_{x \in \mathbb{R}}\omega_1\!\left(D_{x-}^{\alpha} f_1, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \sup_{x \in \mathbb{R}}\omega_1\!\left(D_{x-}^{\alpha} f_2, \tfrac{1}{n^{\beta}}\right)_{(-\infty,x]} + \sup_{x \in \mathbb{R}}\omega_1\!\left(D_{*x}^{\alpha} f_1, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)} + \sup_{x \in \mathbb{R}}\omega_1\!\left(D_{*x}^{\alpha} f_2, \tfrac{1}{n^{\beta}}\right)_{[x,\infty)}\right]$$
$$+\ \frac{1}{n^{\alpha}\,\Gamma(\alpha+1)}\cdot\frac{e^{\mu}\,2^{N+2}\,N!}{\mu^{N}}\,e^{-\frac{\mu}{2}\left(n^{1-\beta}-1\right)}\left[\sup_{x \in \mathbb{R}}\left\|D_{x-}^{\alpha} f_1\right\|_{\infty,(-\infty,x]} + \sup_{x \in \mathbb{R}}\left\|D_{x-}^{\alpha} f_2\right\|_{\infty,(-\infty,x]} + \sup_{x \in \mathbb{R}}\left\|D_{*x}^{\alpha} f_1\right\|_{\infty,[x,\infty)} + \sup_{x \in \mathbb{R}}\left\|D_{*x}^{\alpha} f_2\right\|_{\infty,[x,\infty)}\right].$$
Proof. 
By Theorem 7. □
Conclusions: The author used parametrized Richards-curve-activated neural network approximations for differentiable functions from $\mathbb{R}$ into $\mathbb{R}$, going beyond functions on bounded domains. He presented real and complex, ordinary and fractional, quasi-interpolation quantitative approximations. The results are totally new.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Anastassiou, G.A. Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 1997, 212, 237–262. [Google Scholar] [CrossRef]
  2. Anastassiou, G.A. Quantitative Approximations; Chapman & Hall/CRC: Boca Raton, FL, USA; New York, NY, USA, 2001. [Google Scholar]
  3. Chen, Z.; Cao, F. The approximation operators with sigmoidal functions. Comput. Math. Appl. 2009, 58, 758–765. [Google Scholar] [CrossRef]
  4. Anastassiou, G.A. Intelligent Systems: Approximation by Artificial Neural Networks; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2011; Volume 19. [Google Scholar]
  5. Anastassiou, G.A. Intelligent Systems II: Complete Approximation by Neural Network Operators; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2016. [Google Scholar]
  6. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998. [Google Scholar]
  7. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 7, 115–133. [Google Scholar] [CrossRef]
  8. Mitchell, T.M. Machine Learning; WCB-McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  9. Anastassiou, G.A. Parametrized, Deformed and General Neural Networks; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2023. [Google Scholar]
  10. Richards, F.J. A Flexible Growth Function for Empirical Use. J. Exp. Bot. 1959, 10, 290–300. [Google Scholar] [CrossRef]
  11. Lee, S.Y.; Lei, B.; Mallick, B. Estimation of COVID-19 spread curves integrating global data and borrowing information. PLoS ONE 2020, 15, e0236860. [Google Scholar] [CrossRef]
  12. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics 2004; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  13. Frederico, G.S.; Torres, D.F.M. Fractional Optimal Control in the sense of Caputo and the fractional Noether’s theorem. Int. Math. Forum 2008, 3, 479–493. [Google Scholar]
  14. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives, Theory and Applications; English translation from the Russian, Integrals and Derivatives of Fractional Order and Some of Their Applications (Nauka i Tekhnika, Minsk, 1987); Gordon and Breach: Amsterdam, The Netherlands, 1993. [Google Scholar]
  15. Anastassiou, G.A. Fractional Korovkin theory. Chaos Solitons Fractals 2009, 42, 2080–2094. [Google Scholar] [CrossRef]
  16. Anastassiou, G.A. On Right Fractional Calculus. Chaos Solitons Fractals 2009, 42, 365–376. [Google Scholar] [CrossRef]
  17. El-Sayed, A.M.A.; Gaber, M. On the finite Caputo and finite Riesz derivatives. Electron. J. Theor. Phys. 2006, 3, 81–95. [Google Scholar]