Article

Neural Network Approximation for Time Splitting Random Functions

by George A. Anastassiou 1,* and Dimitra Kouloumpou 2
1 Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
2 Section of Mathematics, Hellenic Naval Academy, 18539 Piraeus, Greece
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2183; https://doi.org/10.3390/math11092183
Submission received: 12 March 2023 / Revised: 26 April 2023 / Accepted: 4 May 2023 / Published: 5 May 2023
(This article belongs to the Special Issue New Advance of Mathematical Economics)

Abstract: In this article we present the multivariate approximation of time splitting random functions defined on a box or on $\mathbb{R}^N$, $N \in \mathbb{N}$, by neural network operators of quasi-interpolation type. We achieve these approximations by obtaining quantitative-type Jackson inequalities involving the multivariate modulus of continuity of a related random function or of its high-order partial derivatives. We use density functions to define our operators; these derive from the logistic and hyperbolic tangent sigmoid activation functions. Our convergences are both point-wise and uniform. The engaged feed-forward neural networks possess one hidden layer. We finish the article with a wide variety of applications.

1. Introduction

The first author, in [1,2] (see Sections 2-5 there), was the first to establish neural network approximations of continuous functions with rates, using specifically defined neural network operators of Cardaliaguet-Euvrard and "squashing" types and employing the modulus of continuity of the engaged function or of its high-order derivative to produce very tight Jackson-type inequalities. Both the univariate and the multivariate cases were treated. The "bell-shaped" and "squashing" functions defining these operators are assumed to be of compact support. Furthermore, [2] gives the $N$th-order asymptotic expansion for the weak approximation error of these two operators applied to a special natural class of smooth functions; see Sections 4 and 5 there. Motivated by the work in [3], the first author continued these studies on neural network approximation by introducing and using sigmoidal and hyperbolic tangent proper quasi-interpolation operators [4,5,6,7,8], again treating both the univariate and multivariate cases. Here, we are inspired by the work in [9,10]. In this article we study quantitative neural network approximations of continuous functions on a box or on $\mathbb{R}^N$, $N \in \mathbb{N}$, by linear operators built from the logistic and hyperbolic tangent sigmoid activation functions. The degrees of approximation are given with rates through appropriately related moduli of continuity. We finish by providing applications of this work. We have been mostly motivated by:
  • Stationary Gaussian processes, represented by
    $$Y_t = \cos(\lambda t)\,\xi_1 + \sin(\lambda t)\,\xi_2, \quad \lambda \in \mathbb{R},$$
    where $\xi_1$ and $\xi_2$ are independent random variables with the standard normal distribution; see [11].
  • The "Fourier model" of a stationary process; see [12].
The one-hidden-layer feed-forward neural networks (FNNs) used in this work can be mathematically expressed as
$$M_n(x) = \sum_{j=0}^{n} \lambda_j\, \sigma^{*}\!\bigl(\langle z_j, x\rangle + p_j\bigr), \quad x \in \mathbb{R}^m,\; m \in \mathbb{N},$$
where for $0 \le j \le n$, the $p_j \in \mathbb{R}$ are thresholds, the $z_j \in \mathbb{R}^m$ are connecting weights, the $\lambda_j \in \mathbb{R}$ are coefficients, $\langle z_j, x\rangle$ is the inner product of $z_j$ and $x$, and $\sigma^{*}$ is the activation function of the network. In the vast literature on neural networks, various activation functions are used according to the needs of the study. Ours are very typical: the logistic and hyperbolic tangent sigmoid functions. For further general reading on neural networks one may consult [13,14,15]. In [16], the first author presents a complete study of real-valued approximation by neural network operators.
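To fix ideas, the following Python sketch (our own illustration; the weights $z_j$, thresholds $p_j$, and coefficients $\lambda_j$ below are arbitrary) evaluates such a one-hidden-layer network with the logistic activation.

```python
import numpy as np

# Minimal sketch of M_n(x) = sum_j lam_j * sigma(<z_j, x> + p_j).
# All parameter values (z, p, lam) are illustrative; the paper prescribes none.

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))            # the logistic sigmoid sigma*

def M_n(x, z, p, lam, sigma=logistic):
    """One-hidden-layer FNN at x in R^m; z: (n+1, m), p: (n+1,), lam: (n+1,)."""
    hidden = sigma(z @ x + p)                   # inner products <z_j, x> + p_j, then activation
    return float(lam @ hidden)                  # linear read-out

# toy usage with random parameters
rng = np.random.default_rng(0)
m, n = 3, 10
x = rng.normal(size=m)
print(M_n(x, rng.normal(size=(n + 1, m)), rng.normal(size=n + 1), rng.normal(size=n + 1)))
```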

2. Background

2.1. Logistic Sigmoid Activation Function

Here, we follow [7], considering sigmoidal functions of logarithmic type,
$$s_i(x_i) = \frac{1}{1+e^{-x_i}}, \quad x_i \in \mathbb{R},\; i=1,\ldots,N; \qquad x := (x_1,\ldots,x_N) \in \mathbb{R}^N,$$
each with the properties $\lim_{x_i \to +\infty} s_i(x_i) = 1$ and $\lim_{x_i \to -\infty} s_i(x_i) = 0$, $i=1,\ldots,N$.
These functions play the role of activation functions in the hidden layer of neural networks, with additional applications in biology, demography, etc. [17,18].
As in [3], we consider
$$\Phi_i(x_i) := \tfrac{1}{2}\bigl(s_i(x_i+1) - s_i(x_i-1)\bigr), \quad x_i \in \mathbb{R},\; i=1,\ldots,N.$$
We notice the following properties:
(i) $\Phi_i(x_i) > 0$, ∀ $x_i \in \mathbb{R}$;
(ii) $\sum_{k_i=-\infty}^{\infty} \Phi_i(x_i - k_i) = 1$, ∀ $x_i \in \mathbb{R}$;
(iii) $\sum_{k_i=-\infty}^{\infty} \Phi_i(n x_i - k_i) = 1$, ∀ $x_i \in \mathbb{R}$, $n \in \mathbb{N}$;
(iv) $\int_{-\infty}^{\infty} \Phi_i(x_i)\,dx_i = 1$;
(v) $\Phi_i$ is a density function;
(vi) $\Phi_i$ is even: $\Phi_i(-x_i) = \Phi_i(x_i)$, $x_i \ge 0$, for $i=1,\ldots,N$.
We see that
$$\Phi_i(x_i) = \frac{e^2-1}{2e^2}\cdot\frac{1}{\bigl(1+e^{x_i-1}\bigr)\bigl(1+e^{-x_i-1}\bigr)}, \quad i=1,\ldots,N.$$
(vii) $\Phi_i$ is decreasing on $\mathbb{R}_+$ and increasing on $\mathbb{R}_-$, $i=1,\ldots,N$.
Let $0<\beta<1$ and $n\in\mathbb{N}$. Then, as in [8], we obtain
(viii) $\displaystyle\sum_{\substack{k_i=-\infty \\ |n x_i - k_i| > n^{1-\beta}}}^{\infty} \Phi_i(n x_i - k_i) \le 3.1992\, e^{-n^{1-\beta}}, \quad i=1,\ldots,N.$
Denote by $\lceil\cdot\rceil$ the ceiling of a number and by $\lfloor\cdot\rfloor$ the integral part of a number. Consider $x\in\prod_{i=1}^{N}[a_i,b_i]\subset\mathbb{R}^N$, $N\in\mathbb{N}$, such that $\lceil n a_i\rceil \le \lfloor n b_i\rfloor$, $i=1,\ldots,N$; $a := (a_1,\ldots,a_N)$, $b := (b_1,\ldots,b_N)$.
We obtain
(ix) $\displaystyle 0 < \frac{1}{\sum_{k_i=\lceil n a_i\rceil}^{\lfloor n b_i\rfloor} \Phi_i(n x_i - k_i)} < \frac{1}{\Phi_i(1)} = 5.250312578,$
∀ $x_i\in[a_i,b_i]$, $i=1,\ldots,N$.
(x) As in [8], we see that
$$\lim_{n\to\infty} \sum_{k_i=\lceil n a_i\rceil}^{\lfloor n b_i\rfloor} \Phi_i(n x_i - k_i) \ne 1,$$
for at least some $x_i\in[a_i,b_i]$, $i=1,\ldots,N$.
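To make the univariate building block concrete, the following Python sketch (ours, purely illustrative) implements $\Phi$ from its definition $\Phi(x)=\tfrac12\bigl(s(x+1)-s(x-1)\bigr)$ and numerically checks the partition-of-unity property (ii), the normalization (iv), and the size of the constant $1/\Phi(1)$ from (ix).

```python
import numpy as np

# Illustrative checks for the logistic density Phi (our own sketch).
def s(x):
    return 1.0 / (1.0 + np.exp(-x))            # logistic sigmoid s(x)

def Phi(x):
    return 0.5 * (s(x + 1.0) - s(x - 1.0))     # Phi(x) = (1/2)(s(x+1) - s(x-1))

x = 0.37                                       # arbitrary test point
k = np.arange(-200, 201)                       # truncation of the bilateral series
print(np.sum(Phi(x - k)))                      # ~1.0, property (ii): partition of unity
grid = np.linspace(-30.0, 30.0, 60001)
print(np.trapz(Phi(grid), grid))               # ~1.0, property (iv): unit integral
print(1.0 / Phi(1.0))                          # ~5.25, compare the constant in (ix)
```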
We will use here
$$\Phi(x_1,\ldots,x_N) := \Phi(x) := \prod_{i=1}^{N} \Phi_i(x_i) =: T_1(x), \quad x \in \mathbb{R}^N.$$
It has the properties:
(i)' $\Phi(x) > 0$, ∀ $x\in\mathbb{R}^N$.
We see that
$$\sum_{k_1=-\infty}^{\infty}\sum_{k_2=-\infty}^{\infty}\cdots\sum_{k_N=-\infty}^{\infty} \Phi(x_1-k_1, x_2-k_2, \ldots, x_N-k_N) = \prod_{i=1}^{N}\left(\sum_{k_i=-\infty}^{\infty}\Phi_i(x_i-k_i)\right) = 1.$$
That is,
(ii)' $\displaystyle\sum_{k=-\infty}^{\infty} \Phi(x-k) := \sum_{k_1=-\infty}^{\infty}\cdots\sum_{k_N=-\infty}^{\infty}\Phi(x_1-k_1,\ldots,x_N-k_N) = 1,$ where $k := (k_1,\ldots,k_N)$, ∀ $x\in\mathbb{R}^N$.
(iii)' $\displaystyle\sum_{k=-\infty}^{\infty}\Phi(nx-k) := \sum_{k_1=-\infty}^{\infty}\cdots\sum_{k_N=-\infty}^{\infty}\Phi(nx_1-k_1,\ldots,nx_N-k_N) = 1,$ ∀ $x\in\mathbb{R}^N$, $n\in\mathbb{N}$.
(iv)' $\displaystyle\int_{\mathbb{R}^N}\Phi(x)\,dx = 1,$ that is, $\Phi$ is a multivariate density function.
Here, $\|x\|_\infty := \max\{|x_1|,\ldots,|x_N|\}$, $x\in\mathbb{R}^N$; we also set $\infty := (\infty,\ldots,\infty)$, $-\infty := (-\infty,\ldots,-\infty)$ in the multivariate context, and
$$\lceil na\rceil := \bigl(\lceil na_1\rceil,\ldots,\lceil na_N\rceil\bigr), \qquad \lfloor nb\rfloor := \bigl(\lfloor nb_1\rfloor,\ldots,\lfloor nb_N\rfloor\bigr).$$
For $0<\beta<1$ and $n\in\mathbb{N}$, a fixed $x\in\mathbb{R}^N$ gives
$$\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k) = \sum_{\substack{k=\lceil na\rceil \\ \left\|\frac{k}{n}-x\right\|_\infty \le \frac{1}{n^\beta}}}^{\lfloor nb\rfloor}\Phi(nx-k) \;+\; \sum_{\substack{k=\lceil na\rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb\rfloor}\Phi(nx-k).$$
In the last two sums the counting is over disjoint sets of vectors $k$, because the condition $\left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}$ implies that there exists at least one $r\in\{1,\ldots,N\}$ with $\left|\frac{k_r}{n}-x_r\right| > \frac{1}{n^\beta}$.
We treat
$$\sum_{\substack{k=\lceil na\rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb\rfloor}\Phi(nx-k) \le \left(\prod_{\substack{i=1\\ i\ne r}}^{N}\sum_{k_i=-\infty}^{\infty}\Phi_i(nx_i-k_i)\right)\cdot\sum_{\substack{k_r=\lceil na_r\rceil \\ \left|\frac{k_r}{n}-x_r\right| > \frac{1}{n^\beta}}}^{\lfloor nb_r\rfloor}\Phi_r(nx_r-k_r)$$
$$= \sum_{\substack{k_r=\lceil na_r\rceil \\ \left|\frac{k_r}{n}-x_r\right| > \frac{1}{n^\beta}}}^{\lfloor nb_r\rfloor}\Phi_r(nx_r-k_r) \le \sum_{\substack{k_r=-\infty \\ \left|\frac{k_r}{n}-x_r\right| > \frac{1}{n^\beta}}}^{\infty}\Phi_r(nx_r-k_r) \overset{\text{(by (viii))}}{\le} 3.1992\,e^{-n^{1-\beta}}.$$
We have proven that
(v)' $\displaystyle\sum_{\substack{k=\lceil na\rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb\rfloor}\Phi(nx-k) \le 3.1992\,e^{-n^{1-\beta}},$ for $0<\beta<1$, $n\in\mathbb{N}$, $x\in\prod_{i=1}^{N}[a_i,b_i]$.
By (ix), we obtain
$$0 < \frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k)} = \frac{1}{\prod_{i=1}^{N}\sum_{k_i=\lceil na_i\rceil}^{\lfloor nb_i\rfloor}\Phi_i(nx_i-k_i)} < \frac{1}{\prod_{i=1}^{N}\Phi_i(1)} = (5.250312578)^N.$$
That is,
(vi)' it holds that
$$0 < \frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k)} < (5.250312578)^N,$$
∀ $x\in\prod_{i=1}^{N}[a_i,b_i]$, $n\in\mathbb{N}$.
It is clear that
(vii)' $\displaystyle\sum_{\substack{k=-\infty \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\infty}\Phi(nx-k) \le 3.1992\,e^{-n^{1-\beta}},$ for $0<\beta<1$, $n\in\mathbb{N}$, $x\in\mathbb{R}^N$.
From (x), we see that
(viii)' $\displaystyle\lim_{n\to\infty}\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k) \ne 1,$
for some $x\in\prod_{i=1}^{N}[a_i,b_i]$.
Let $f \in C\left(\prod_{i=1}^{N}[a_i,b_i]\right)$ and $n\in\mathbb{N}$ such that $\lceil na_i\rceil \le \lfloor nb_i\rfloor$, $i=1,\ldots,N$.
We introduce and define the multivariate positive linear neural network operator ($x := (x_1,\ldots,x_N)\in\prod_{i=1}^{N}[a_i,b_i]$)
$$G_n(f, x_1,\ldots,x_N) := G_n(f,x) := \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} f\!\left(\frac{k}{n}\right)\Phi(nx-k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k)}$$
$$:= \frac{\sum_{k_1=\lceil na_1\rceil}^{\lfloor nb_1\rfloor}\sum_{k_2=\lceil na_2\rceil}^{\lfloor nb_2\rfloor}\cdots\sum_{k_N=\lceil na_N\rceil}^{\lfloor nb_N\rfloor} f\!\left(\frac{k_1}{n},\ldots,\frac{k_N}{n}\right)\prod_{i=1}^{N}\Phi_i(nx_i-k_i)}{\prod_{i=1}^{N}\sum_{k_i=\lceil na_i\rceil}^{\lfloor nb_i\rfloor}\Phi_i(nx_i-k_i)}.$$
For large enough $n$ we always obtain $\lceil na_i\rceil \le \lfloor nb_i\rfloor$, $i=1,\ldots,N$. Furthermore, $a_i \le \frac{k_i}{n} \le b_i$ iff $\lceil na_i\rceil \le k_i \le \lfloor nb_i\rfloor$, $i=1,\ldots,N$.
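The following Python sketch (our own illustration; the paper contains no code) evaluates $G_n(f,x)$ directly from the definition above and shows the error decreasing in $n$ for a smooth test function on $[0,1]^2$.

```python
import itertools
import numpy as np

# Illustrative evaluation of G_n(f, x) on a box prod_i [a_i, b_i] (our own sketch).
def s(u):
    return 1.0 / (1.0 + np.exp(-u))

def Phi(u):
    return 0.5 * (s(u + 1.0) - s(u - 1.0))      # univariate logistic density

def G_n(f, x, a, b, n):
    """f: callable on R^N; x, a, b: length-N arrays; n: positive integer."""
    x, a, b = map(np.asarray, (x, a, b))
    ranges = [range(int(np.ceil(n * ai)), int(np.floor(n * bi)) + 1)
              for ai, bi in zip(a, b)]           # k_i runs over ceil(n a_i) .. floor(n b_i)
    num, den = 0.0, 0.0
    for k in itertools.product(*ranges):
        w = np.prod(Phi(n * x - np.array(k)))    # Phi(nx - k) = prod_i Phi(n x_i - k_i)
        num += f(np.array(k) / n) * w
        den += w
    return num / den

# usage: approximate f(x, y) = sin(x + y) on [0, 1]^2 at a fixed point
f = lambda t: np.sin(t[0] + t[1])
x = np.array([0.3, 0.7])
for n in (10, 50, 200):
    print(n, abs(G_n(f, x, [0, 0], [1, 1], n) - f(x)))   # the error decreases as n grows
```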
Here, we study the point-wise and uniform convergence of $G_n(f)$ to $f$ with rates.
For convenience, we call
$$G_n^{*}(f,x) := \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} f\!\left(\frac{k}{n}\right)\Phi(nx-k)$$
$$:= \sum_{k_1=\lceil na_1\rceil}^{\lfloor nb_1\rfloor}\sum_{k_2=\lceil na_2\rceil}^{\lfloor nb_2\rfloor}\cdots\sum_{k_N=\lceil na_N\rceil}^{\lfloor nb_N\rfloor} f\!\left(\frac{k_1}{n},\ldots,\frac{k_N}{n}\right)\prod_{i=1}^{N}\Phi_i(nx_i-k_i),$$
∀ $x\in\prod_{i=1}^{N}[a_i,b_i]$.
That is,
$$G_n(f,x) = \frac{G_n^{*}(f,x)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k)}, \quad x\in\prod_{i=1}^{N}[a_i,b_i],\; n\in\mathbb{N}.$$
Hence,
$$G_n(f,x) - f(x) = \frac{G_n^{*}(f,x) - f(x)\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k)}.$$
Consequently, we derive
$$\bigl|G_n(f,x) - f(x)\bigr| \le (5.250312578)^N \left|G_n^{*}(f,x) - f(x)\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Phi(nx-k)\right|,$$
∀ $x\in\prod_{i=1}^{N}[a_i,b_i]$.
Now we estimate the right-hand side of the last inequality. For this we need, for $f\in C\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, the first multivariate modulus of continuity
$$\omega_1(f,h) := \sup_{\substack{x,y\in\prod_{i=1}^{N}[a_i,b_i]\\ \|x-y\|_\infty\le h}}\bigl|f(x)-f(y)\bigr|, \quad h>0.$$
It is defined similarly for $f\in C_B(\mathbb{R}^N)$ (the continuous and bounded functions on $\mathbb{R}^N$). We have $\lim_{h\to 0}\omega_1(f,h) = 0$.
When $f\in C_B(\mathbb{R}^N)$ we define
$$\overline{G}_n(f,x) := \overline{G}_n(f,x_1,\ldots,x_N) := \sum_{k=-\infty}^{\infty} f\!\left(\frac{k}{n}\right)\Phi(nx-k)$$
$$:= \sum_{k_1=-\infty}^{\infty}\sum_{k_2=-\infty}^{\infty}\cdots\sum_{k_N=-\infty}^{\infty} f\!\left(\frac{k_1}{n},\frac{k_2}{n},\ldots,\frac{k_N}{n}\right)\prod_{i=1}^{N}\Phi_i(nx_i-k_i),$$
$n\in\mathbb{N}$, ∀ $x\in\mathbb{R}^N$, $N\ge 1$; this is the multivariate quasi-interpolation neural network operator.
Notice here that, for large enough $n\in\mathbb{N}$,
$$e^{-n^{1-\beta}} < n^{-\beta j}, \quad j=1,\ldots,m\in\mathbb{N},\; 0<\beta<1.$$
Thus, for fixed $A,B>0$, the dominant rate of convergence to zero of the linear combination $A\,n^{-\beta j} + B\,e^{-n^{1-\beta}}$ is $n^{-\beta j}$. The closer $\beta$ is to $1$, the faster and better the rate of convergence to zero.
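As a quick numerical illustration of this remark (ours; the values $A=B=1$, $\beta=0.5$, $j=1$ are arbitrary), one can tabulate both terms of the linear combination:

```python
import numpy as np

# Compare the two terms of A*n**(-beta*j) + B*exp(-n**(1-beta)) for sample n (illustrative values).
A, B, beta, j = 1.0, 1.0, 0.5, 1
for n in (10, 100, 1000, 10000):
    poly = A * n ** (-beta * j)                 # dominant term n^(-beta*j)
    expo = B * np.exp(-n ** (1 - beta))         # exponentially small term
    print(n, poly, expo)                        # the exponential term becomes negligible quickly
```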
Let $f\in C^m\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, $m,N\in\mathbb{N}$. Here, $f_\alpha$ denotes a partial derivative of $f$, $\alpha := (\alpha_1,\ldots,\alpha_N)$, $\alpha_i\in\mathbb{Z}_+$, $i=1,\ldots,N$, and $|\alpha| := \sum_{i=1}^{N}\alpha_i = l$, where $l=0,1,\ldots,m$. We write $f_\alpha := \frac{\partial^{|\alpha|} f}{\partial x^\alpha}$, a partial derivative of order $l$.
We denote
$$\omega_{1,m}^{\max}(f_\alpha, h) := \max_{\alpha: |\alpha|=m}\omega_1(f_\alpha, h).$$
We also call
$$\|f_\alpha\|_{\infty,m}^{\max} := \max_{|\alpha|=m}\|f_\alpha\|_\infty,$$
where $\|\cdot\|_\infty$ is the supremum norm.
Next, we present a series of multivariate neural network approximations to a function given with rates.
We first give
Theorem 1.
Let $f\in C\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, $0<\beta<1$, $x\in\prod_{i=1}^{N}[a_i,b_i]$, $n,N\in\mathbb{N}$. Then,
(i) $\displaystyle\bigl|G_n(f,x)-f(x)\bigr| \le (5.250312578)^N\left[\omega_1\!\left(f,\frac{1}{n^\beta}\right) + 6.3984\,\|f\|_\infty\, e^{-n^{1-\beta}}\right] =: \lambda_1,$
(ii) $\bigl\|G_n(f)-f\bigr\|_\infty \le \lambda_1.$
Next, we present
Theorem 2
([7]). Let $f\in C_B(\mathbb{R}^N)$, $0<\beta<1$, $x\in\mathbb{R}^N$, $n,N\in\mathbb{N}$. Then,
(i) $\displaystyle\bigl|\overline{G}_n(f,x)-f(x)\bigr| \le \omega_1\!\left(f,\frac{1}{n^\beta}\right) + 6.3984\,\|f\|_\infty\, e^{-n^{1-\beta}} =: \lambda_2,$
(ii) $\bigl\|\overline{G}_n(f)-f\bigr\|_\infty \le \lambda_2.$
Next, we discuss high-order approximations by using the smoothness of $f$.
We give
Theorem 3
([7]). Let $f\in C^m\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, $0<\beta<1$, $n,m,N\in\mathbb{N}$, $x\in\prod_{i=1}^{N}[a_i,b_i]$. Then,
(i) $\displaystyle\left|G_n(f,x)-f(x)-\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{f_\alpha(x)}{\prod_{i=1}^{N}\alpha_i!}\,G_n\!\left(\prod_{i=1}^{N}(\cdot - x_i)^{\alpha_i}, x\right)\right|$
$\displaystyle\le (5.250312578)^N\left\{\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 6.3984\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\, e^{-n^{1-\beta}}\right\},$
(ii) $\displaystyle\bigl|G_n(f,x)-f(x)\bigr| \le (5.250312578)^N\cdot\Biggl\{\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{|f_\alpha(x)|}{\prod_{i=1}^{N}\alpha_i!}\left[\frac{1}{n^{\beta j}} + \left(\prod_{i=1}^{N}(b_i-a_i)^{\alpha_i}\right) 3.1992\, e^{-n^{1-\beta}}\right]$
$\displaystyle\;+\;\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 6.3984\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\, e^{-n^{1-\beta}}\Biggr\},$
(iii) $\displaystyle\bigl\|G_n(f)-f\bigr\|_\infty \le (5.250312578)^N\cdot\Biggl\{\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{\|f_\alpha\|_\infty}{\prod_{i=1}^{N}\alpha_i!}\left[\frac{1}{n^{\beta j}} + \left(\prod_{i=1}^{N}(b_i-a_i)^{\alpha_i}\right) 3.1992\, e^{-n^{1-\beta}}\right]$
$\displaystyle\;+\;\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 6.3984\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\, e^{-n^{1-\beta}}\Biggr\},$
(iv) Assume $f_\alpha(x_0)=0$ for all $\alpha$ with $|\alpha|=1,\ldots,m$, where $x_0\in\prod_{i=1}^{N}[a_i,b_i]$. Then
$\displaystyle\bigl|G_n(f,x_0)-f(x_0)\bigr| \le (5.250312578)^N\cdot\left\{\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 6.3984\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\, e^{-n^{1-\beta}}\right\};$
notice in the last estimate the extremely high rate of convergence, $n^{-\beta(m+1)}$.

2.2. Hyperbolic Tangent Sigmoid Activation Function

We now consider the hyperbolic tangent function $\tanh x$, $x\in\mathbb{R}$:
$$\tanh x := \frac{e^x - e^{-x}}{e^x + e^{-x}}.$$
It has the properties $\tanh 0 = 0$, $-1 < \tanh x < 1$, ∀ $x\in\mathbb{R}$, and $\tanh(-x) = -\tanh x$. Furthermore, $\tanh x \to 1$ as $x\to\infty$, $\tanh x \to -1$ as $x\to-\infty$, and it is strictly increasing on $\mathbb{R}$.
This function plays the role of an activation function in the hidden layer of neural networks.
We further consider
$$\Psi(x) := \tfrac{1}{4}\bigl(\tanh(x+1) - \tanh(x-1)\bigr) > 0, \quad x\in\mathbb{R}.$$
We easily see that $\Psi(-x) = \Psi(x)$, that is, $\Psi$ is even on $\mathbb{R}$. Clearly, $\Psi$ is differentiable, and thus continuous.
Proposition 1
([4]). $\Psi(x)$ is strictly decreasing for $x\ge 0$.
Consequently, $\Psi(x)$ is strictly increasing for $x\le 0$. In addition, it holds that $\lim_{x\to-\infty}\Psi(x) = 0 = \lim_{x\to\infty}\Psi(x)$.
In fact, $\Psi$ has a bell shape with a horizontal asymptote at the $x$-axis, and the maximum of $\Psi$ is $\Psi(0) = 0.3809297$.
Theorem 4
([4]). We have $\sum_{i=-\infty}^{\infty}\Psi(x-i) = 1$, ∀ $x\in\mathbb{R}$.
Thus,
$$\sum_{i=-\infty}^{\infty}\Psi(nx-i) = 1, \quad \forall\, n\in\mathbb{N},\; \forall\, x\in\mathbb{R}.$$
Furthermore, it holds that
$$\sum_{i=-\infty}^{\infty}\Psi(x+i) = 1, \quad \forall\, x\in\mathbb{R}.$$
Theorem 5
([4]). It holds that
$$\int_{-\infty}^{\infty}\Psi(x)\,dx = 1.$$
Therefore, $\Psi$ is a density function on $\mathbb{R}$.
Theorem 6
([4]). Let $0<\alpha<1$ and $n\in\mathbb{N}$. It holds that
$$\sum_{\substack{k=-\infty\\ |nx-k|\ge n^{1-\alpha}}}^{\infty}\Psi(nx-k) \le e^{4}\, e^{-2 n^{1-\alpha}}.$$
Denote by $\lfloor\cdot\rfloor$ the integral part of a number and by $\lceil\cdot\rceil$ its ceiling.
Theorem 7
([4]). Let $x\in[a,b]\subset\mathbb{R}$ and $n\in\mathbb{N}$ so that $\lceil na\rceil\le\lfloor nb\rfloor$. It holds that
$$\frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Psi(nx-k)} < \frac{1}{\Psi(1)} = 4.1488766.$$
Furthermore, by [4], we obtain
$$\lim_{n\to\infty}\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Psi(nx-k) \ne 1,$$
for at least some $x\in[a,b]$.
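As with $\Phi$, the following Python sketch (ours, purely illustrative) checks the partition of unity of Theorem 4 and the approximate sizes of $\Psi(0)$ and $1/\Psi(1)$.

```python
import numpy as np

# Illustrative checks for the hyperbolic tangent density Psi (our own sketch).
def Psi(x):
    return 0.25 * (np.tanh(x + 1.0) - np.tanh(x - 1.0))   # Psi(x) = (1/4)(tanh(x+1) - tanh(x-1))

x = -1.23                                                  # arbitrary test point
k = np.arange(-200, 201)
print(np.sum(Psi(x - k)))        # ~1.0, Theorem 4: partition of unity
print(Psi(0.0))                  # ~0.381, the maximum of Psi
print(1.0 / Psi(1.0))            # ~4.15, compare the constant in Theorem 7
```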
In this article we use
$$\Theta(x_1,\ldots,x_N) := \Theta(x) := \prod_{i=1}^{N}\Psi(x_i) =: T_2(x), \quad x = (x_1,\ldots,x_N)\in\mathbb{R}^N,\; N\in\mathbb{N}.$$
It has the properties:
(i) $\Theta(x) > 0$, ∀ $x\in\mathbb{R}^N$;
(ii) $\displaystyle\sum_{k=-\infty}^{\infty}\Theta(x-k) := \sum_{k_1=-\infty}^{\infty}\sum_{k_2=-\infty}^{\infty}\cdots\sum_{k_N=-\infty}^{\infty}\Theta(x_1-k_1,\ldots,x_N-k_N) = 1,$ where $k := (k_1,\ldots,k_N)$, ∀ $x\in\mathbb{R}^N$;
(iii) $\displaystyle\sum_{k=-\infty}^{\infty}\Theta(nx-k) := \sum_{k_1=-\infty}^{\infty}\cdots\sum_{k_N=-\infty}^{\infty}\Theta(nx_1-k_1,\ldots,nx_N-k_N) = 1,$ ∀ $x\in\mathbb{R}^N$, $n\in\mathbb{N}$;
(iv) $\displaystyle\int_{\mathbb{R}^N}\Theta(x)\,dx = 1,$ that is, $\Theta$ is a multivariate density function.
Therefore, we see that
$$\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k) = \sum_{k_1=\lceil na_1\rceil}^{\lfloor nb_1\rfloor}\cdots\sum_{k_N=\lceil na_N\rceil}^{\lfloor nb_N\rfloor}\prod_{i=1}^{N}\Psi(nx_i-k_i) = \prod_{i=1}^{N}\sum_{k_i=\lceil na_i\rceil}^{\lfloor nb_i\rfloor}\Psi(nx_i-k_i).$$
For $0<\beta<1$ and $n\in\mathbb{N}$, a fixed $x\in\mathbb{R}^N$ gives
$$\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k) = \sum_{\substack{k=\lceil na\rceil\\ \left\|\frac{k}{n}-x\right\|_\infty\le\frac{1}{n^\beta}}}^{\lfloor nb\rfloor}\Theta(nx-k) \;+\; \sum_{\substack{k=\lceil na\rceil\\ \left\|\frac{k}{n}-x\right\|_\infty>\frac{1}{n^\beta}}}^{\lfloor nb\rfloor}\Theta(nx-k).$$
In the last two sums the counting is over disjoint sets of vectors $k$, because the condition $\left\|\frac{k}{n}-x\right\|_\infty>\frac{1}{n^\beta}$ implies that there exists at least one $r\in\{1,\ldots,N\}$ with $\left|\frac{k_r}{n}-x_r\right|>\frac{1}{n^\beta}$.
We treat
$$\sum_{\substack{k=\lceil na\rceil\\ \left\|\frac{k}{n}-x\right\|_\infty>\frac{1}{n^\beta}}}^{\lfloor nb\rfloor}\Theta(nx-k) \le \left(\prod_{\substack{i=1\\ i\ne r}}^{N}\sum_{k_i=-\infty}^{\infty}\Psi(nx_i-k_i)\right)\cdot\sum_{\substack{k_r=\lceil na_r\rceil\\ \left|\frac{k_r}{n}-x_r\right|>\frac{1}{n^\beta}}}^{\lfloor nb_r\rfloor}\Psi(nx_r-k_r)$$
$$= \sum_{\substack{k_r=\lceil na_r\rceil\\ \left|\frac{k_r}{n}-x_r\right|>\frac{1}{n^\beta}}}^{\lfloor nb_r\rfloor}\Psi(nx_r-k_r) \le \sum_{\substack{k_r=-\infty\\ \left|\frac{k_r}{n}-x_r\right|>\frac{1}{n^\beta}}}^{\infty}\Psi(nx_r-k_r) \overset{\text{(by Theorem 6)}}{\le} e^{4}\, e^{-2 n^{1-\beta}}.$$
We have proven that
(v) $\displaystyle\sum_{\substack{k=\lceil na\rceil\\ \left\|\frac{k}{n}-x\right\|_\infty>\frac{1}{n^\beta}}}^{\lfloor nb\rfloor}\Theta(nx-k) \le e^{4}\, e^{-2 n^{1-\beta}},$ for $0<\beta<1$, $n\in\mathbb{N}$, $x\in\prod_{i=1}^{N}[a_i,b_i]$.
By Theorem 7, we clearly obtain
$$0 < \frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k)} = \frac{1}{\prod_{i=1}^{N}\sum_{k_i=\lceil na_i\rceil}^{\lfloor nb_i\rfloor}\Psi(nx_i-k_i)} < \frac{1}{\bigl(\Psi(1)\bigr)^{N}} = (4.1488766)^N.$$
That is,
(vi) it holds that
$$0 < \frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k)} < (4.1488766)^N,$$
∀ $x\in\prod_{i=1}^{N}[a_i,b_i]$, $n\in\mathbb{N}$.
It is also clear that
(vii) $\displaystyle\sum_{\substack{k=-\infty\\ \left\|\frac{k}{n}-x\right\|_\infty>\frac{1}{n^\beta}}}^{\infty}\Theta(nx-k) \le e^{4}\, e^{-2 n^{1-\beta}},$ for $0<\beta<1$, $n\in\mathbb{N}$, $x\in\mathbb{R}^N$.
Furthermore, we obtain
$$\lim_{n\to\infty}\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k) \ne 1,$$
for at least some $x\in\prod_{i=1}^{N}[a_i,b_i]$.
Let $f\in C\left(\prod_{i=1}^{N}[a_i,b_i]\right)$ and $n\in\mathbb{N}$ such that $\lceil na_i\rceil\le\lfloor nb_i\rfloor$, $i=1,\ldots,N$.
We introduce and define the multivariate positive linear neural network operator ($x := (x_1,\ldots,x_N)\in\prod_{i=1}^{N}[a_i,b_i]$)
$$F_n(f,x_1,\ldots,x_N) := F_n(f,x) := \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} f\!\left(\frac{k}{n}\right)\Theta(nx-k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k)}$$
$$:= \frac{\sum_{k_1=\lceil na_1\rceil}^{\lfloor nb_1\rfloor}\sum_{k_2=\lceil na_2\rceil}^{\lfloor nb_2\rfloor}\cdots\sum_{k_N=\lceil na_N\rceil}^{\lfloor nb_N\rfloor} f\!\left(\frac{k_1}{n},\ldots,\frac{k_N}{n}\right)\prod_{i=1}^{N}\Psi(nx_i-k_i)}{\prod_{i=1}^{N}\sum_{k_i=\lceil na_i\rceil}^{\lfloor nb_i\rfloor}\Psi(nx_i-k_i)}.$$
For large enough $n$ we always obtain $\lceil na_i\rceil\le\lfloor nb_i\rfloor$, $i=1,\ldots,N$. Furthermore, $a_i\le\frac{k_i}{n}\le b_i$ iff $\lceil na_i\rceil\le k_i\le\lfloor nb_i\rfloor$, $i=1,\ldots,N$.
Here, we study the point-wise and uniform convergence of $F_n(f)$ to $f$ with rates.
For convenience, we call
$$F_n^{*}(f,x) := \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} f\!\left(\frac{k}{n}\right)\Theta(nx-k)$$
$$:= \sum_{k_1=\lceil na_1\rceil}^{\lfloor nb_1\rfloor}\sum_{k_2=\lceil na_2\rceil}^{\lfloor nb_2\rfloor}\cdots\sum_{k_N=\lceil na_N\rceil}^{\lfloor nb_N\rfloor} f\!\left(\frac{k_1}{n},\ldots,\frac{k_N}{n}\right)\prod_{i=1}^{N}\Psi(nx_i-k_i),$$
∀ $x\in\prod_{i=1}^{N}[a_i,b_i]$.
That is,
$$F_n(f,x) = \frac{F_n^{*}(f,x)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k)}, \quad x\in\prod_{i=1}^{N}[a_i,b_i],\; n\in\mathbb{N}.$$
Hence,
$$F_n(f,x) - f(x) = \frac{F_n^{*}(f,x) - f(x)\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k)}.$$
Consequently, we derive
$$\bigl|F_n(f,x)-f(x)\bigr| \le (4.1488766)^N\left|F_n^{*}(f,x) - f(x)\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\Theta(nx-k)\right|,$$
∀ $x\in\prod_{i=1}^{N}[a_i,b_i]$.
When $f\in C_B(\mathbb{R}^N)$ we define
$$\overline{F}_n(f,x) := \overline{F}_n(f,x_1,\ldots,x_N) := \sum_{k=-\infty}^{\infty} f\!\left(\frac{k}{n}\right)\Theta(nx-k)$$
$$:= \sum_{k_1=-\infty}^{\infty}\sum_{k_2=-\infty}^{\infty}\cdots\sum_{k_N=-\infty}^{\infty} f\!\left(\frac{k_1}{n},\frac{k_2}{n},\ldots,\frac{k_N}{n}\right)\prod_{i=1}^{N}\Psi(nx_i-k_i),$$
$n\in\mathbb{N}$, ∀ $x\in\mathbb{R}^N$, $N\ge 1$; this is the multivariate quasi-interpolation neural network operator.
Here, we present a series of multivariate neural network approximations to a function given with rates. We first give
Theorem 8.
Let $f\in C\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, $0<\beta<1$, $x\in\prod_{i=1}^{N}[a_i,b_i]$, $n,N\in\mathbb{N}$. Then,
(i) $\displaystyle\bigl|F_n(f,x)-f(x)\bigr| \le (4.1488766)^N\left[\omega_1\!\left(f,\frac{1}{n^\beta}\right) + 2e^{4}\,\|f\|_\infty\, e^{-2n^{1-\beta}}\right] =: \lambda_1,$
(ii) $\bigl\|F_n(f)-f\bigr\|_\infty \le \lambda_1.$
Next, we present
Theorem 9.
Let $f\in C_B(\mathbb{R}^N)$, $0<\beta<1$, $x\in\mathbb{R}^N$, $n,N\in\mathbb{N}$. Then,
(i) $\displaystyle\bigl|\overline{F}_n(f,x)-f(x)\bigr| \le \omega_1\!\left(f,\frac{1}{n^\beta}\right) + 2e^{4}\,\|f\|_\infty\, e^{-2n^{1-\beta}} =: \lambda_2,$
(ii) $\bigl\|\overline{F}_n(f)-f\bigr\|_\infty \le \lambda_2.$
Next, we discuss high-order approximations by using the smoothness of $f$. We give
Theorem 10.
Let $f\in C^m\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, $0<\beta<1$, $n,m,N\in\mathbb{N}$, $x\in\prod_{i=1}^{N}[a_i,b_i]$. Then,
(i) $\displaystyle\left|F_n(f,x)-f(x)-\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{f_\alpha(x)}{\prod_{i=1}^{N}\alpha_i!}\,F_n\!\left(\prod_{i=1}^{N}(\cdot-x_i)^{\alpha_i}, x\right)\right|$
$\displaystyle\le (4.1488766)^N\left\{\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 2e^{4}\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\, e^{-2n^{1-\beta}}\right\},$
(ii) $\displaystyle\bigl|F_n(f,x)-f(x)\bigr| \le (4.1488766)^N\cdot\Biggl\{\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{|f_\alpha(x)|}{\prod_{i=1}^{N}\alpha_i!}\left[\frac{1}{n^{\beta j}} + \left(\prod_{i=1}^{N}(b_i-a_i)^{\alpha_i}\right) e^{4}\, e^{-2n^{1-\beta}}\right]$
$\displaystyle\;+\;\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 2e^{4}\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\, e^{-2n^{1-\beta}}\Biggr\},$
(iii) $\displaystyle\bigl\|F_n(f)-f\bigr\|_\infty \le (4.1488766)^N\cdot\Biggl\{\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{\|f_\alpha\|_\infty}{\prod_{i=1}^{N}\alpha_i!}\left[\frac{1}{n^{\beta j}} + \left(\prod_{i=1}^{N}(b_i-a_i)^{\alpha_i}\right) e^{4}\, e^{-2n^{1-\beta}}\right]$
$\displaystyle\;+\;\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 2e^{4}\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\, e^{-2n^{1-\beta}}\Biggr\},$
(iv) Assume $f_\alpha(x_0)=0$ for all $\alpha$ with $|\alpha|=1,\ldots,m$, where $x_0\in\prod_{i=1}^{N}[a_i,b_i]$. Then
$\displaystyle\bigl|F_n(f,x_0)-f(x_0)\bigr| \le (4.1488766)^N\cdot\left\{\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 2e^{4}\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\, e^{-2n^{1-\beta}}\right\};$
notice in the latter the extremely high rate of convergence, $n^{-\beta(m+1)}$.

2.3. Combining Section 2.1 and Section 2.2

For the next theorems we set
$$T_1(x) := \Phi(x), \qquad T_2(x) := \Theta(x), \quad x\in\mathbb{R}^N,\; N\in\mathbb{N}.$$
We also set
$$\gamma_{1,N} := (5.250312578)^N, \qquad \gamma_{2,N} := (4.1488766)^N.$$
Furthermore, we set
$$\alpha_{1,n}(\beta) := 3.1992\, e^{-n^{1-\beta}}, \qquad \alpha_{2,n}(\beta) := e^{4}\, e^{-2n^{1-\beta}},$$
where $0<\beta<1$, $n\in\mathbb{N}$.
We define
$${}_1L_n(f,x) := G_n(f,x), \qquad {}_2L_n(f,x) := F_n(f,x), \quad x\in\prod_{i=1}^{N}[a_i,b_i],\; n\in\mathbb{N},$$
and
$${}_1\overline{L}_n(f,x) := \overline{G}_n(f,x), \qquad {}_2\overline{L}_n(f,x) := \overline{F}_n(f,x), \quad x\in\mathbb{R}^N,\; n\in\mathbb{N}.$$
Notice that
$$6.3984\, e^{-n^{1-\beta}} = 2\,\alpha_{1,n}(\beta).$$
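For orientation, the following Python sketch (ours; the choices $N=2$, $\beta=0.5$ are arbitrary) tabulates the constants $\gamma_{k,N}$ and $\alpha_{k,n}(\beta)$ for both activation choices.

```python
import numpy as np

# Illustrative comparison of gamma_{k,N} and alpha_{k,n}(beta) for the logistic (k=1)
# and hyperbolic tangent (k=2) cases (our own sketch).
def gamma(k, N):
    return (5.250312578 if k == 1 else 4.1488766) ** N

def alpha(k, n, beta):
    if k == 1:
        return 3.1992 * np.exp(-n ** (1 - beta))
    return np.exp(4) * np.exp(-2 * n ** (1 - beta))

N, beta = 2, 0.5
for n in (10, 100, 1000):
    print(n, gamma(1, N), gamma(2, N), alpha(1, n, beta), alpha(2, n, beta))
# gamma_{2,N} is the smaller constant, and alpha_{2,n}(beta) decays twice as fast in the exponent
```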
Theorem 11.
Let $f\in C\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, $0<\beta<1$, $x\in\prod_{i=1}^{N}[a_i,b_i]$, $n,N\in\mathbb{N}$, and $k=1,2$. Then,
(i) $\displaystyle\bigl|{}_kL_n(f,x)-f(x)\bigr| \le \gamma_{k,N}\left[\omega_1\!\left(f,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|f\|_\infty\right] =: \lambda_1,$
(ii) $\bigl\|{}_kL_n(f)-f\bigr\|_\infty \le \lambda_1.$
Proof.
From Theorems 1 and 8. □
Next, we present
Theorem 12.
Let $f\in C_B(\mathbb{R}^N)$, $0<\beta<1$, $x\in\mathbb{R}^N$, $n,N\in\mathbb{N}$, and $k=1,2$. Then,
(i) $\displaystyle\bigl|{}_k\overline{L}_n(f,x)-f(x)\bigr| \le \omega_1\!\left(f,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|f\|_\infty =: \lambda_2,$
(ii) $\bigl\|{}_k\overline{L}_n(f)-f\bigr\|_\infty \le \lambda_2.$
Proof.
From Theorems 2 and 9. □
Next, we discuss high-order approximations by using the smoothness of $f$.
We give
Theorem 13.
Let $f\in C^m\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, $0<\beta<1$, $n,m,N\in\mathbb{N}$, $x\in\prod_{i=1}^{N}[a_i,b_i]$, and $k=1,2$. Then,
(i) $\displaystyle\left|{}_kL_n(f,x)-f(x)-\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{f_\alpha(x)}{\prod_{i=1}^{N}\alpha_i!}\,{}_kL_n\!\left(\prod_{i=1}^{N}(\cdot-x_i)^{\alpha_i}, x\right)\right|$
$\displaystyle\le \gamma_{k,N}\left\{\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\right\},$
(ii) $\displaystyle\bigl|{}_kL_n(f,x)-f(x)\bigr| \le \gamma_{k,N}\cdot\Biggl\{\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{|f_\alpha(x)|}{\prod_{i=1}^{N}\alpha_i!}\left[\frac{1}{n^{\beta j}} + \left(\prod_{i=1}^{N}(b_i-a_i)^{\alpha_i}\right)\alpha_{k,n}(\beta)\right]$
$\displaystyle\;+\;\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\Biggr\},$
(iii) $\displaystyle\bigl\|{}_kL_n(f)-f\bigr\|_\infty \le \gamma_{k,N}\cdot\Biggl\{\sum_{j=1}^{m}\sum_{|\alpha|=j}\frac{\|f_\alpha\|_\infty}{\prod_{i=1}^{N}\alpha_i!}\left[\frac{1}{n^{\beta j}} + \left(\prod_{i=1}^{N}(b_i-a_i)^{\alpha_i}\right)\alpha_{k,n}(\beta)\right]$
$\displaystyle\;+\;\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\Biggr\},$
(iv) Assume $f_\alpha(x_0)=0$ for all $\alpha$ with $|\alpha|=1,\ldots,m$, where $x_0\in\prod_{i=1}^{N}[a_i,b_i]$. Then,
$\displaystyle\bigl|{}_kL_n(f,x_0)-f(x_0)\bigr| \le \gamma_{k,N}\cdot\left\{\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left(f_\alpha,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|b-a\|_\infty^{m}\,\|f_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\right\};$
notice in the latter the extremely high rate of convergence, $n^{-\beta(m+1)}$.
Proof.
From Theorems 3 and 10. □
Next, we apply Theorem 13 for $m=1$.
Corollary 1.
Let $f\in C^1\left(\prod_{i=1}^{N}[a_i,b_i]\right)$, $0<\beta<1$, $n,N\in\mathbb{N}$, $x\in\prod_{i=1}^{N}[a_i,b_i]$, and $k=1,2$. Then,
(i) $\displaystyle\left|{}_kL_n(f,x)-f(x)-\sum_{i=1}^{N}\frac{\partial f(x)}{\partial x_i}\,{}_kL_n\bigl((\cdot - x_i), x\bigr)\right|$
$\displaystyle\le \gamma_{k,N}\left\{\frac{N}{n^{\beta}}\max_{i\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial f}{\partial x_i},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|b-a\|_\infty\max_{i\in\{1,\ldots,N\}}\left\|\frac{\partial f}{\partial x_i}\right\|_\infty\right\},$
(ii) $\displaystyle\bigl|{}_kL_n(f,x)-f(x)\bigr| \le \gamma_{k,N}\cdot\Biggl\{\sum_{i=1}^{N}\left|\frac{\partial f(x)}{\partial x_i}\right|\left[\frac{1}{n^{\beta}} + (b_i-a_i)\,\alpha_{k,n}(\beta)\right]$
$\displaystyle\;+\;\frac{N}{n^{\beta}}\max_{i\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial f}{\partial x_i},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|b-a\|_\infty\max_{i\in\{1,\ldots,N\}}\left\|\frac{\partial f}{\partial x_i}\right\|_\infty\Biggr\},$
(iii) $\displaystyle\bigl\|{}_kL_n(f)-f\bigr\|_\infty \le \gamma_{k,N}\cdot\Biggl\{\sum_{i=1}^{N}\left\|\frac{\partial f}{\partial x_i}\right\|_\infty\left[\frac{1}{n^{\beta}} + (b_i-a_i)\,\alpha_{k,n}(\beta)\right]$
$\displaystyle\;+\;\frac{N}{n^{\beta}}\max_{i\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial f}{\partial x_i},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|b-a\|_\infty\max_{i\in\{1,\ldots,N\}}\left\|\frac{\partial f}{\partial x_i}\right\|_\infty\Biggr\},$
(iv) Assume $\frac{\partial f}{\partial x_i}(x_0)=0$, $i=1,\ldots,N$, where $x_0\in\prod_{i=1}^{N}[a_i,b_i]$. Then,
$\displaystyle\bigl|{}_kL_n(f,x_0)-f(x_0)\bigr| \le \gamma_{k,N}\cdot\left\{\frac{N}{n^{\beta}}\max_{i\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial f}{\partial x_i},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|b-a\|_\infty\max_{i\in\{1,\ldots,N\}}\left\|\frac{\partial f}{\partial x_i}\right\|_\infty\right\};$
notice in the latter the extremely high rate of convergence, $n^{-2\beta}$.

3. Multiple Time-Separating Random Functions

Let $(\Omega,\mathcal{F},P)$ be a probability space, $\omega\in\Omega$; let $Y_1, Y_2,\ldots,Y_m$, $m\in\mathbb{N}$, be real-valued random variables on $\Omega$ with finite expectations, and let $h_1(t), h_2(t),\ldots,h_m(t): \prod_{j=1}^{N}I_j\to\mathbb{R}$, where the $I_j$ are infinite subsets of $\mathbb{R}$ for every $j=1,\ldots,N$. Typically, $I_j$ is an interval of infinite length, usually $I_j=\mathbb{R}$ or $I_j=\mathbb{R}_+$, for $j=1,\ldots,N$.
Clearly,
$$Y(t,\omega) := \sum_{i=1}^{m} h_i(t)\, Y_i(\omega), \quad t\in\prod_{j=1}^{N}I_j,$$
is a general time-separating random function.
We can assume that $h_i\in C^r\left(\prod_{j=1}^{N}I_j\right)$, $i=1,2,\ldots,m$; $r\in\mathbb{N}$. Consequently, the expectation
$$(EY)(t) = \sum_{i=1}^{m} h_i(t)\, E(Y_i) \in C\left(\prod_{j=1}^{N}I_j\right) \text{ or } C^r\left(\prod_{j=1}^{N}I_j\right).$$
A classical example of a multiple time-separating process is
$$\sin\left(\sum_{j=1}^{N}t_j\right) Y_1(\omega) + \cos\left(\sum_{j=1}^{N}t_j\right) Y_2(\omega), \quad t_j\in I_j,\; j=1,\ldots,N.$$
Notice that $\left|\sin\left(\sum_{j=1}^{N}t_j\right)\right|\le 1$ and $\left|\cos\left(\sum_{j=1}^{N}t_j\right)\right|\le 1$.
Another typical example is
$$\sinh\left(\sum_{j=1}^{N}t_j\right) Y_1(\omega) + \cosh\left(\sum_{j=1}^{N}t_j\right) Y_2(\omega), \quad t_j\in I_j,\; j=1,\ldots,N.$$
In this article we apply the main results of Section 2.3 to $f(t) = (EY)(t)$. We conclude with several applications.
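As an illustration of this reduction (ours; the expectations $\mu_1,\mu_2$ and the interval are arbitrary, and we take $N=1$ for brevity), the univariate logistic operator $G_n$ can be applied directly to $(EY)(t)=\mu_1\sin t+\mu_2\cos t$:

```python
import numpy as np

# Illustrative sketch (ours): approximate (EY)(t) = mu1*sin(t) + mu2*cos(t)
# with the univariate logistic operator G_n (case N = 1).
def s(u):  return 1.0 / (1.0 + np.exp(-u))
def Phi(u): return 0.5 * (s(u + 1.0) - s(u - 1.0))

def G_n_1d(f, x, a, b, n):
    k = np.arange(int(np.ceil(n * a)), int(np.floor(n * b)) + 1)
    w = Phi(n * x - k)
    return np.sum(f(k / n) * w) / np.sum(w)

mu1, mu2 = 1.5, -0.7                       # assumed expectations E(Y_1), E(Y_2)
EY = lambda t: mu1 * np.sin(t) + mu2 * np.cos(t)
t = 2.1
for n in (10, 100, 1000):
    print(n, abs(G_n_1d(EY, t, 0.0, np.pi, n) - EY(t)))   # the error decreases with n
```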

4. Main Results

We present the following stochastic approximation result.
Theorem 14.
Let $(EY)(t)$ be as in Section 3, $t\in\prod_{j=1}^{N}I_j$. Here, $I_j=[t_{1,j},t_{2,j}]$, where $t_{1,j},t_{2,j}\in\mathbb{R}$ with $t_{1,j}<t_{2,j}$, for every $j=1,\ldots,N$. Let also $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then,
(i) $\displaystyle\bigl|{}_kL_n\bigl(EY,t\bigr) - (EY)(t)\bigr| \le \gamma_{k,N}\left[\omega_1\!\left(EY,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|EY\|_\infty\right] =: \lambda_1,$
(ii) $\bigl\|{}_kL_n(EY) - EY\bigr\|_\infty \le \lambda_1.$
Proof.
From Theorem 11. □
The counterpart of the previous theorem follows.
Theorem 15.
Let $(EY)(t)$ be as in Section 3, with $h_i\in C_B(\mathbb{R}^N)$ for every $i=1,\ldots,m$. Let also $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then,
(i) $\displaystyle\bigl|{}_k\overline{L}_n\bigl(EY,t\bigr) - (EY)(t)\bigr| \le \omega_1\!\left(EY,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|EY\|_\infty =: \lambda_2,$
(ii) $\bigl\|{}_k\overline{L}_n(EY) - EY\bigr\|_\infty \le \lambda_2.$
Proof.
From Theorem 12. □
We give
Theorem 16.
Let $(EY)(t)$ be as in Section 3, with $(EY)(t)\in C^m\left(\prod_{j=1}^{N}I_j\right)$, $t\in\prod_{j=1}^{N}I_j$. Here, $I_j=[t_{1,j},t_{2,j}]$, where $t_{1,j},t_{2,j}\in\mathbb{R}$ with $t_{1,j}<t_{2,j}$, for $j=1,\ldots,N$; $t_1=(t_{1,1},\ldots,t_{1,N})$, $t_2=(t_{2,1},\ldots,t_{2,N})$. Let also $0<\beta<1$, $n,m,N\in\mathbb{N}$, and $k=1,2$. Then,
(i) $\displaystyle\left|{}_kL_n\bigl(EY,t\bigr) - (EY)(t) - \sum_{j^{*}=1}^{m}\sum_{|\alpha|=j^{*}}\frac{(EY)_\alpha(t)}{\prod_{i=1}^{N}\alpha_i!}\,{}_kL_n\!\left(\prod_{i=1}^{N}(\cdot - t_i)^{\alpha_i}, t\right)\right|$
$\displaystyle\le \gamma_{k,N}\left\{\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left((EY)_\alpha,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|t_2-t_1\|_\infty^{m}\,\|(EY)_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\right\},$
(ii) $\displaystyle\bigl|{}_kL_n\bigl(EY,t\bigr) - (EY)(t)\bigr| \le \gamma_{k,N}\cdot\Biggl\{\sum_{j^{*}=1}^{m}\sum_{|\alpha|=j^{*}}\frac{\bigl|(EY)_\alpha(t)\bigr|}{\prod_{i=1}^{N}\alpha_i!}\left[\frac{1}{n^{\beta j^{*}}} + \left(\prod_{i=1}^{N}(t_{2,i}-t_{1,i})^{\alpha_i}\right)\alpha_{k,n}(\beta)\right]$
$\displaystyle\;+\;\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left((EY)_\alpha,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|t_2-t_1\|_\infty^{m}\,\|(EY)_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\Biggr\},$
(iii) $\displaystyle\bigl\|{}_kL_n(EY) - EY\bigr\|_\infty \le \gamma_{k,N}\cdot\Biggl\{\sum_{j^{*}=1}^{m}\sum_{|\alpha|=j^{*}}\frac{\|(EY)_\alpha\|_\infty}{\prod_{i=1}^{N}\alpha_i!}\left[\frac{1}{n^{\beta j^{*}}} + \left(\prod_{i=1}^{N}(t_{2,i}-t_{1,i})^{\alpha_i}\right)\alpha_{k,n}(\beta)\right]$
$\displaystyle\;+\;\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left((EY)_\alpha,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|t_2-t_1\|_\infty^{m}\,\|(EY)_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\Biggr\},$
(iv) Assume $(EY)_\alpha(t_0)=0$ for all $\alpha$ with $|\alpha|=1,\ldots,m$, where $t_0\in\prod_{i=1}^{N}[t_{1,i},t_{2,i}]$. Then,
$\displaystyle\bigl|{}_kL_n\bigl(EY,t_0\bigr) - (EY)(t_0)\bigr| \le \gamma_{k,N}\cdot\left\{\frac{N^m}{m!\,n^{m\beta}}\,\omega_{1,m}^{\max}\!\left((EY)_\alpha,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|t_2-t_1\|_\infty^{m}\,\|(EY)_\alpha\|_{\infty,m}^{\max}\,\frac{N^m}{m!}\right\};$
notice in the latter the extremely high rate of convergence, $n^{-\beta(m+1)}$.
Proof.
From Theorem 13. □
Next, we apply Theorem 16 for $m=1$.
Corollary 2.
Let $(EY)(t)$ be as in Section 3, with $(EY)(t)\in C^1\left(\prod_{j=1}^{N}I_j\right)$, $t\in\prod_{j=1}^{N}I_j$. Here, $I_j=[t_{1,j},t_{2,j}]$, where $t_{1,j},t_{2,j}\in\mathbb{R}$ with $t_{1,j}<t_{2,j}$ for $j=1,\ldots,N$; $t_1=(t_{1,1},\ldots,t_{1,N})$, $t_2=(t_{2,1},\ldots,t_{2,N})$. Let also $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then,
(i) $\displaystyle\left|{}_kL_n\bigl(EY,t\bigr) - (EY)(t) - \sum_{i=1}^{N}\frac{\partial (EY)(t)}{\partial t_i}\,{}_kL_n\bigl((\cdot - t_i), t\bigr)\right|$
$\displaystyle\le \gamma_{k,N}\left\{\frac{N}{n^{\beta}}\max_{i\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial (EY)}{\partial t_i},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\max_{i\in\{1,\ldots,N\}}\left\|\frac{\partial (EY)}{\partial t_i}\right\|_\infty\right\},$
(ii) $\displaystyle\bigl|{}_kL_n\bigl(EY,t\bigr) - (EY)(t)\bigr| \le \gamma_{k,N}\cdot\Biggl\{\sum_{i=1}^{N}\left|\frac{\partial (EY)(t)}{\partial t_i}\right|\left[\frac{1}{n^{\beta}} + (t_{2,i}-t_{1,i})\,\alpha_{k,n}(\beta)\right]$
$\displaystyle\;+\;\frac{N}{n^{\beta}}\max_{i\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial (EY)}{\partial t_i},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\max_{i\in\{1,\ldots,N\}}\left\|\frac{\partial (EY)}{\partial t_i}\right\|_\infty\Biggr\},$
(iii) $\displaystyle\bigl\|{}_kL_n(EY) - EY\bigr\|_\infty \le \gamma_{k,N}\cdot\Biggl\{\sum_{i=1}^{N}\left\|\frac{\partial (EY)}{\partial t_i}\right\|_\infty\left[\frac{1}{n^{\beta}} + (t_{2,i}-t_{1,i})\,\alpha_{k,n}(\beta)\right]$
$\displaystyle\;+\;\frac{N}{n^{\beta}}\max_{i\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial (EY)}{\partial t_i},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\max_{i\in\{1,\ldots,N\}}\left\|\frac{\partial (EY)}{\partial t_i}\right\|_\infty\Biggr\},$
(iv) Assume $\frac{\partial (EY)}{\partial t_i}(t_0)=0$, $i=1,\ldots,N$, where $t_0\in\prod_{i=1}^{N}[t_{1,i},t_{2,i}]$. Then,
$\displaystyle\bigl|{}_kL_n\bigl(EY,t_0\bigr) - (EY)(t_0)\bigr| \le \gamma_{k,N}\cdot\left\{\frac{N}{n^{\beta}}\max_{i\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial (EY)}{\partial t_i},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\max_{i\in\{1,\ldots,N\}}\left\|\frac{\partial (EY)}{\partial t_i}\right\|_\infty\right\};$
notice in the latter the extremely high rate of convergence, $n^{-2\beta}$.

5. Applications

For the applications, we consider $(\Omega,\mathcal{F},P)$ a probability space and $Y_0, Y_1, Y_2$ real-valued random variables on $\Omega$ with finite expectations. We consider the stochastic processes $Z_i(t,\omega)$, $i=1,2,\ldots,7$, where $t=(t_1,\ldots,t_N)\in\mathbb{R}^N$ and $\omega\in\Omega$, as follows:
$$Z_1(t,\omega) = \left(\sum_{j=1}^{N}(t_j-t_{j,0})^{\mu+1} + 1\right) Y_0(\omega),$$
where $t_0=(t_{1,0},\ldots,t_{N,0})\in\mathbb{R}^N$ and $\mu\in\mathbb{N}$ are fixed;
$$Z_2(t,\omega) = \sin\left(\xi\sum_{j=1}^{N}t_j\right) Y_1(\omega) + \cos\left(\xi\sum_{j=1}^{N}t_j\right) Y_2(\omega),$$
where $\xi>0$ is fixed;
$$Z_3(t,\omega) = \sinh\left(\mu\sum_{j=1}^{N}t_j\right) Y_1(\omega) + \cosh\left(\mu\sum_{j=1}^{N}t_j\right) Y_2(\omega),$$
where $\mu>0$ is fixed;
$$Z_4(t,\omega) = \operatorname{sech}\left(\mu\sum_{j=1}^{N}t_j\right) Y_1(\omega) + \tanh\left(\mu\sum_{j=1}^{N}t_j\right) Y_2(\omega),$$
where $\mu>0$ is fixed. Here, $\operatorname{sech}\left(\sum_{j=1}^{N}x_j\right) := \dfrac{1}{\cosh\left(\sum_{j=1}^{N}x_j\right)} = \dfrac{2}{\exp\left(\sum_{j=1}^{N}x_j\right) + \exp\left(-\sum_{j=1}^{N}x_j\right)}$, $x=(x_1,\ldots,x_N)\in\mathbb{R}^N$;
$$Z_5(t,\omega) = \exp\left(-\ell_1\sum_{j=1}^{N}t_j\right) Y_1(\omega) + \exp\left(-\ell_2\sum_{j=1}^{N}t_j\right) Y_2(\omega),$$
where $\ell_1,\ell_2>0$ are fixed;
$$Z_6(t,\omega) = \frac{1}{1+\exp\left(-\ell_1\sum_{j=1}^{N}t_j\right)}\, Y_1(\omega) + \frac{1}{1+\exp\left(-\ell_2\sum_{j=1}^{N}t_j\right)}\, Y_2(\omega),$$
where $\ell_1,\ell_2>0$ are fixed;
$$Z_7(t,\omega) = e^{-e^{-\mu_1\sum_{j=1}^{N}t_j}}\, Y_1(\omega) + e^{-e^{-\mu_2\sum_{j=1}^{N}t_j}}\, Y_2(\omega),$$
where $\mu_1,\mu_2>0$ are fixed.
The expectations of $Z_i$, $i=1,2,\ldots,7$, are
$$E(Z_1)(t) = \left(\sum_{j=1}^{N}(t_j-t_{j,0})^{\mu+1} + 1\right) E(Y_0),$$
$$E(Z_2)(t) = \sin\left(\xi\sum_{j=1}^{N}t_j\right) E(Y_1) + \cos\left(\xi\sum_{j=1}^{N}t_j\right) E(Y_2),$$
$$E(Z_3)(t) = \sinh\left(\mu\sum_{j=1}^{N}t_j\right) E(Y_1) + \cosh\left(\mu\sum_{j=1}^{N}t_j\right) E(Y_2),$$
$$E(Z_4)(t) = \operatorname{sech}\left(\mu\sum_{j=1}^{N}t_j\right) E(Y_1) + \tanh\left(\mu\sum_{j=1}^{N}t_j\right) E(Y_2),$$
$$E(Z_5)(t) = \exp\left(-\ell_1\sum_{j=1}^{N}t_j\right) E(Y_1) + \exp\left(-\ell_2\sum_{j=1}^{N}t_j\right) E(Y_2),$$
$$E(Z_6)(t) = \frac{1}{1+\exp\left(-\ell_1\sum_{j=1}^{N}t_j\right)}\, E(Y_1) + \frac{1}{1+\exp\left(-\ell_2\sum_{j=1}^{N}t_j\right)}\, E(Y_2),$$
$$E(Z_7)(t) = e^{-e^{-\mu_1\sum_{j=1}^{N}t_j}}\, E(Y_1) + e^{-e^{-\mu_2\sum_{j=1}^{N}t_j}}\, E(Y_2).$$
In what follows, $E(Z_i)(t)$, $i=1,2,\ldots,7$, are as defined in the relations above.
We present the results in the following.
Proposition 2.
Let $t\in\prod_{j=1}^{N}I_j$. Here, $I_j=[t_{1,j},t_{2,j}]$, where $t_{1,j},t_{2,j}\in\mathbb{R}$ with $t_{1,j}<t_{2,j}$, for every $j=1,\ldots,N$. Let also $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then, for $i=1,2,\ldots,7$,
(i) $\displaystyle\bigl|{}_kL_n\bigl(EZ_i,t\bigr) - (EZ_i)(t)\bigr| \le \gamma_{k,N}\left[\omega_1\!\left(EZ_i,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|EZ_i\|_\infty\right] =: \lambda_1,$
(ii) $\bigl\|{}_kL_n(EZ_i) - EZ_i\bigr\|_\infty \le \lambda_1.$
Proof.
From Theorem 14. □
Next, we present
Proposition 3.
Let $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then, for $i\in\{2,4,6,7\}$,
(i) $\displaystyle\bigl|{}_k\overline{L}_n\bigl(EZ_i,t\bigr) - (EZ_i)(t)\bigr| \le \omega_1\!\left(EZ_i,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|EZ_i\|_\infty =: \lambda_2,$
(ii) $\bigl\|{}_k\overline{L}_n(EZ_i) - EZ_i\bigr\|_\infty \le \lambda_2.$
Proof.
Notice that for every $t\in\mathbb{R}^N$ we have:
  • for $Z_2(t,\omega)$: $\left|\sin\left(\xi\sum_{i=1}^{N}t_i\right)\right|\le 1$ and $\left|\cos\left(\xi\sum_{i=1}^{N}t_i\right)\right|\le 1$;
  • for $Z_4(t,\omega)$: $\left|\operatorname{sech}\left(\mu\sum_{i=1}^{N}t_i\right)\right|\le 1$ and $\left|\tanh\left(\mu\sum_{i=1}^{N}t_i\right)\right|\le 1$;
  • for $Z_6(t,\omega)$: $0 < \dfrac{1}{1+\exp\left(-\ell_1\sum_{i=1}^{N}t_i\right)} < 1$ and $0 < \dfrac{1}{1+\exp\left(-\ell_2\sum_{i=1}^{N}t_i\right)} < 1$;
  • for $Z_7(t,\omega)$: $0 < e^{-e^{-\mu_1\sum_{i=1}^{N}t_i}} < 1$ and $0 < e^{-e^{-\mu_2\sum_{i=1}^{N}t_i}} < 1$.
Thus, the corresponding $h$ functions are bounded and continuous, and the claims follow from Theorem 15. □
Proposition 4.
Let $t\in\prod_{j=1}^{N}I_j$. Here, $I_j=[t_{1,j},t_{2,j}]$, where $t_{1,j},t_{2,j}\in\mathbb{R}$ with $t_{1,j}<t_{2,j}$ for $j=1,\ldots,N$; $t_1=(t_{1,1},\ldots,t_{1,N})$, $t_2=(t_{2,1},\ldots,t_{2,N})$. Let also $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then, for $i=1,\ldots,7$,
(i) $\displaystyle\left|{}_kL_n\bigl(EZ_i,t\bigr) - (EZ_i)(t) - \sum_{j=1}^{N}\frac{\partial (EZ_i)(t)}{\partial t_j}\,{}_kL_n\bigl((\cdot - t_j), t\bigr)\right|$
$\displaystyle\le \gamma_{k,N}\left\{\frac{N}{n^{\beta}}\max_{j\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial (EZ_i)}{\partial t_j},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\max_{j\in\{1,\ldots,N\}}\left\|\frac{\partial (EZ_i)}{\partial t_j}\right\|_\infty\right\},$
(ii) Assume $\frac{\partial (EZ_i)}{\partial t_j}(t_0)=0$, $j=1,\ldots,N$, where $t_0\in\prod_{j=1}^{N}[t_{1,j},t_{2,j}]$. Then,
$\displaystyle\bigl|{}_kL_n\bigl(EZ_i,t_0\bigr) - (EZ_i)(t_0)\bigr| \le \gamma_{k,N}\cdot\left\{\frac{N}{n^{\beta}}\max_{j\in\{1,\ldots,N\}}\omega_1\!\left(\frac{\partial (EZ_i)}{\partial t_j},\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\max_{j\in\{1,\ldots,N\}}\left\|\frac{\partial (EZ_i)}{\partial t_j}\right\|_\infty\right\};$
notice in the latter the extremely high rate of convergence, $n^{-2\beta}$.
Proof.
From Corollary 2. □

6. Specific Applications

Let $(\Omega,\mathcal{F},P)$, where $\Omega$ is the set of non-negative integers, be a probability space, and let $Y_{1,1}$ and $Y_{2,1}$ be real-valued random variables on $\Omega$ following Poisson distributions with parameters $\lambda_1$ and $\lambda_2\in(0,\infty)$, respectively.
We consider the stochastic process $W_1(t,\omega)$, where $t=(t_1,\ldots,t_N)\in\mathbb{R}^N$ and $\omega\in\Omega$, as follows:
$$W_1(t,\omega) = \sin\left(\xi\sum_{j=1}^{N}t_j\right) Y_{1,1}(\omega) + \cos\left(\xi\sum_{j=1}^{N}t_j\right) Y_{2,1}(\omega),$$
where $\xi>0$ is fixed.
Since $E(Y_{1,1})=\lambda_1$ and $E(Y_{2,1})=\lambda_2$, the expectation of $W_1$ is
$$E(W_1)(t) = \sin\left(\xi\sum_{j=1}^{N}t_j\right)\lambda_1 + \cos\left(\xi\sum_{j=1}^{N}t_j\right)\lambda_2.$$
Furthermore, for $j^{*}=1,\ldots,N$ we have
$$\frac{\partial E(W_1)(t)}{\partial t_{j^{*}}} = \xi\left[\cos\left(\xi\sum_{j=1}^{N}t_j\right)\lambda_1 - \sin\left(\xi\sum_{j=1}^{N}t_j\right)\lambda_2\right] =: D_1(t).$$
Notice that
$$\frac{\partial E(W_1)(t)}{\partial t_{j^{*}}} = \frac{\partial E(W_1)(t)}{\partial t_{i^{*}}} = D_1(t)$$
for every $j^{*}, i^{*} = 1,\ldots,N$.
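The formula for $D_1$ can be checked numerically; the following sketch (ours; $\xi$, $\lambda_1$, $\lambda_2$, and $t$ are arbitrary illustrative values) compares it with a central finite difference.

```python
import numpy as np

# Finite-difference check of D_1(t) = dE(W_1)/dt_j (illustrative parameter values).
xi, lam1, lam2, N = 0.8, 2.0, 3.0, 3

def EW1(t):
    u = np.sum(t)
    return np.sin(xi * u) * lam1 + np.cos(xi * u) * lam2

def D1(t):
    u = np.sum(t)
    return xi * (np.cos(xi * u) * lam1 - np.sin(xi * u) * lam2)

t, h, j = np.array([0.4, -1.1, 2.3]), 1e-6, 1
e = np.zeros(N); e[j] = 1.0
print((EW1(t + h * e) - EW1(t - h * e)) / (2 * h), D1(t))   # the two numbers agree
```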
Next, we consider $(\Omega,\mathcal{F},P)$, where $\Omega=\mathbb{R}$, a probability space, with $Y_{1,2}$ and $Y_{2,2}$ real-valued random variables on $\Omega$ following Gaussian distributions with expectations $\hat{\mu}_1$ and $\hat{\mu}_2\in\mathbb{R}$, respectively. We consider the stochastic process $W_2(t,\omega)$, where $t=(t_1,\ldots,t_N)\in\mathbb{R}^N$ and $\omega\in\Omega$, as follows:
$$W_2(t,\omega) = \operatorname{sech}\left(\mu\sum_{j=1}^{N}t_j\right) Y_{1,2}(\omega) + \tanh\left(\mu\sum_{j=1}^{N}t_j\right) Y_{2,2}(\omega),$$
where $\mu>0$ is fixed.
Since $E(Y_{1,2})=\hat{\mu}_1$ and $E(Y_{2,2})=\hat{\mu}_2$, the expectation of $W_2$ is
$$E(W_2)(t) = \hat{\mu}_1\operatorname{sech}\left(\mu\sum_{j=1}^{N}t_j\right) + \hat{\mu}_2\tanh\left(\mu\sum_{j=1}^{N}t_j\right).$$
Furthermore, for $j^{*}=1,\ldots,N$ we have
$$\frac{\partial E(W_2)(t)}{\partial t_{j^{*}}} = \mu\left[-\hat{\mu}_1\operatorname{sech}\left(\mu\sum_{j=1}^{N}t_j\right)\tanh\left(\mu\sum_{j=1}^{N}t_j\right) + \hat{\mu}_2\left(1-\tanh^2\left(\mu\sum_{j=1}^{N}t_j\right)\right)\right] =: D_2(t).$$
Notice that
$$\frac{\partial E(W_2)(t)}{\partial t_{j^{*}}} = \frac{\partial E(W_2)(t)}{\partial t_{i^{*}}} = D_2(t)$$
for every $j^{*}, i^{*} = 1,\ldots,N$.
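Again, the expression for $D_2$ (with the sign convention written above) can be verified by a finite difference; the sketch below is ours, with arbitrary $\mu$, $\hat{\mu}_1$, $\hat{\mu}_2$, and $t$.

```python
import numpy as np

# Finite-difference check of D_2(t) (illustrative parameter values).
mu, m1, m2, N = 0.6, 1.5, -0.4, 3
sech = lambda u: 1.0 / np.cosh(u)

def EW2(t):
    u = mu * np.sum(t)
    return m1 * sech(u) + m2 * np.tanh(u)

def D2(t):
    u = mu * np.sum(t)
    return mu * (-m1 * sech(u) * np.tanh(u) + m2 * (1.0 - np.tanh(u) ** 2))

t, h, j = np.array([0.2, 1.0, -0.7]), 1e-6, 2
e = np.zeros(N); e[j] = 1.0
print((EW2(t + h * e) - EW2(t - h * e)) / (2 * h), D2(t))   # the two numbers agree
```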
Next, we consider $(\Omega,\mathcal{F},P)$, where $\Omega=[0,\infty)$, a probability space, and $Y_{1,3}$, $Y_{2,3}$ real-valued random variables on $\Omega$ following Weibull distributions with scale parameter $1$ and shape parameters $\gamma_1$ and $\gamma_2\in(0,\infty)$, respectively.
We consider the stochastic process $W_3(t,\omega)$, where $t=(t_1,\ldots,t_N)\in\mathbb{R}^N$ and $\omega\in\Omega$, as follows:
$$W_3(t,\omega) = \frac{1}{1+\exp\left(-\ell_1\prod_{j=1}^{N}t_j\right)}\, Y_{1,3}(\omega) + \frac{1}{1+\exp\left(-\ell_2\prod_{j=1}^{N}t_j\right)}\, Y_{2,3}(\omega),$$
where $\ell_1,\ell_2>0$ are fixed.
Since $E(Y_{1,3})=\Gamma\!\left(1+\frac{1}{\gamma_1}\right)$ and $E(Y_{2,3})=\Gamma\!\left(1+\frac{1}{\gamma_2}\right)$, where $\Gamma(\cdot)$ is the Gamma function, the expectation of $W_3$ is
$$E(W_3)(t) = \Gamma\!\left(1+\frac{1}{\gamma_1}\right)\frac{1}{1+\exp\left(-\ell_1\prod_{j=1}^{N}t_j\right)} + \Gamma\!\left(1+\frac{1}{\gamma_2}\right)\frac{1}{1+\exp\left(-\ell_2\prod_{j=1}^{N}t_j\right)}.$$
Furthermore, for $j=1,\ldots,N$ we have
$$\frac{\partial E(W_3)(t)}{\partial t_j} = \left(\prod_{\substack{r=1\\ r\ne j}}^{N}t_r\right)\left[\ell_1\Gamma\!\left(1+\frac{1}{\gamma_1}\right)\frac{\exp\left(-\ell_1\prod_{r=1}^{N}t_r\right)}{\left(1+\exp\left(-\ell_1\prod_{r=1}^{N}t_r\right)\right)^2} + \ell_2\Gamma\!\left(1+\frac{1}{\gamma_2}\right)\frac{\exp\left(-\ell_2\prod_{r=1}^{N}t_r\right)}{\left(1+\exp\left(-\ell_2\prod_{r=1}^{N}t_r\right)\right)^2}\right].$$
Notice that
$$\frac{\partial E(W_3)(t)}{\partial t_j} = \left(\prod_{\substack{i=1\\ i\ne j}}^{N}t_i\right) D_3(t)$$
for every $j=1,\ldots,N$, where
$$D_3(t) := \ell_1\Gamma\!\left(1+\frac{1}{\gamma_1}\right)\frac{\exp\left(-\ell_1\prod_{j=1}^{N}t_j\right)}{\left(1+\exp\left(-\ell_1\prod_{j=1}^{N}t_j\right)\right)^2} + \ell_2\Gamma\!\left(1+\frac{1}{\gamma_2}\right)\frac{\exp\left(-\ell_2\prod_{j=1}^{N}t_j\right)}{\left(1+\exp\left(-\ell_2\prod_{j=1}^{N}t_j\right)\right)^2}.$$
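The chain-rule identity $\frac{\partial E(W_3)}{\partial t_j}=\bigl(\prod_{i\ne j}t_i\bigr)D_3(t)$ can be checked numerically as well; the sketch below is ours and uses the logistic-of-a-product form written above with arbitrary $\ell_1,\ell_2,\gamma_1,\gamma_2$, and $t$.

```python
import numpy as np
from math import gamma

# Finite-difference check of dE(W_3)/dt_j = (prod_{i != j} t_i) * D_3(t)
# (illustrative parameter values, under the form written above).
l1, l2, g1, g2, N = 0.9, 1.7, 2.0, 3.5, 3
c1, c2 = gamma(1 + 1 / g1), gamma(1 + 1 / g2)   # Weibull means Gamma(1 + 1/gamma_i)

def EW3(t):
    p = np.prod(t)
    return c1 / (1 + np.exp(-l1 * p)) + c2 / (1 + np.exp(-l2 * p))

def D3(t):
    p = np.prod(t)
    return (l1 * c1 * np.exp(-l1 * p) / (1 + np.exp(-l1 * p)) ** 2
            + l2 * c2 * np.exp(-l2 * p) / (1 + np.exp(-l2 * p)) ** 2)

t, h, j = np.array([0.5, 1.2, -0.8]), 1e-6, 0
e = np.zeros(N); e[j] = 1.0
fd = (EW3(t + h * e) - EW3(t - h * e)) / (2 * h)
print(fd, np.prod(np.delete(t, j)) * D3(t))      # the two numbers agree
```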
We present the results in the following.
Proposition 5.
Let $t\in\prod_{j=1}^{N}I_j$. Here, $I_j=[t_{1,j},t_{2,j}]$, where $t_{1,j},t_{2,j}\in\mathbb{R}$ with $t_{1,j}<t_{2,j}$, for every $j=1,\ldots,N$. Let also $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then, for $i=1,2,3$,
(i) $\displaystyle\bigl|{}_kL_n\bigl(EW_i,t\bigr) - (EW_i)(t)\bigr| \le \gamma_{k,N}\left[\omega_1\!\left(EW_i,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|EW_i\|_\infty\right] =: \lambda_1,$
(ii) $\bigl\|{}_kL_n(EW_i) - EW_i\bigr\|_\infty \le \lambda_1.$
Proof.
From Proposition 2. □
Next, we present
Proposition 6.
Let $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then, for $i=1,2,3$,
(i) $\displaystyle\bigl|{}_k\overline{L}_n\bigl(EW_i,t\bigr) - (EW_i)(t)\bigr| \le \omega_1\!\left(EW_i,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,\|EW_i\|_\infty =: \lambda_2,$
(ii) $\bigl\|{}_k\overline{L}_n(EW_i) - EW_i\bigr\|_\infty \le \lambda_2.$
Proof.
From Proposition 3. □
Proposition 7.
Let $t\in\prod_{j=1}^{N}I_j$. Here, $I_j=[t_{1,j},t_{2,j}]$, where $t_{1,j},t_{2,j}\in\mathbb{R}$ with $t_{1,j}<t_{2,j}$ for $j=1,\ldots,N$; $t_1=(t_{1,1},\ldots,t_{1,N})$, $t_2=(t_{2,1},\ldots,t_{2,N})$. Let also $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then, for $i=1,2$,
(i) $\displaystyle\left|{}_kL_n\bigl(EW_i,t\bigr) - (EW_i)(t) - D_i(t)\sum_{j=1}^{N}{}_kL_n\bigl((\cdot - t_j), t\bigr)\right|$
$\displaystyle\le \gamma_{k,N}\left\{\frac{N}{n^{\beta}}\,\omega_1\!\left(D_i,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\,\|D_i\|_\infty\right\},$
(ii) Assume $D_i(t_0)=0$, where $t_0\in\prod_{j=1}^{N}[t_{1,j},t_{2,j}]$. Then,
$\displaystyle\bigl|{}_kL_n\bigl(EW_i,t_0\bigr) - (EW_i)(t_0)\bigr| \le \gamma_{k,N}\cdot\left\{\frac{N}{n^{\beta}}\,\omega_1\!\left(D_i,\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\,\|D_i\|_\infty\right\};$
notice in the latter the extremely high rate of convergence, $n^{-2\beta}$.
Proof.
From Proposition 4. □
Proposition 8.
Let $t\in\prod_{j=1}^{N}I_j$. Here, $I_j=[t_{1,j},t_{2,j}]$, where $t_{1,j},t_{2,j}\in\mathbb{R}$ with $t_{1,j}<t_{2,j}$ for $j=1,\ldots,N$; $t_1=(t_{1,1},\ldots,t_{1,N})$, $t_2=(t_{2,1},\ldots,t_{2,N})$. Let also $0<\beta<1$, $n,N\in\mathbb{N}$, and $k=1,2$. Then,
(i) $\displaystyle\left|{}_kL_n\bigl(EW_3,t\bigr) - (EW_3)(t) - D_3(t)\sum_{j=1}^{N}\left(\prod_{\substack{i=1\\ i\ne j}}^{N}t_i\right){}_kL_n\bigl((\cdot - t_j), t\bigr)\right|$
$\displaystyle\le \gamma_{k,N}\left\{\frac{N}{n^{\beta}}\max_{j\in\{1,\ldots,N\}}\omega_1\!\left(D_3\prod_{\substack{i=1\\ i\ne j}}^{N}t_i,\;\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\max_{j\in\{1,\ldots,N\}}\left\|D_3\prod_{\substack{i=1\\ i\ne j}}^{N}t_i\right\|_\infty\right\},$
(ii) Assume that there exists $t_0=(t_{0,1},t_{0,2},\ldots,t_{0,N})$, with $t_0\in\prod_{j=1}^{N}[t_{1,j},t_{2,j}]$, such that $t_{0,k}=t_{0,l}=0$ for some $k,l\in\{1,\ldots,N\}$ with $k\ne l$. Then,
$\displaystyle\bigl|{}_kL_n\bigl(EW_3,t_0\bigr) - (EW_3)(t_0)\bigr| \le \gamma_{k,N}\cdot\left\{\frac{N}{n^{\beta}}\max_{j\in\{1,\ldots,N\}}\omega_1\!\left(D_3\prod_{\substack{i=1\\ i\ne j}}^{N}t_i,\;\frac{1}{n^\beta}\right) + 2\,\alpha_{k,n}(\beta)\,N\,\|t_2-t_1\|_\infty\max_{j\in\{1,\ldots,N\}}\left\|\frac{\partial E(W_3)}{\partial t_j}\right\|_\infty\right\};$
notice in the latter the extremely high rate of convergence, $n^{-2\beta}$.
Proof.
From Proposition 4. □
For the latest developments in deep learning see [19].
For work concerning neural network applications in biology, environmental science, and demography, see [20,21].

Author Contributions

Conceptualization, G.A.A.; Writing—original draft, G.A.A. and D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anastassiou, G.A. Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 1997, 212, 237–262.
  2. Anastassiou, G.A. Quantitative Approximations; Chapman & Hall/CRC: Boca Raton, FL, USA, 2001.
  3. Chen, Z.; Cao, F. The approximation operators with sigmoidal functions. Comput. Math. Appl. 2009, 58, 758–765.
  4. Anastassiou, G.A. Univariate hyperbolic tangent neural network approximation. Math. Comput. Model. 2011, 53, 1111–1132.
  5. Anastassiou, G.A. Multivariate hyperbolic tangent neural network approximation. Comput. Math. Appl. 2011, 61, 809–821.
  6. Anastassiou, G.A. Multivariate sigmoidal neural network approximation. Neural Netw. 2011, 24, 378–386.
  7. Anastassiou, G.A. Intelligent Systems: Approximation by Artificial Neural Networks; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2011; Volume 19.
  8. Anastassiou, G.A. Univariate sigmoidal neural network approximation. J. Comput. Anal. Appl. 2012, 14, 659–690.
  9. Costarelli, D.; Spigler, R. Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 2013, 44, 101–106.
  10. Costarelli, D.; Spigler, R. Multivariate neural network operators with sigmoidal activation functions. Neural Netw. 2013, 48, 72–77.
  11. Kac, M.; Siegert, A.J.F. An explicit representation of a stationary Gaussian process. Ann. Math. Stat. 1947, 18, 438–442.
  12. Kozachenko, Y.; Pogorilyak, O.; Rozora, I.; Tegza, A. Simulation of Stochastic Processes with Given Accuracy and Reliability; Elsevier: Amsterdam, The Netherlands, 2016; pp. 71–104.
  13. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 7, 115–133.
  14. Mitchell, T.M. Machine Learning; WCB-McGraw-Hill: New York, NY, USA, 1997.
  15. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998.
  16. Anastassiou, G.A. Intelligent Systems II: Complete Approximation by Neural Network Operators; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2016.
  17. Brauer, F.; Castillo-Chavez, C. Mathematical Models in Population Biology and Epidemiology; Springer: New York, NY, USA, 2001; pp. 8–9.
  18. Hritonenko, N.; Yatsenko, Y. Mathematical Modeling in Economics, Ecology and the Environment; Reprint; Science Press: Beijing, China, 2006.
  19. Fan, J.; Ma, C.; Zhong, Y. A selective overview of deep learning. Stat. Sci. 2021, 36, 264–290.
  20. Tambouratzis, T.; Souliou, D.; Chalikias, M.; Gregoriades, A. Combining probabilistic neural networks and decision trees for maximally accurate and efficient accident prediction. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–8.
  21. Tambouratzis, T.; Canellidis, V.; Chalikias, M. Realizable and adaptive maximization of environmental sustainability at the country level using evolutionary strategies. Commun. Stat. Case Stud. Anal. Appl. 2021, 7, 590–623.