Article

Matrix Expression of Convolution and Its Generalized Continuous Form

by Young Hee Geum 1,†, Arjun Kumar Rathie 2,† and Hwajoon Kim 3,*,†
1 Department of Applied Mathematics, Dankook University, Cheonan 31116, Korea
2 Department of Mathematics, Vedant College of Engineering & Technology (Rajasthan Technical University), Kota 324010, India
3 Department of IT Engineering, Kyungdong University, Yangju 11458, Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2020, 12(11), 1791; https://doi.org/10.3390/sym12111791
Submission received: 30 September 2020 / Revised: 16 October 2020 / Accepted: 23 October 2020 / Published: 29 October 2020
(This article belongs to the Special Issue Integral Equations: Theories, Approximations and Applications)

Abstract: In this paper, we consider the matrix expression of convolution and its generalized continuous form. The matrix expression of convolution is applied effectively in convolutional neural networks (CNNs), and in this study we relate the concept of convolution in mathematics to that in CNNs. Convolution is, of course, a core process of deep learning, the learning method of deep neural networks. In addition, the generalized continuous form of convolution is expressed as a new variant of the Laplace-type transform that encompasses almost all existing integral transforms. Finally, we describe the theoretical content in as much detail as possible so that the paper is self-contained.

1. Introduction

Deep learning means the learning of deep neural networks, where a network is called deep if it has multiple hidden layers. Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction [1]. The convolution in a convolutional neural network (CNN) is the tool for obtaining a feature map from the original image data: it sweeps the original image with a kernel matrix and transforms the original data into a different shape. This distorted image is called a feature map. Therefore, in a CNN, convolution can be regarded as a tool that creates a feature map from the original image. Herein, the concept of convolution in artificial intelligence is described mathematically.
The core concept of a CNN is the convolution, which applies weights to the receptive fields only and transforms the original data into a feature map. This principle is similar to that of an integral transform: an integral transform maps a problem from the original domain to another domain in which it can be solved more easily. Since the matrix expression of convolution is an essential concept in artificial intelligence, we believe that this study is certainly meaningful. In addition, the generalized continuous form of convolution has also been studied, and this form is expressed as a new variant of the Laplace-type transform.
On the one hand, transform theory is extensively utilized in fields involving medical diagnostic equipment, such as magnetic resonance imaging and computed tomography. Typically, projection data are obtained by an integral transform, and an image is produced using an inverse transform. Although many plausible integral transforms exist, almost all of them can be interpreted as Laplace-type transforms. One of us proposed a comprehensive form of the Laplace-type integral transform in [2]. The present study investigates the matrix expression of convolution and its generalized continuous form.
In [2], a Laplace-type integral transform was proposed, expressed as
$$G_\alpha(f) = G(f) = u^\alpha \int_0^\infty e^{-\frac{t}{u}} f(t)\, dt. \qquad (1)$$
For $\alpha$ values of 0, $-1$, 1, and $-2$, we have, respectively, the Laplace [3], Sumudu [4], Elzaki [5], and Mohand [6] transforms. This form can be expressed in various manners. Replacing $t$ by $ut$, we have
$$G(f) = u^\beta \int_0^\infty e^{-t} f(ut)\, dt,$$
where $\beta = \alpha + 1$. In this form, $\beta$ values of 1, 0, 2, and $-1$ correspond to the Laplace, Sumudu, Elzaki, and Mohand transforms, respectively. If we substitute $u = 1/s$ in (1), we obtain the simplest form of the generalized integral transform as follows:
$$G(f) = s^\gamma \int_0^\infty e^{-st} f(t)\, dt,$$
where $\gamma = -\alpha$. In this form, the Laplace, Sumudu, Elzaki, and Mohand transforms have $\gamma$ values of 0, 1, $-1$, and 2, respectively. It is somewhat roundabout, but an essentially simple way to derive the Sumudu transform is to multiply the Laplace transform by $s$; similarly, the Elzaki transform is obtained by multiplying by $s^{-1}$, and the Mohand transform by $s^2$. The natural transform [7] can be obtained by substituting $f(ut)$ for $f(t)$. Additionally, by substituting $t = \ln x$, the Laplace-type transform $G(f)$ can be expressed as
$$s^{-\alpha} \int_1^\infty f(\ln x)\, x^{-s-1}\, dx.$$
As a similar form, there is the Mellin transform [8] of the form
$$\int_0^\infty f(x)\, x^{s-1}\, dx.$$
As shown above, many integral transforms wear their own fancy masks, but most of them can essentially be interpreted as Laplace-type transforms. From a different point of view, a slight change in the kernel results in a significant difference in integral transform theory. Meanwhile, plausible transforms exist, such as the Fourier, Radon, and Mellin transforms; typically, if the interval of integration and the power of the kernel are different, the result can be interpreted as a completely different transform. Studies using the Laplace transform were conducted in [9,10]: generalized solutions of the third-order Cauchy–Euler equation in the space of right-sided distributions were found in [9], the solution of the heat equation without boundary conditions was studied in [10], and further properties of Laplace-type transforms were investigated in [11]. As an application, a new class of Laplace-type integrals involving generalized hypergeometric functions has been studied [12,13]. As for research related to integral equations, Noeiaghdam et al. [14] presented a new scheme based on stochastic arithmetic, designed to guarantee the validity and accuracy of the homotopy analysis method; different kinds of integral equations, such as singular equations and equations of the first kind, are considered there to find optimal results by applying the proposed algorithms.
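As a quick numerical sanity check of this classification, the following sketch (our illustration, not part of the original paper; it assumes NumPy and SciPy are available) evaluates $G_\alpha(f)$ for the constant function $f(t) = 1$, for which the closed form is $G_\alpha(1) = u^{\alpha+1}$:

```python
import numpy as np
from scipy.integrate import quad

def G(f, u, alpha):
    """Laplace-type transform G_alpha(f)(u) = u^alpha * int_0^inf exp(-t/u) f(t) dt."""
    val, _ = quad(lambda t: np.exp(-t / u) * f(t), 0, np.inf)
    return u**alpha * val

u = 0.7
f = lambda t: 1.0  # constant test function; analytically G_alpha(1) = u^(alpha+1)
for name, alpha in [("Laplace", 0), ("Sumudu", -1), ("Elzaki", 1), ("Mohand", -2)]:
    print(name, G(f, u, alpha), u**(alpha + 1))  # the two columns agree
```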
The main objective of this study is to investigate the matrix expression of convolution and its generalized continuous form. The generalized continuous form of the matrix expression is carried out in the form of a new variant of the Laplace-type transform. The obtained results are as follows:
(1)
If the matrix representing the function (image) $f$ is $A$ and the matrix representing the function $g$ is $B$, then the convolution $f * g$ is represented by the sum of all elements of $A \circ B$, and this is the same as $tr(AB^T)$, where $\circ$ is array multiplication, $T$ is the transpose, and $tr$ is the trace. Thus, the convolution in artificial intelligence (AI) is the same as $tr(AB^T)$.
(2)
The generalized continuous form of the convolution in AI can be represented as
$$V(f) = \Phi(u) \int_0^\infty e^{-t\Delta} f(t)\, dt,$$
where Φ ( u ) is an arbitrary bounded function and
$$\Delta = \Delta(\delta, u) = \frac{\ln\big[1 + \frac{\delta - 1}{u}\big]}{\delta - 1}.$$

2. Matrix Expression of Convolution in Convolutional Neural Network (CNN)

Note that functions can be interpreted as images in artificial intelligence (AI). By discretization, the convolution is changed from $\int_0^t f(\tau)\, g(t - \tau)\, d\tau$ to
$$\sum_{\tau = 0}^{t} f(\tau)\, g(t - \tau).$$
The convolution in a CNN is the tool for obtaining a feature map from the original image data: it plays the role of sweeping the original image with a kernel matrix (or filter), and it transforms the original data into a different shape. To calculate the convolution, each $n \times n$ part of the original matrix is element-wise multiplied by the kernel matrix, and all of its components are added. Typically, a $3 \times 3$ kernel matrix is used. Pooling (or sub-sampling), on the other hand, is a simple job that reduces the size of the image made by convolution; it rests on the principle that resolution appears increased when the screen is reduced.
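For instance, the discretized sum above can be computed directly. The following sketch (our illustration, assuming NumPy) also compares it against numpy.convolve on the corresponding sequences:

```python
import numpy as np

def conv_sum(f, g, t):
    """Discrete convolution (f * g)(t) = sum_{tau=0}^{t} f(tau) g(t - tau)."""
    return sum(f(tau) * g(t - tau) for tau in range(t + 1))

f = lambda n: float(n)       # f(tau) = tau
g = lambda n: 2.0 ** (-n)    # g(tau) = 2^(-tau)
seq_f = [f(n) for n in range(6)]
seq_g = [g(n) for n in range(6)]
print([conv_sum(f, g, t) for t in range(6)])
print(np.convolve(seq_f, seq_g)[:6].tolist())  # same first six values
```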
Let the matrix representing the function $f$ be $A$ and the matrix representing the function $g$ be $B$. For two matrices $A$ and $B$ of the same dimension, the array multiplication (or sweeping) $A \circ B$ is given by
$$(A \circ B)_{ij} = (A)_{ij}\, (B)_{ij}.$$
For example, the array multiplication for 2 × 2 matrices is
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \circ \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae & bf \\ cg & dh \end{pmatrix}.$$
Array multiplication appears in lossy compression, such as the Joint Photographic Experts Group (JPEG) format, and in the decoding step. Let us look at an example.
Example 1.
In the classification field of AI, the pixel data are treated as a matrix. When the original image is
$$\begin{pmatrix} 1 & 2 & 3 & 0 & 0 \\ 0 & 1 & 2 & 1 & 0 \\ 3 & 0 & 1 & 0 & 1 \\ 1 & 0 & 2 & 1 & 0 \\ 0 & 1 & 1 & 2 & 0 \end{pmatrix},$$
array-multiplying the kernel matrix
$$\begin{pmatrix} 2 & 0 & 1 \\ 0 & 1 & 2 \\ 2 & 0 & 1 \end{pmatrix}$$
on the first $3 \times 3$ block, we obtain the matrix
$$\begin{pmatrix} 2 & 0 & 3 \\ 0 & 1 & 4 \\ 6 & 0 & 1 \end{pmatrix}.$$
Now, adding all of its components, we obtain 17. Next, if we array-multiply the kernel matrix by the $3 \times 3$ block
$$\begin{pmatrix} 2 & 3 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
one step to the right and add all the components, we get 8 with stride 1. If we continue this process to the final block
$$\begin{pmatrix} 1 & 0 & 1 \\ 2 & 1 & 0 \\ 1 & 2 & 0 \end{pmatrix},$$
we get 6. Consequently, the original matrix changes to
$$\begin{pmatrix} 17 & 8 & 10 \\ 8 & 5 & 10 \\ 12 & 8 & 6 \end{pmatrix}$$
by using the convolution kernel. This is called the convolved feature map.
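The sweep of Example 1 can be reproduced in a few lines. The following sketch (our illustration, assuming NumPy; the helper name convolve2d is our own) slides the $3 \times 3$ kernel over the $5 \times 5$ image with stride 1:

```python
import numpy as np

image = np.array([[1, 2, 3, 0, 0],
                  [0, 1, 2, 1, 0],
                  [3, 0, 1, 0, 1],
                  [1, 0, 2, 1, 0],
                  [0, 1, 1, 2, 0]])
kernel = np.array([[2, 0, 1],
                   [0, 1, 2],
                   [2, 0, 1]])

def convolve2d(img, ker, stride=1):
    """Sweep: element-wise multiply each window by the kernel and sum the result."""
    k = ker.shape[0]
    out = [[int(np.sum(img[i:i + k, j:j + k] * ker))
            for j in range(0, img.shape[1] - k + 1, stride)]
           for i in range(0, img.shape[0] - k + 1, stride)]
    return np.array(out)

print(convolve2d(image, kernel))  # [[17  8 10] [ 8  5 10] [12  8  6]]
```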
This is just an example for understanding; in a perceptron, the output is a value between $-1$ and 1 produced by the activation function. Note that the perceptron is an artificial network designed to mimic the brain's cognitive abilities. Therefore, the output of a neuron (or node) $Y$ can be represented as
$$Y = \operatorname{sign}\Big[\sum_{i=1}^{n} x_i w_i - \Theta\Big] = \begin{cases} +1 & \text{if } X \geq \Theta \\ -1 & \text{if } X < \Theta, \end{cases}$$
where $w$ is a weight, $\Theta$ is the threshold value, and $X$ is the activation with $X = \sum_{i=1}^{n} x_i w_i$. In the backpropagation algorithm of a deep neural network, the sigmoid function
$$Y_{\text{sigmoid}} = \frac{1}{1 + e^{-x}}$$
is used as the activation function [15]. This function is easy to differentiate and ensures that the neuron output lies in $[0, 1]$. If max-pooling is applied to the above convolved feature map, the resulting matrix becomes the $1 \times 1$ matrix $(17) = 17$.
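A minimal sketch of the activation and pooling steps just described (our illustration, assuming NumPy):

```python
import numpy as np

feature_map = np.array([[17, 8, 10],
                        [8, 5, 10],
                        [12, 8, 6]])

def sigmoid(x):
    """Activation squashing its input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Max-pooling over the whole 3x3 feature map yields the 1x1 matrix (17).
print(feature_map.max())                     # 17
print(sigmoid(np.array([-1.0, 0.0, 1.0])))   # outputs lie in (0, 1)
```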
As discussed above, convolution in AI can be obtained by array multiplication. We would like to associate this definition with matrix multiplication in mathematics.
Definition 1.
(Convolution in AI) If the matrix representing the function (image) $f$ is $A$ and the matrix representing the function $g$ is $B$, then the convolution $f * g$ is represented by the sum of all elements of $A \circ B$, and this is the same as $tr(AB^T)$, where $\circ$ is array multiplication, $T$ is the transpose, and $tr$ is the trace. Thus, the convolution in AI is the same as $tr(AB^T)$, the sum of all elements on the main diagonal of $AB^T$.
Typically, the convolution kernel is a $3 \times 3$ matrix, but for ease of understanding, let us consider $2 \times 2$ matrices.
Example 2.
If
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} e & f \\ g & h \end{pmatrix},$$
then the convolution in AI is calculated as $ae + bf + cg + dh$ by the sweeping. On the other hand,
$$AB^T = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & g \\ f & h \end{pmatrix} = \begin{pmatrix} ae + bf & ag + bh \\ ce + df & cg + dh \end{pmatrix},$$
and
$$tr(AB^T) = ae + bf + cg + dh,$$
where $T$ is the transpose and $tr$ is the trace. This is the same result as in AI.
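The identity of Definition 1 is easy to check numerically. The following sketch (our illustration, assuming NumPy) compares the sum of the elements of $A \circ B$ with $tr(AB^T)$ for random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

sweep = np.sum(A * B)      # sum of all elements of the array product A ∘ B
trace = np.trace(A @ B.T)  # tr(A B^T)
assert np.isclose(sweep, trace)
print(sweep, trace)
```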

3. Generalized Continuous Form of Matrix Expression of Convolution

If the matrix representing a function $f$ is $A$ and the matrix representing a function $g$ is $B$, then the convolution of the functions $f$ and $g$ can be denoted by $tr(AB^T)$. Intuitively, the diagonal part of $B^T$ corresponds to a graph of $g(t - \tau)$. The overlapping part of the graph can be interpreted as the concept of intersection, that is, the concept of multiplication. Thus, the generalized continuous form of the convolution in AI can be represented as a variant of the Laplace-type transform given by
$$G_\alpha(f) = G(f) = u^\alpha \int_0^\infty e^{-\frac{t}{u}} f(t)\, dt = u^\alpha \int_0^\infty \lim_{\delta \to 1} \Big[1 + \frac{\delta - 1}{u}\Big]^{-\frac{t}{\delta - 1}} f(t)\, dt.$$
If $f(t)$ is a function defined for all $t \geq 0$, the integral of the Laplace-type transform $v_i(f)$ is given by
$$F(\Delta) = v_i(f) = \int_0^\infty e^{-t \Delta(\delta, u)} f(t)\, dt = \int_0^\infty \Big[1 + \frac{\delta - 1}{u}\Big]^{-\frac{t}{\delta - 1}} f(t)\, dt$$
for $\delta > 1$, with
$$\Delta(\delta, u) = \frac{\ln\big[1 + \frac{\delta - 1}{u}\big]}{\delta - 1}.$$
Additionally, let $\Phi(u)$ be an arbitrary bounded function and let $V(f)$ be a variant of the Laplace-type transform of $f(t)$. If $f(t)$ is a function defined for all $t \geq 0$, $V(f)$ is defined by
$$V(f) = \Phi(u) \int_0^\infty e^{-t\Delta} f(t)\, dt = \Phi(u) \int_0^\infty \Big[1 + \frac{\delta - 1}{u}\Big]^{-\frac{t}{\delta - 1}} f(t)\, dt$$
for $\delta > 1$, with $\Delta = \Delta(\delta, u)$.
Based on the above two definitions, it is clear that this variant of the Laplace-type transform can be written as $V(f) = \Phi(u) \cdot v_i(f)$ for an arbitrary bounded function $\Phi(u)$. Let us now examine the relation with other integral transforms. Since
$$V(f) = \Phi(u) \int_0^\infty e^{-t\Delta} f(t)\, dt,$$
if $\delta \to 1^+$ and $\Phi(u) = u^\alpha$, then $V$ corresponds to the $G_\alpha$ transform. When we take $\delta \to 1$, $\Phi(u) = 1$, and $u = 1/s$, we get the Laplace transform. Similarly, when we take $\delta \to 1$ with $\Phi(u) = u^{-1}$ (respectively, $\Phi(u) = u$), we get the Sumudu transform (respectively, the Elzaki transform). To obtain a simple form of generalization, it would suffice to set $\Phi(u)$ to $u^\alpha$ for an arbitrary integer $\alpha$. However, we judge that an arbitrary bounded function $\Phi(u)$ is more suitable than $u^\alpha$ as a generalization, because $\Phi(u)$ can express more integral transforms.
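These limit relations can be checked numerically. The following sketch (our illustration, assuming SciPy, and using the form of $\Delta(\delta, u)$ written above) shows $v_i(e^{-t})$ approaching the Laplace value $1/(s + 1)$ with $s = 1/u$ as $\delta \to 1^+$:

```python
import numpy as np
from scipy.integrate import quad

def v_i(f, u, delta):
    """Integral part of the variant: int_0^inf [1 + (delta-1)/u]^(-t/(delta-1)) f(t) dt."""
    Delta = np.log(1.0 + (delta - 1.0) / u) / (delta - 1.0)
    val, _ = quad(lambda t: np.exp(-t * Delta) * f(t), 0, np.inf)
    return val

f = lambda t: np.exp(-t)  # test function; its Laplace transform is 1/(s + 1)
u = 0.5                   # with u = 1/s, i.e., s = 2, the limit should be 1/3
for delta in [2.0, 1.5, 1.1, 1.01, 1.001]:
    print(delta, v_i(f, u, delta))
print("limit:", 1.0 / (1.0 / u + 1.0))
```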
Lemma 1.
(Lebesgue dominated convergence theorem [16,17]). Let $(X, \mathcal{M}, \mu)$ be a measure space and suppose $\{f_n\}$ is a sequence of extended real-valued measurable functions defined on $X$ such that
(a) $\lim_{n \to \infty} f_n(x) = f(x)$ exists $\mu$-a.e.;
(b) there is an integrable function $g$ such that, for each $n$, $|f_n| \leq g$ $\mu$-a.e.
Then $f$ is integrable and
$$\lim_{n \to \infty} \int_X f_n\, d\mu = \int_X f\, d\mu.$$
Beppo Levi's theorem is a special form of Lemma 1. Its content is as follows:
$$\int \sum_{n=1}^{\infty} g_n\, d\mu = \sum_{n=1}^{\infty} \int g_n\, d\mu,$$
where $(g_n)$ is a nondecreasing sequence. The details can be found on page 71 of [16]. Note that the convolution of $f$ and $g$ is given by
$$(f * g)(t) = \int_0^t f(\tau)\, g(t - \tau)\, d\tau.$$
The basic properties of the transform are collected in the following theorem. Since the proofs are not difficult, we prove only a few representative items.
Theorem 1.
(1) 
(Duality with the Laplace transform) If $\pounds(f) = F(s)$ is the Laplace transform of a function $f(t)$, then $V$ satisfies the relation $V(f) = \Phi(u) \cdot F(\Delta)$.
(2) 
(Shifting theorem) If $f(t)$ has the transform $F(\Delta)$, then $e^{at} f(t)$ has the transform $\Phi(u) \cdot F(\Delta - a)$. That is,
$$V[e^{at} f(t)] = \Phi(u) \cdot F(\Delta - a).$$
Moreover, if $f(t)$ has the transform $F(\Delta)$, then the shifted function $f(t - a)\, h(t - a)$ has the transform $e^{-a\Delta} \cdot \Phi(u) F(\Delta)$. In formula,
$$V[f(t - a)\, h(t - a)] = e^{-a\Delta} \cdot \Phi(u) F(\Delta),$$
where $h$ is the Heaviside function (we write $h$ since $u$ is reserved for the $u$-space variable).
(3) 
(Linearity) Let $V(f)$ be the variant of the Laplace-type transform. Then $V$ is a linear operation.
(4) 
(Existence) If $f(t)$ is defined and piecewise continuous on every finite interval on the semi-axis $t \geq 0$ and satisfies
$$|f(t)| \leq M e^{kt}$$
for all $t \geq 0$ and some constants $M$ and $k$, then the variant of the Laplace-type transform $V(f)$ exists for all $\Delta > k$.
(5) 
(Uniqueness) If the variant of Laplace-type transform of a given function exists, then it is uniquely determined.
(6) 
(Heaviside function)
$$v_i[h(t - a)] = \int_0^\infty e^{-t\Delta} h(t - a)\, dt = \int_a^\infty e^{-t\Delta} \cdot 1\, dt = e^{-a\Delta}/\Delta,$$
where $h$ is the Heaviside function.
(7) 
(Dirac's delta function) We consider the function
$$f_k(t - a) = \begin{cases} 1/k & \text{if } a \leq t \leq a + k \\ 0 & \text{otherwise.} \end{cases}$$
In a similar way to the Heaviside function, taking the integral of the Laplace-type transform, we get
$$v_i[f_k(t - a)] = \int_0^\infty e^{-t\Delta} f_k(t - a)\, dt = -\frac{1}{k\Delta} \big[ e^{-t\Delta} \big]_a^{a+k} = -\frac{1}{k\Delta} \big( e^{-(a+k)\Delta} - e^{-a\Delta} \big) = \frac{1}{k\Delta}\, e^{-a\Delta} \big( 1 - e^{-k\Delta} \big).$$
If we denote the limit of $f_k$ as $\delta(t - a)$, then
$$v_i(\delta(t - a)) = \lim_{k \to 0} v_i[f_k(t - a)] = e^{-a\Delta}.$$
(8) 
(Shifted data problems) For a given differential equation $y'' + a y' + b y = r(t)$ subject to $y(t_0) = c_0$ and $y'(t_0) = c_1$, where $t_0 \geq 0$ and $a$ and $b$ are constants, we can set $t = t_1 + t_0$. Then $t = t_0$ gives $t_1 = 0$, and so, writing $y_1(t_1) = y(t_1 + t_0)$, we have
$$y_1'' + a y_1' + b y_1 = r(t_1 + t_0), \quad y_1(0) = c_0, \quad y_1'(0) = c_1$$
for the input $r(t)$. Taking the variant transform, we can obtain the output $y(t)$.
(9) 
(Transforms of derivatives and integrals) Let a function $f$ be $n$-times differentiable and integrable, and let us regard $\Delta$ as an operator. Then the transform of the $n$-th derivative of $f(t)$ satisfies
$$v_i(f^{(n)}) = \Delta^n v_i(f) - \sum_{k=1}^{n} \Delta^{n-k} f^{(k-1)}(0)$$
and
$$V\Big[ \int_0^t f(\tau)\, d\tau \Big] = \Phi(u) \cdot \frac{1}{\Delta}\, v_i(f) = \frac{1}{\Delta}\, V(f).$$
(10) 
(Convolution) If two functions $f$ and $g$ are integrable, where $*$ denotes the convolution, then $V(f * g)$ satisfies
$$V(f * g) = \Phi(u) \cdot F(\Delta)\, G(\Delta)$$
for $v_i(f) = F(\Delta)$ and $v_i(g) = G(\Delta)$.
Proof. 
(5) Assume that $V(f)$ exists as both $V(f_1)$ and $V(f_2)$. If $V(f_1) \neq V(f_2)$ for $f_1 = f_2$, then
$$V(f_1) - V(f_2) = \Phi(u) \int_0^\infty e^{-t\Delta} f_1(t)\, dt - \Phi(u) \int_0^\infty e^{-t\Delta} f_2(t)\, dt = \Phi(u) \int_0^\infty e^{-t\Delta} \big( f_1(t) - f_2(t) \big)\, dt = V(f_1 - f_2) = 0.$$
This contradicts $V(f_1) \neq V(f_2)$, and hence the transform is uniquely determined. Conversely, if two functions $f_1$ and $f_2$ have the same transform (i.e., if $V(f_1) = V(f_2)$), then
$$V(f_1) - V(f_2) = \Phi(u) \int_0^\infty e^{-t\Delta} \big( f_1(t) - f_2(t) \big)\, dt = 0,$$
and so $f_1 = f_2$ a.e. Hence $f_1 = f_2$ except on a set of measure zero.
(9) Note that $v_i(f) = \int_0^\infty e^{-t\Delta} f(t)\, dt$, and let us approach the proof by induction. In the case $n = 1$,
$$v_i(f') = \int_0^\infty e^{-t\Delta} f'(t)\, dt.$$
Integrating by parts, we have
$$v_i(f') = \big[ e^{-t\Delta} f(t) \big]_0^\infty + \Delta \int_0^\infty e^{-t\Delta} f(t)\, dt = -f(0) + \Delta\, v_i(f),$$
which is the case $n = 1$ of the formula.
Next, let us suppose that the formula is valid for some $n = m$. Thus,
$$v_i(f^{(m)}) = \Delta^m v_i(f) - \sum_{k=1}^{m} \Delta^{m-k} f^{(k-1)}(0)$$
holds, where $f^{(m)}$ is the $m$-th derivative of $f$. Let us show that
$$v_i(f^{(m+1)}) = \Delta^{m+1} v_i(f) - \sum_{k=1}^{m+1} \Delta^{m+1-k} f^{(k-1)}(0).$$
Starting from the definition and using the case $n = 1$,
$$v_i(f^{(m+1)}) = \int_0^\infty e^{-t\Delta} f^{(m+1)}(t)\, dt = \Delta\, v_i(f^{(m)}) - f^{(m)}(0) = \Delta \Big[ \Delta^m v_i(f) - \sum_{k=1}^{m} \Delta^{m-k} f^{(k-1)}(0) \Big] - f^{(m)}(0) = \Delta^{m+1} v_i(f) - \sum_{k=1}^{m} \Delta^{m+1-k} f^{(k-1)}(0) - f^{(m)}(0) = \Delta^{m+1} v_i(f) - \sum_{k=1}^{m+1} \Delta^{m+1-k} f^{(k-1)}(0).$$
Therefore, the formula is valid for an arbitrary natural number $n$. Putting $g(t) = \int_0^t f(\tau)\, d\tau$, the relation
$$v_i(f(t)) = v_i(g'(t)) = \Delta\, v_i(g) - g(0) = \Delta\, v_i(g)$$
follows.  □
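As a cross-check, the following sketch (our illustration, assuming SciPy; the sample values $\Delta = 1.3$ and $a = 2$ are arbitrary) numerically verifies items (6), (7), and (9) of Theorem 1:

```python
import numpy as np
from scipy.integrate import quad

Delta, a = 1.3, 2.0  # arbitrary sample values with Delta > 0

# (6) Heaviside function: v_i[h(t-a)] = int_a^inf e^{-t*Delta} dt = e^{-a*Delta}/Delta
heaviside = quad(lambda t: np.exp(-t * Delta), a, np.inf)[0]
print(heaviside, np.exp(-a * Delta) / Delta)

# (7) Dirac's delta: v_i[f_k(t-a)] -> e^{-a*Delta} as k -> 0
for k in [1.0, 0.1, 0.01]:
    vi_fk = quad(lambda t: np.exp(-t * Delta) / k, a, a + k)[0]
    print(k, vi_fk, np.exp(-a * Delta))

# (9) Derivative rule: v_i(f') = Delta*v_i(f) - f(0), tested on f(t) = e^{-t}
vi = lambda f: quad(lambda t: np.exp(-t * Delta) * f(t), 0, np.inf)[0]
print(vi(lambda t: -np.exp(-t)), Delta * vi(lambda t: np.exp(-t)) - 1.0)
```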
As direct results of (9), $v_i(f') = \Delta\, v_i(f) - f(0)$ and $v_i(f'') = \Delta^2\, v_i(f) - \Delta f(0) - f'(0)$ follow.
For example, we consider $y'' - y = t$ subject to $y(0) = 1$ and $y'(0) = 1$. Taking the integral of the Laplace-type transform on both sides, we have
$$\Delta^2 Y - \Delta\, y(0) - y'(0) - Y = 1/\Delta^2$$
for $Y = v_i(y)$. Organizing this equation, we get $(\Delta^2 - 1) Y = \Delta + 1 + 1/\Delta^2$. Simplification gives
$$Y = \frac{1}{\Delta - 1} + \frac{1}{\Delta^2 - 1} - \frac{1}{\Delta^2}.$$
From the relation $V(f) = \Phi(u) \cdot F(\Delta)$, we have the solution
$$y(t) = -t + 2\sinh t + \cosh t = e^t + \sinh t - t.$$
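The following sketch (our illustration, assuming SciPy) confirms this closed form against a direct numerical integration of the initial value problem:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' - y = t as a first-order system, with y(0) = 1, y'(0) = 1
sol = solve_ivp(lambda t, Y: [Y[1], Y[0] + t], (0, 2), [1.0, 1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0, 2, 5)
closed = np.exp(t) + np.sinh(t) - t  # solution obtained by the transform method
print(np.max(np.abs(sol.sol(t)[0] - closed)))  # tiny residual, ~1e-9
```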
Example 3.
(Integral equations of Volterra type) Find the solutions of
(1) $y(t) + \int_0^t (t - \tau)\, y(\tau)\, d\tau = 1$;
(2) $y(t) - \int_0^t y(\tau) \sin(t - \tau)\, d\tau = t$;
(3) $y(t) - \int_0^t (1 + \tau)\, y(t - \tau)\, d\tau = 1 - \sinh t$.
Solution.
(1)
Since this equation is $y + y * t = 1$, taking the integral of the Laplace-type transform on both sides, we have
$$Y + Y \cdot \frac{1}{\Delta^2} = \frac{1}{\Delta}$$
for $Y = v_i(y)$. Thus
$$Y = \frac{\Delta}{\Delta^2 + 1},$$
and so we obtain the solution $y = \cos t$.
Let us check by differentiation. Differentiating the equation once gives $y' + \int_0^t y(\tau)\, d\tau = 0$, and differentiating again gives $y''(t) + y(t) = 0$. Since $\int_a^a f = 0$, we get $y(0) = 1$ and $y'(0) = 0$. Thus, we obtain $y = \cos t$. (A numerical check of all three solutions appears after this example.)
(2)
This is rewritten as a convolution:
$$y(t) - y * \sin t = t.$$
Taking the integral of the Laplace-type transform, we have
$$Y(u) - Y(u)\, \frac{1}{\Delta^2 + 1} = Y(u) \Big( 1 - \frac{1}{\Delta^2 + 1} \Big) = \frac{1}{\Delta^2}$$
for $Y = v_i(y)$. The solution is
$$Y(u) = \frac{\Delta^2 + 1}{\Delta^4} = \frac{1}{\Delta^2} + \frac{1}{\Delta^4},$$
which gives the answer
$$y(t) = t + \frac{1}{6} t^3.$$
(3)
Note that the equation is the same as $y - (1 + t) * y = 1 - \sinh t$. Taking the transform, we get
$$Y - \Big( \frac{1}{\Delta} + \frac{1}{\Delta^2} \Big) Y = \frac{1}{\Delta} - \frac{1}{\Delta^2 - 1},$$
and hence
$$Y \Big( 1 - \frac{1}{\Delta} - \frac{1}{\Delta^2} \Big) = \frac{\Delta^2 - \Delta - 1}{\Delta (\Delta^2 - 1)}.$$
Simplification gives
$$Y = \frac{\Delta}{\Delta^2 - 1},$$
and so we obtain the answer
$$y(t) = \cosh t$$
by the relation $V(f) = \Phi(u) \cdot F(\Delta)$.
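As announced in part (1), the following sketch (our illustration, assuming SciPy) substitutes the three solutions back into their integral equations and evaluates the convolution integrals numerically; all three residuals should vanish:

```python
import numpy as np
from scipy.integrate import quad

for t in [0.5, 1.0, 2.0]:
    # (1) cos t + int_0^t (t - tau) cos(tau) dtau should equal 1
    I1 = quad(lambda tau: (t - tau) * np.cos(tau), 0, t)[0]
    # (2) y - int_0^t y(tau) sin(t - tau) dtau should equal t, for y = t + t^3/6
    y2 = lambda s: s + s**3 / 6
    I2 = quad(lambda tau: y2(tau) * np.sin(t - tau), 0, t)[0]
    # (3) cosh t - int_0^t (1 + tau) cosh(t - tau) dtau should equal 1 - sinh t
    I3 = quad(lambda tau: (1 + tau) * np.cosh(t - tau), 0, t)[0]
    print(np.cos(t) + I1 - 1,
          y2(t) - I2 - t,
          np.cosh(t) - I3 - (1 - np.sinh(t)))  # all three columns ≈ 0
```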
Let us turn to the initial value problem via convolution. The initial value problem
$$a y'' + b y' + c y = f(t), \quad y(0) = y_0, \quad y'(0) = y_0'$$
gives
$$(a \Delta^2 + b \Delta + c)\, Y(\Delta) - (a \Delta + b)\, y(0) - a\, y'(0) = F(\Delta),$$
where $Y(\Delta) = v_i(y)$ and $F(\Delta) = v_i(f)$. Simplification gives
$$Y(\Delta) = \frac{1}{a \Delta^2 + b \Delta + c}\, F(\Delta) + y_0 \cdot \frac{a \Delta + b}{a \Delta^2 + b \Delta + c} + y_0' \cdot \frac{a}{a \Delta^2 + b \Delta + c}.$$
If we put the system function $H(\Delta) = (a \Delta^2 + b \Delta + c)^{-1}$, then
$$Y(\Delta) = H(\Delta) F(\Delta) + y_0\, (a \Delta + b)\, H(\Delta) + y_0'\, a\, H(\Delta).$$
Since $H(\Delta) F(\Delta) = v_i(h)\, v_i(f) = v_i(h * f)$ for $H(\Delta) = v_i(h(t))$, taking the inverse transform, we have
$$y = (h * f) + y_0\, v_i^{-1}\{ (a \Delta + b) H(\Delta) \} + y_0'\, v_i^{-1}\{ a H(\Delta) \}.$$
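To illustrate the system-function formula, the following sketch (our illustration, assuming SciPy) takes $a = 1$, $b = 0$, $c = 1$, so that $H(\Delta) = 1/(\Delta^2 + 1)$, $h(t) = \sin t$, $(a\Delta + b)H(\Delta) \leftrightarrow \cos t$, and $aH(\Delta) \leftrightarrow \sin t$, and compares the convolution solution with a direct numerical integration:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# y'' + y = f(t), y(0) = y0, y'(0) = y0p  (a = 1, b = 0, c = 1)
f, y0, y0p = (lambda t: t), 1.0, 0.0

def y_conv(t):
    """Solution from the system function: y = (h * f) + y0*cos t + y0p*sin t, h(t) = sin t."""
    hf = quad(lambda tau: np.sin(t - tau) * f(tau), 0, t)[0]
    return hf + y0 * np.cos(t) + y0p * np.sin(t)

sol = solve_ivp(lambda t, Y: [Y[1], f(t) - Y[0]], (0, 3), [y0, y0p],
                rtol=1e-10, atol=1e-12, dense_output=True)
for t in [0.5, 1.5, 3.0]:
    print(y_conv(t), sol.sol(t)[0])  # the two columns agree
```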
Theorem 2.
(Differentiation and integration of transforms) Let us put $Y = F(\Delta) = v_i(y)$. Then
$$(1)\ V(t^n y(t)) = \Phi(u) \cdot (-1)^n \frac{d^n}{d\Delta^n} F(\Delta), \qquad (2)\ V(y(t)/t) = \Phi(u) \int_\Delta^\infty F(\sigma)\, d\sigma.$$
Proof. 
This is an immediate consequence of $V(f) = \Phi(u) \cdot F(\Delta)$ and $V(f) = \Phi(u) \cdot v_i(f)$. For this reason, detailed proofs are omitted.
The statements below are immediate results of Theorem 2:
$$(1)\ V(t\, y(t)) = -\Phi(u)\, \frac{dY}{d\Delta}, \qquad (2)\ V(t\, y'(t)) = -\Phi(u) \Big( Y + \Delta \frac{dY}{d\Delta} \Big), \qquad (3)\ V(t\, y''(t)) = \Phi(u) \cdot \Big( -2 \Delta Y - \Delta^2 \frac{dY}{d\Delta} + y(0) \Big).$$
Let us examine examples concerning the temperature in an infinite bar and the displacement in a semi-infinite string using the variant of the Laplace-type transform.  □
Example 4.
(Semi-infinite string) Find the displacement w ( x , t ) of an elastic string subject to the following conditions [3].
(a) 
The string is initially at rest on the x-axis from x = 0 to ∞.
(b) 
For $t > 0$, the left end of the string is moved in a given fashion, namely, according to a single sine wave
$$w(0, t) = f(t) = \begin{cases} \sin 2t & \text{if } 0 \leq t \leq \pi \\ 0 & \text{otherwise.} \end{cases}$$
(c) 
Furthermore, $w(x, t) \to 0$ as $x \to \infty$ for $t \geq 0$.
Then the displacement $w$ is
$$w(x, t) = f\Big( t - \frac{x}{c} \Big)\, h\Big( t - \frac{x}{c} \Big) = \begin{cases} \sin 2\big( t - \frac{x}{c} \big) & \text{if } x/c < t < x/c + \pi \\ 0 & \text{otherwise}, \end{cases}$$
where $h$ is the Heaviside function.
The proof is simple; the interchange of limit and integral in the proof is justified by the Lebesgue dominated convergence theorem.
Example 5.
(Temperature in an infinite bar) Find the temperature $w$ in an infinite bar if the initial temperature is
$$f(x) = w(x, 0) = \begin{cases} k_0\ (\text{constant}) & \text{if } |x| < 1 \\ 0 & \text{otherwise} \end{cases}$$
with $w(0, t) = 0$.
Solution. Taking the integral of the Laplace-type transform on both sides of $w_t = c^2 w_{xx}$, we have
$$\Delta F - w(x, 0) = c^2 \frac{\partial^2 F}{\partial x^2}$$
for $F(x, u) = v_i[w(x, t)]$. Organizing the equality, we get
$$\frac{\partial^2 F}{\partial x^2} - \frac{\Delta}{c^2} F = -\frac{1}{c^2}\, w(x, 0).$$
Solving this equation by variation of parameters, we get
$$F(x, u) = A(u)\, e^{-\sqrt{\Delta}\, x/c} + B(u)\, e^{\sqrt{\Delta}\, x/c} + \frac{e^{-\sqrt{\Delta}\, x/c}}{2\sqrt{\Delta}/c} \int e^{\sqrt{\Delta}\, x/c}\, \frac{1}{c^2}\, w(x, 0)\, dx - \frac{e^{\sqrt{\Delta}\, x/c}}{2\sqrt{\Delta}/c} \int e^{-\sqrt{\Delta}\, x/c}\, \frac{1}{c^2}\, w(x, 0)\, dx,$$
where the Wronskian is $W = -2\sqrt{\Delta}/c$. The condition $\lim_{x \to \infty} f(x) = 0$ gives $\lim_{x \to \infty} F(x, u) = 0$, and hence $B(u) = 0$. Thus, we get
$$F(x, u) = A(u)\, e^{-\sqrt{\Delta}\, x/c} + \frac{w(x, 0)}{2 c \sqrt{\Delta}} \Big( e^{-\sqrt{\Delta}\, x/c} \int e^{\sqrt{\Delta}\, x/c}\, dx - e^{\sqrt{\Delta}\, x/c} \int e^{-\sqrt{\Delta}\, x/c}\, dx \Big),$$
since $w(x, 0)$ is constant on each of the regions $|x| < 1$ and $|x| > 1$.
By direct calculation, we have
$$F(x, u) = A(u)\, e^{-\sqrt{\Delta}\, x/c} + \frac{w(x, 0)}{\Delta}.$$
From the formula $v_i(f) = F(\Delta)$ for $F(s) = \pounds(f)$ and $s = \Delta$, we know
$$v_i\Big( \frac{k}{2 \sqrt{\pi t^3}}\, e^{-\frac{k^2}{4t}} \Big) = e^{-k \sqrt{\Delta}} \quad (k > 0)$$
and $v_i(1) = 1/\Delta$. Taking the inverse transform, we obtain the temperature $w(x, t)$ as follows:
$$w(x, t) = A(t) * \frac{x}{2 c \sqrt{\pi t^3}}\, e^{-\frac{x^2}{4 c^2 t}} + k_0$$
on $|x| < 1$, where $*$ is the convolution. In the case $|x| > 1$, we have the solution
$$w(x, t) = A(t) * \frac{x}{2 c \sqrt{\pi t^3}}\, e^{-\frac{x^2}{4 c^2 t}}.$$
In the above equality, we note that
$$v_i^{-1}\big[ A(u)\, e^{-\sqrt{\Delta}\, x/c} \big] = v_i^{-1}\Big[ v_i(A(t)) \cdot v_i\Big( \frac{x}{2 c \sqrt{\pi t^3}}\, e^{-\frac{x^2}{4 c^2 t}} \Big) \Big] = v_i^{-1}\Big[ v_i\Big\{ A(t) * \frac{x}{2 c \sqrt{\pi t^3}}\, e^{-\frac{x^2}{4 c^2 t}} \Big\} \Big] = A(t) * \frac{x}{2 c \sqrt{\pi t^3}}\, e^{-\frac{x^2}{4 c^2 t}},$$
because $v_i(f * g) = F(\Delta)\, G(\Delta)$ for $v_i(f) = F(\Delta)$ and $v_i(g) = G(\Delta)$.
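The transform pair used above is easy to verify numerically. The following sketch (our illustration, assuming SciPy; the sample values are arbitrary) evaluates the defining integral and compares it with $e^{-k\sqrt{\Delta}}$:

```python
import numpy as np
from scipy.integrate import quad

Delta, k = 1.7, 1.2  # arbitrary sample values with Delta > 0 and k > 0

def integrand(t):
    """Heat kernel (k / (2*sqrt(pi t^3))) e^{-k^2/(4t)} times the transform kernel e^{-t*Delta}."""
    if t <= 0:
        return 0.0  # the kernel vanishes as t -> 0+
    return np.exp(-t * Delta) * k / (2 * np.sqrt(np.pi * t**3)) * np.exp(-k**2 / (4 * t))

val = quad(integrand, 0, np.inf)[0]
print(val, np.exp(-k * np.sqrt(Delta)))  # both ≈ 0.209
```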

4. Conclusions

In this study, the concept of convolution in convolutional neural networks (CNNs) was presented mathematically and connected with the concept of convolution in mathematics. As a continuous form of the convolution in CNNs, a new variant of the Laplace-type transform has been proposed. In future work, we will study how the convolution in a CNN changes as the stride changes. In addition, we shall explore the possibility of applying our newly defined Laplace-type transform to obtain new and interesting results involving generalized hypergeometric functions, which would unify and generalize results available in the literature and may be potentially useful from an applications point of view.

Author Contributions

Conceptualization, A.K.R.; validation, Y.H.G.; and writing, H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The corresponding author (H.K.) acknowledges the support of the Kyungdong University Research Fund, 2021. The authors are also grateful to the anonymous referees whose valuable suggestions and comments significantly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  2. Kim, H. The intrinsic structure and properties of Laplace-typed integral transforms. Math. Probl. Eng. 2017, 2017, 1–8.
  3. Kreyszig, E. Advanced Engineering Mathematics; Wiley: Singapore, 2013.
  4. Watugala, G.K. Sumudu transform: A new integral transform to solve differential equations and control engineering problems. Integr. Educ. 1993, 24, 35–43.
  5. Elzaki, T.M.; Ezaki, S.M.; Hilal, E.M.A. Elzaki and Sumudu transforms for solving some differential equations. Glob. J. Pure Appl. Math. 2012, 8, 167–173.
  6. Mohand, M.; Mahgoub, A. The new integral transform "Mohand transform". Adv. Theor. Appl. Math. 2017, 12, 113–120.
  7. Belgacem, F.B.M.; Silambarasan, R. Theory of natural transform. Math. Eng. Sci. Aerosp. 2012, 3, 105–135.
  8. Bertrand, J.; Bertrand, P.; Ovarlez, J.P. The Mellin transform. In The Transforms and Applications Handbook; Poularikas, A.D., Ed.; CRC Press: Boca Raton, FL, USA, 1996.
  9. Jhanthanam, S.; Nonlaopon, K.; Orankitjaroen, S. Generalized solutions of the third-order Cauchy–Euler equation in the space of right-sided distributions via Laplace transform. Mathematics 2019, 7, 376.
  10. Kim, H. The solution of the heat equation without boundary conditions. Dyn. Syst. Appl. 2018, 27, 653–662.
  11. Supaknaree, S.; Nonlaopon, K.; Kim, H. Further properties of Laplace-type integral transforms. Dyn. Syst. Appl. 2019, 28, 195–215.
  12. Koepf, W.; Kim, I.; Rathie, A.K. On a new class of Laplace-type integrals involving generalized hypergeometric functions. Axioms 2019, 8, 87.
  13. Sung, T.; Kim, I.; Rathie, A.K. On a new class of Eulerian's type integrals involving generalized hypergeometric functions. Aust. J. Math. Anal. Appl. 2019, 16, 1–15.
  14. Noeiaghdam, S.; Fariborzi Araghi, M.A.; Abbasbandy, S. Finding optimal convergence control parameter in the homotopy analysis method to solve integral equations based on the stochastic arithmetic. Numer. Algorithms 2019, 81.
  15. Negnevitsky, M. Artificial Intelligence; Addison-Wesley: Essex, England, 2005.
  16. Cohn, D.L. Measure Theory; Birkhäuser: Boston, MA, USA, 1980.
  17. Jang, J.; Kim, H. An application of monotone convergence theorem in PDEs and Fourier analysis. Far East J. Math. Sci. 2015, 98, 665–669.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
