Article

Analytic Study of Complex Fractional Tsallis’ Entropy with Applications in CNNs

by Rabha W. Ibrahim 1,*,† and Maslina Darus 2,†
1 Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
2 School of Mathematical Sciences, Faculty of Sciences and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
* Author to whom correspondence should be addressed.
† Both authors contributed equally to this work.
Entropy 2018, 20(10), 722; https://doi.org/10.3390/e20100722
Submission received: 29 July 2018 / Revised: 5 September 2018 / Accepted: 10 September 2018 / Published: 20 September 2018
(This article belongs to the Special Issue Nonadditive Entropies and Complex Systems)

Abstract

In this paper, we study Tsallis' fractional entropy (TFE) in a complex domain by applying the definition of the complex probability function. We study the upper and lower bounds of TFE based on some special functions. Moreover, applications in complex neural networks (CNNs) are illustrated to assess the accuracy of CNNs.

1. Introduction

A central quantity in information theory is entropy. Entropy measures the amount of uncertainty in the value of a random variable or the outcome of a random process. In 1988, Tsallis [1] presented the nonadditive entropy, aiming at a generalization of Boltzmann–Gibbs (BG) statistical mechanics. The purpose of this generalization is to study complex systems. Its applications appear in many fields, such as thermodynamics, chaos, artificial neural networks, image processing, complex systems, information theory, etc. (see [2,3,4,5,6,7,8,9,10,11,12,13,14,15]).
The scheme of axioms of probability theory laid down by Kolmogorov in 1933 can be extended to include the set of imaginary numbers by adding to his original five axioms; an additional three axioms were later given in [16]. Consequently, the complex probability domain is defined as the sum of the real set $S_R$, with its corresponding real probability, and the imaginary set $S_M$, with its corresponding imaginary probability. In general, the advantages of complex probability theory are the following. It adds a supplementary (imaginary) dimension to an event occurring in the real (laboratory) dimension. It represents physical quantities of complex networks in terms of currents, complex potentials and impedance. Moreover, luck and chance in $S_R$ are replaced by total determinism in the complex domain. Finally, it extends many well-known concepts of traditional probability theory, such as expectation and variance, to the complex setting with more accuracy in applications. One of the important applications of complex probability theory is in realistic quantum mechanics [17]; for example, the two-slit experiment, where a source releases a single particle, which moves to a wall with two slits and is detected at position $\chi$ on a screen placed behind the wall. The typical argument that an interference pattern on the screen implies that the particle did not pass through one slit or the other is ultimately an argument in probability theory, namely about the relation $P(\chi) = P_1(\chi) + P_2(\chi)$, where $P_1(\chi)$ and $P_2(\chi)$ are the probabilities of passage via the first and second slit, respectively, which is a critical assumption. This process leads to the use of complex probability theory.
Recently, Abou Jaoude [18] extended Shannon's information theory by using complex probability. The author calculated the magnitude of the chaotic factor, the channel capacities and the degree of knowledge in this setting. In general, complex probability provides better information for all processes compared to classical probability [19,20]. Figure 1 shows the relation between complex analysis and information theory.
Our investigation is based on the concept of complex probability, which we use to extend the idea of Tsallis' fractional entropy (TFE). The technique relies on the approximation theory of special functions of complex variables, which is useful in information theory. We introduce upper and lower bounds of TFE; sharpness is discussed as well in the sequel.

2. Results

Let A be an event in a complex domain $S_C$. The complex probability function (CPF) consists of a real and an imaginary term:
$$P_c(z) = P_r(x,y) + P_m(x,y),$$
where the argument $z = x + iy$, and $P_r$ and $P_m$ are the real probability in the real set $S_R$ and the imaginary probability in the imaginary set $S_M$, respectively. Following Axiom 7 in [16], we have:
$$P_c(z) = P_r(x,y) + i\,(1 - P_r(x,y)),$$
such that $z = x + iy$ with $|z|^2 = P_r^2 + (P_m/i)^2$ and $P_m = i\,(1 - P_r)$; hence, $P_c$ is always equal to one. Abou Jaoude et al. [19] inferred that $z \in U = \{ z \in \mathbb{C} : |z| < 1 \}$ (the open unit disk).
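As a quick numerical illustration (a minimal Python sketch under the reading above; the helper names are ours and not part of [16,19]), one may tabulate $P_c$ and the degree of knowledge $|z|^2$ for a few real probabilities $P_r$:

# Minimal sketch (illustrative helper names): CPF P_c = P_r + i(1 - P_r)
# and the degree of knowledge |z|^2 = P_r^2 + (P_m / i)^2.
def complex_probability(p_r: float) -> complex:
    p_m = 1j * (1.0 - p_r)      # imaginary probability P_m = i(1 - P_r)
    return p_r + p_m            # P_c = P_r + i(1 - P_r)

def degree_of_knowledge(p_r: float) -> float:
    p_m = 1j * (1.0 - p_r)
    return p_r ** 2 + (p_m / 1j).real ** 2   # always between 0.5 and 1

for p_r in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p_r, complex_probability(p_r), degree_of_knowledge(p_r))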
Tsallis presented an entropic formalization characterized by an index $\gamma$, which implies a non-extensive statistics. TFE ($T_\gamma$) is the basis of the so-called non-extensive statistical mechanics, which modifies the Boltzmann–Gibbs theory. Tsallis statistics has been used in various fields such as applied mathematics, physics, biology, chemistry, computer science, information theory, engineering, medicine, economics, business, geophysics, etc. Since we study the analytic properties of TFE, we focus on the continuous form. The general continuous form of this entropy is given by:
$$T_\gamma[P] = \frac{1}{\gamma - 1}\left( 1 - \int_x (P(x))^{\gamma}\, dx \right), \qquad \gamma \neq 1.$$
By applying the concept of CPF in Equation (1), we extend TFE to complex values as follows (CTFE):
$$T_\gamma[P_c] = \frac{1}{\gamma - 1}\left( 1 - \int_{S_C} P_c(z)^{\gamma}\, dz \right).$$
For the special domain $S_C = U$, we have:
$$T_\gamma[P_c]_U = \frac{1}{\gamma - 1}\left( 1 - \int_{U} P_c(z)^{\gamma}\, dz \right).$$
For the analytic study, we shall use the definition:
$$T_\gamma(z) := (\gamma - 1)\, T_\gamma[P_c]_U = 1 - \int_0^z P_c(w)^{\gamma}\, dw, \qquad z \in U,$$
where $P_c$ is analytic in $U$, having the form:
$$P_c(z) = \sum_{n=0}^{\infty} p_n z^n, \qquad z \in U.$$
It is clear that $T_\gamma(0) = 1$ and $\Re(T_\gamma) > 0$.
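For intuition, the definition in Equation (4) can be evaluated numerically by integrating $P_c(w)^{\gamma}$ along the segment from 0 to $z$. The following sketch (our own illustration; the particular analytic $P_c$ is an arbitrary choice, not one prescribed by the paper) uses a simple trapezoidal rule:

import numpy as np

def T(gamma, z, P_c, m=2000):
    # T_gamma(z) = 1 - integral_0^z P_c(w)^gamma dw, taken along the segment [0, z]
    t = np.linspace(0.0, 1.0, m)
    w = t * z                                    # parametrize the segment, dw = z dt
    v = P_c(w) ** gamma
    integral = z * np.sum((v[:-1] + v[1:]) / 2.0) * (t[1] - t[0])   # trapezoidal rule
    return 1.0 - integral

# An arbitrary analytic P_c on U, chosen only for the demonstration
P_c = lambda w: 1.0 - 0.5 * w + 0.25 * w ** 2

print(T(2.0, 0.3 + 0.2j, P_c))   # a point inside the open unit disk
print(T(2.0, 0.0 + 0.0j, P_c))   # T_gamma(0) = 1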
TFE has been maximized by using different techniques depending on its parameter $\gamma$. This problem was discussed in [1,2] for a real power index $\gamma$ and in [21] for a complex power index. The authors showed that the Tsallis distribution preserves its fractional power formula, decorated with some specific log-periodic oscillations (convergence dynamics of z-logistic maps). As a result, the authors introduced a complex measure of the thermal bath heat capacity $C = 1/(\gamma - 1)$. Thus, in general, the heat capacity becomes complex as well. In this work, CTFE approximates some special functions in a complex domain. These functions are popular in various applications.
Next, we approximate Equation (4) for some special functions. The advantages of the approximation are the following. First, for recognizing target functions, the approximation technique studies how certain known functions (for example, special functions) can be approximated by a definite class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.). Second, the target function, call it $\Psi$, may be unknown; instead of an explicit formula, only a set of points of the form $(x, \Psi(x))$ is provided. Depending on the structure of the domain and codomain of $\Psi$, several methods for approximating $\Psi$ may be applicable. For example, if $\Psi$ is an operation on the complex numbers, the techniques of geometric function theory can be used.

2.1. Bernoulli Function $[z/(e^z - 1)]^{\gamma}$

Mocanu [22] showed that the function $z/(e^z - 1)$ is convex in $U$ (see Figure 2).
The function is not convex when $\gamma \geq 2$ (see Figure 3).
Series expansions at $z = 0$ for $\gamma = 2, 3, 4$ are given as follows:
$$T_2(z) = 1 - z + \frac{5 z^2}{12} - \frac{z^3}{12} + \frac{z^4}{240} + O(z^5)$$
$$T_3(z) = 1 - \frac{3 z}{2} + z^2 - \frac{3 z^3}{8} + \frac{19 z^4}{240} + O(z^5)$$
$$T_4(z) = 1 - 2 z + \frac{11 z^2}{6} - z^3 + \frac{251 z^4}{720} + O(z^5)$$
Moreover, when $0 < \gamma < 1$, we have:
$$T_{0.5}(z) = 1 - \frac{z}{4} + \frac{z^2}{96} + \frac{z^3}{384} - \frac{z^4}{10240} + \cdots$$
For $\varphi(z) = \sum_n \varphi_n z^n$ and $\upsilon(z) = \sum_n \upsilon_n z^n$ with $\upsilon_n \geq 0$ for all $n \geq 0$, we write $\varphi \ll \upsilon$ if and only if $|\varphi_n| \leq \upsilon_n$ for all $n$. This concept is called coefficient majorization.
We have the following properties (upper bounds):
Proposition 1.
For CTFE approximated by the Bernoulli function,
$$T_\gamma(z) \ll \left( \frac{1+z}{1-z} \right)^{\gamma}, \qquad \gamma > 0, \; \gamma \neq 1.$$
Proof. 
Let:
$$\psi(z, \gamma) = \left( \frac{1+z}{1-z} \right)^{\gamma}, \qquad z \in U, \; \gamma \neq 1.$$
Then, we obtain:
$$\psi(z, 2) = 1 + \sum_{n=1}^{\infty} 4n\, z^n = 1 + 4z + 8z^2 + 12z^3 + 16z^4 + 20z^5 + \cdots$$
$$\psi(z, 3) = 1 + \sum_{n=1}^{\infty} (2 + 4n^2)\, z^n = 1 + 6z + 18z^2 + 38z^3 + \cdots$$
$$\psi(z, 4) = 1 + \sum_{n=1}^{\infty} \frac{8n(2 + n^2)}{3}\, z^n = 1 + 8z + 32z^2 + 88z^3 + \cdots$$
Furthermore, for 0 < γ < 1 , we have:
$$\psi(z, 0.5) = 1 + z + \frac{z^2}{2} + \frac{z^3}{2} + \frac{3 z^4}{8} + \frac{3 z^5}{8} + \cdots$$
Comparing Equation (5) and Equation (6), we conclude that $T_\gamma(z)$ is majorized by the function $\left(\frac{1+z}{1-z}\right)^{\gamma}$ for all $\gamma \neq 1$. ☐
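The coefficient comparison in the proof can be checked symbolically for the first few terms; the following sketch (not part of the original argument) expands the Bernoulli kernel $[z/(e^z - 1)]^{\gamma}$ and $\psi(z, \gamma)$ with sympy and tests the majorization:

import sympy as sp

z = sp.symbols('z')
N = 5   # number of coefficients to compare

def coeffs(expr, n=N):
    s = sp.series(expr, z, 0, n).removeO()
    return [s.coeff(z, k) for k in range(n)]

for gamma in (2, 3, 4, sp.Rational(1, 2)):
    t_c = coeffs((z / (sp.exp(z) - 1)) ** gamma)      # Bernoulli-kernel expansion
    psi_c = coeffs(((1 + z) / (1 - z)) ** gamma)      # psi(z, gamma)
    print(gamma, all(abs(a) <= b for a, b in zip(t_c, psi_c)))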
Proposition 2.
For CTFE approximated by the Bernoulli function, there is a probability measure $\mu$ on $U^2$, for all $\gamma > 1$.
Proof. 
Let t , τ U ; then, we have:
$$\left( \frac{1+tz}{1+\tau z} \right)^{\gamma} = \frac{(1+tz)^{\gamma}}{1+\tau z} \cdot \frac{1}{(1+\tau z)^{\gamma - 1}} \ll \frac{(1+z)^{\gamma}}{1-z} \cdot \frac{1}{(1-z)^{\gamma - 1}} = \left( \frac{1+z}{1-z} \right)^{\gamma}, \qquad \gamma > 1.$$
In view of Theorem 1.11 in [23], the function $\left(\frac{1+tz}{1+\tau z}\right)^{\gamma}$ admits a probability measure $\mu$ on $U^2$ satisfying:
$$f(z) = \int_{U^2} \left( \frac{1+tz}{1+\tau z} \right)^{\gamma} d\mu(t, \tau), \qquad z \in U.$$
Then, by virtue of Proposition 1, there is a constant λ (diffusion constant) such that:
$$\int_{U^2} \left( \frac{1+tz}{1+\tau z} \right)^{\gamma} d\mu(t, \tau) = \lambda \int_{U^2} \left( \frac{tz}{e^{\tau z} - 1} \right)^{\gamma} d\mu(t, \tau), \qquad z \in U.$$
This completes the proof. ☐

2.2. Gaussian Function $\Phi(a, c; z)$

The function $\Phi(a, c; z)$ is defined by the series:
$$\Phi(a, c; z) = \frac{\Gamma(c)}{\Gamma(a)} \sum_{n=0}^{\infty} \frac{\Gamma(a+n)}{\Gamma(c+n)} \frac{z^n}{n!}.$$
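The series converges rapidly inside the unit disk and can be evaluated by truncation, as in the following sketch (an illustration on our part, using the Pochhammer form $(a)_n/(c)_n$, which is equivalent to the Gamma-function quotient above):

import cmath

def Phi(a: float, c: float, z: complex, terms: int = 60) -> complex:
    # Phi(a, c; z) = sum_{n >= 0} (a)_n / (c)_n * z^n / n!, truncated after `terms` terms
    total = 0j
    term = 1 + 0j
    for n in range(terms):
        total += term
        term *= (a + n) / (c + n) * z / (n + 1)
    return total

z0 = 0.3 + 0.2j
print(Phi(2.0, 2.0, z0), cmath.exp(z0))   # the special case Phi(a, a; z) = e^z
print(Phi(2.0, 3.0, z0))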
A special case of this function is $\Phi(a, a; z) = e^z$. We consider CTFE approximated by $e^{\gamma z}$. Clearly, we have the following results:
Proposition 3.
For CTFE approximated by $e^{\gamma z}$:
$$T_\gamma(z) \ll \Phi(a, c; z),$$
$\gamma > 0$, $\gamma \neq 1$, $a > 1$, $c > 1$.
Proposition 4.
For CTFE approximated by $e^{\gamma z}$, there is a probability measure $\mu$ on $[0, 1]$.
Proof. 
In view of Equation (1.2-8) in [24], there is a probability measure on [0, 1] such that:
$$\Phi(a, c; z) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c-a)} \int_0^1 \tau^{a-1} (1-\tau)^{c-a-1} e^{\tau z}\, d\tau = \int_0^1 e^{\tau z}\, d\mu(\tau),$$
$$d\mu(\tau) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c-a)}\, \tau^{a-1} (1-\tau)^{c-a-1}\, d\tau.$$
By Proposition 3, we have the desired assertion. ☐
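The integral representation can be verified numerically: the weight $d\mu(\tau)$ is a Beta$(a, c-a)$ density, so $\Phi(a, c; z)$ is the expectation of $e^{\tau z}$ under that measure. A short check (our sketch, repeating the truncated series from the previous example) could look as follows:

import cmath
from math import gamma as Gamma

def phi_series(a, c, z, terms=60):
    # truncated series for Phi(a, c; z), as in the previous sketch
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= (a + n) / (c + n) * z / (n + 1)
    return total

def phi_integral(a, c, z, m=20000):
    # midpoint rule for Gamma(c)/(Gamma(a) Gamma(c-a)) * int_0^1 tau^(a-1) (1-tau)^(c-a-1) e^(tau z) dtau
    const = Gamma(c) / (Gamma(a) * Gamma(c - a))
    h = 1.0 / m
    total = 0j
    for k in range(m):
        tau = (k + 0.5) * h
        total += tau ** (a - 1) * (1 - tau) ** (c - a - 1) * cmath.exp(tau * z)
    return const * h * total

a, c, z0 = 2.0, 3.5, 0.4 + 0.3j
print(phi_series(a, c, z0))
print(phi_integral(a, c, z0))   # should agree with the series value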

2.3. Fractional Sigmoid Function (FSF)

CTFE can be approximated by the FSF. In our investigation, we focus on a form that is analytic in $U$. We suggest the function (see Figure 4):
$$T_\gamma(z) = \frac{2}{1 + e^{-\gamma z}}, \qquad \gamma \neq 1, \; z \in U.$$
The expansions of CTFE are given as follows:
$$T_2(z) = 1 + z - \frac{z^3}{3} + \frac{2 z^5}{15} - \frac{17 z^7}{315} + O(z^9)$$
$$T_3(z) = 1 + \frac{3 z}{2} - \frac{9 z^3}{8} + \frac{81 z^5}{80} - \frac{4131 z^7}{4480} + O(z^9)$$
$$T_4(z) = 1 + 2 z - \frac{8 z^3}{3} + \frac{64 z^5}{15} - \frac{2176 z^7}{315} + O(z^9)$$
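The expansions above can be reproduced symbolically; a small sympy check (not part of the original derivation) is:

import sympy as sp

z = sp.symbols('z')
for gamma in (2, 3, 4):
    f = 2 / (1 + sp.exp(-gamma * z))      # the fractional sigmoid form of CTFE
    print(gamma, sp.series(f, z, 0, 9))   # matches the expansions listed above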
For suitable values of $a$ and $c$, CTFE approximated by the FSF can be majorized by $\Phi(a, c; z)$.

3. Complex-Valued Neural Networks

CNNs are a necessary extension of the analysis of real-valued neural networks. CNNs are networks that utilize complex-valued variables and parameters and thus deal directly with complex-valued information. They are very well matched to wave phenomena, and they are suitable for procedures connected with complex amplitudes [25]. They have been used for a long list of applications, essentially in learning tasks, loss functions, cost functions, utility functions and combinatorial optimization.
In CNNs, the neurons in each layer are organized as a three-dimensional array rather than as a vector, as in ANNs (artificial neural networks). The first two dimensions are called spatial, and the third is a partition into channels. The CNN system follows three principles characteristic of natural systems: locality, sharing and pooling.
Locality is the requirement that neurons depend only on their neighbors, rather than on far-away neurons. Sharing is the constraint that various neurons should undergo the same processing. Requiring that an affine layer satisfy locality and sharing results in a convolution layer. Pooling is used to induce invariance to small translations. A pooling layer does so by splitting each input channel into patches and replacing each patch with a single representative value in the output layer.
Suppose the CNN consists of $n$ fully connected neurons in a Hopfield-like net. The output is given by a complex number for each neuron:
$$Z = \{ z_1, \ldots, z_n \} \subset U.$$
Thus, the network state (the information of the net) $I_\gamma(z_k)$, $k = 1, \ldots, n$, is a complex vector. In this work, we shall use the total information, which is given by the relation:
$$I_\gamma(z) = \sum_{k=1}^{n} \frac{T_\gamma(z_k)}{\gamma - 1}, \qquad \gamma \neq 1, \; z \in Z,$$
where $T_\gamma$ is approximated by Equation (9). Therefore, a large amount of information can be obtained from both the theoretical study and numerical computations of $T_\gamma(z)$. The stability of Equation (10) is measured by the energy equation:
$$E_\gamma = \frac{I_\gamma(z)\, \overline{I}_\gamma(z)}{n},$$
where $\overline{I}_\gamma(z)$ is the conjugate of $I_\gamma(z)$. The energy provides a tool for studying the dynamics of CNNs. Figure 5 shows the steps of finding the energy. The minimum energy is bounded by the value $\rho$, which is set during the training of the CNN.
It has been shown by experiments, for a CNN of four neurons, that the minimum energy satisfying Equation (11) is attained for the following output on the boundary of $U$:
$$Z = \{ i, -i, 1, -1 \}.$$
The energy $E_\gamma$ is equal to one for all values $\gamma > 2$, while the energy increases for outcomes inside the unit disk $U$. For example, the output set:
$$Z = \left\{ \frac{1+i}{2}, \frac{1-i}{2}, \frac{i}{2}, -\frac{i}{2} \right\}$$
has energy $E_\gamma > 1$ for different values of $\gamma$.
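A compact way to carry out the algorithm of Figure 5 is sketched below (our own Python illustration of Equations (9)–(11); it reproduces the qualitative behavior of the two output sets above, with small differences from the worked numbers due to rounding):

import cmath

def T(gamma: float, z: complex) -> complex:
    # CTFE approximated by the fractional sigmoid function (Equation (9))
    return 2.0 / (1.0 + cmath.exp(-gamma * z))

def total_information(gamma: float, Z) -> complex:
    # Equation (10): I_gamma = sum_k T_gamma(z_k) / (gamma - 1)
    return sum(T(gamma, zk) for zk in Z) / (gamma - 1.0)

def energy(gamma: float, Z) -> float:
    # Equation (11): E_gamma = I_gamma * conj(I_gamma) / n
    I = total_information(gamma, Z)
    return (I * I.conjugate()).real / len(Z)

Z_boundary = [1j, -1j, 1.0 + 0j, -1.0 + 0j]                 # outputs on the boundary of U
Z_interior = [(1 + 1j) / 2, (1 - 1j) / 2, 1j / 2, -1j / 2]  # outputs inside U

print(energy(3.0, Z_boundary))   # = 1, so training can stop at rho = 1
print(energy(3.0, Z_interior))   # > 1, so the CNN needs more training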

Numerical Examples

Let $Z = \{ i, -i, 1, -1 \}$ be the outcome set of the CNN. To apply our algorithm, we pursue the following steps:
Step 1. Calculate $T_\gamma$, $\gamma > 2$, from Equation (8); for $\gamma = 3$, we have:
$$T_3(i) = \frac{2}{1 + e^{-3i}} \approx 1 + 14.1\, i, \quad T_3(-i) = \frac{2}{1 + e^{3i}} \approx 1 - 14.1\, i, \quad T_3(1) = \frac{2}{1 + e^{-3}} \approx 1.9, \quad T_3(-1) = \frac{2}{1 + e^{3}} \approx 0.094;$$
Step 2. Compute the total information by using Equation (10):
$$I_3(z) = \sum_{k=1}^{4} \frac{T_3(z_k)}{3 - 1} = 2.$$
Step 3. Estimate the energy of the CNN by applying Equation (11):
$$E_3 = \frac{I_3(z)\, \overline{I}_3(z)}{4} = 1.$$
Remark 1.
One can show that, for all $\gamma > 2$, the estimated energy for the set $Z = \{ i, -i, 1, -1 \}$ is equal to one. The algorithm stops at the value $\rho$, which was given previously. In our example, we consider $\rho = 1$ for all $z \in \overline{U}$.
Moreover, to estimate the energy of the outcome set $Z = \left\{ \frac{1+i}{2}, \frac{1-i}{2}, \frac{i}{2}, -\frac{i}{2} \right\}$, we follow the above steps:
$$\sum_{k=1}^{4} T_3(z_k) \approx 5.6, \qquad I_3(z) \approx 2.8, \qquad E_3 \approx 1.96.$$
Comparing with ρ = 1 , the CNN needs more training.
Remark 2.
Comparing with the complex Shannon entropy [18], we obtain the following values for the set $Z = \{ i, -i, 1, -1 \}$:
$$H(i) \approx 1.0010005 - 0.999999499\, i, \quad H(-i) \approx 1.0010005 + 0.999999499\, i, \quad H(1) = 0, \quad H(-1) = 1.$$
This implies the total information $I(z) \approx 3$. Consequently, we have $E = 9/4 = 2.25 > 1$.

4. Discussion

  • Equation (10) refers to the amount of information in the complex system, which is given by the CNN. The advantage is that the CNN does not depend on the number of neurons to obtain full training of the system (see [11,12,13,14,15,26]). Furthermore, the complex-valued output converges to the stability state faster than the real-valued one. All the complex-valued outputs are given in the open unit disk $U$, where $|z| < 1$ (see [16]). In this case, we may use the properties of geometric function theory (GFT). For example, the sigmoid function of a complex variable has been studied widely in view of GFT. The convexity and other geometric representations of this function have been studied by many authors (see [27]).
  • The parameter $\gamma$ in $I_\gamma$ plays the role of the simplest non-trivial perturbation of the unperturbed complex system; the complex system (CNN) is stable when obvious necessary and sufficient conditions are recognized for a small-divisor problem.
  • The output may induce a complex-valued function generated by the set $Z$. In this situation, the stability comes from the first derivative of $I_\gamma(z)$ with respect to $z$. This type of stability is called Lyapunov stability. At a fixed point $z_0$:
    $$I'_\gamma(z_0) = \frac{d}{dz} I_\gamma(z_0) = 2 z_0.$$
    At a periodic point $z_0$ of period $\ell$, the first derivative of the iterated function,
    $$(I^{\ell}_\gamma)'(z_0) = \frac{d}{dz} I^{\ell}_\gamma(z_0) = \prod_{i=0}^{\ell - 1} I'_\gamma(z_i) = 2^{\ell} \prod_{i=0}^{\ell - 1} z_i = \lambda,$$
    is usually denoted by $\lambda$ and called the multiplier or the Lyapunov characteristic number. It applies to checking the stability of periodic points, as well as fixed points ($\lambda = 0$).
  • At a non-periodic point, the derivative $z'_n$ can be iterated by:
    $$z'_0 = 1; \qquad z'_n = 2 \cdot z_{n-1} \cdot z'_{n-1}.$$
  • The above derivative can be replaced by any derivative for a complex variable $z \in \mathbb{C}$, such as the Schwarzian derivative. We suggest this as future work.
  • Derivative with respect to $\gamma$ (parametric derivative): this type of derivative is used in the distance estimation method. In this case, the CNN has one output in the set $Z$, and it is fixed. Therefore, we suggest using the parameter plane to collect information. This works as follows. On the parameter plane, $\gamma$ is a variable and $z_0 = 0$ is constant. The first derivative of $I^n_\gamma(z_0)$ with respect to $\gamma$ is given by the relation:
    $$z'_n = \frac{d}{d\gamma} I^n_\gamma(z_0).$$
    This derivative can be defined by the following iteration:
    $$z'_0 = \frac{d}{d\gamma} I^0_\gamma(z_0) = 1,$$
    and then updating at every consecutive step:
    $$z'_{n+1} = \frac{d}{d\gamma} I^{n+1}_\gamma(z_0) = 2 \cdot I^n_\gamma(z) \cdot \frac{d}{d\gamma} I^n_\gamma(z_0) + 1 = 2 \cdot z_n \cdot z'_n + 1.$$
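The two iterations above can be written down directly; the following sketch follows the stated recurrences literally and assumes a quadratic-type map $I_\gamma(z) = z^2 + \gamma$ (our reading, suggested by the derivative $2z$ and the "+1" term, but not stated explicitly in the text):

# Sketch of the two iterations above, assuming a quadratic-type map
# I_gamma(z) = z**2 + gamma (an illustrative reading, not stated in the text).
def iterate(gamma: complex, z0: complex, steps: int = 10):
    z, dz, dgamma = z0, 1 + 0j, 1 + 0j          # z'_0 = 1 in both iterations
    for _ in range(steps):
        dz = 2 * z * dz                         # z'_n = 2 * z_{n-1} * z'_{n-1}
        dgamma = 2 * z * dgamma + 1             # z'_{n+1} = 2 * z_n * z'_n + 1
        z = z * z + gamma                       # the assumed map itself
    return z, dz, dgamma

print(iterate(0.25 + 0.1j, 0j))            # parameter-plane case: z_0 = 0, gamma varies
print(iterate(0.25 + 0.1j, 0.4 + 0.3j))    # a nonzero seed, for the multiplier iteration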

5. Conclusions and Future Research

In the present paper, we applied the model of complex probability to Tsallis' entropy. Hence, we established a fitting connection between the new model and the classical TFE and thereby developed the theory of information. As an application, we gave a generalization of CNNs; its result implied minimization of the energy in this complex system. Extending TFE leads to very stimulating and successful consequences and outcomes, as illustrated in this work. Therefore, we call this original and beneficial new study in applied mathematics and analytics "the theory of complex information".
It is intended that additional development of this study, such as convergence, convexity and concavity, will be carried out in subsequent work. In future research, the proposed analytic method will be elaborated further, and the complex probability model will be applied to extensive and various sets of stochastic processes.

Author Contributions

Conceptualization, R.W.I. and M.D.; methodology, R.W.I.; software, R.W.I.; validation, R.W.I. and M.D.; formal analysis, R.W.I. and M.D.; investigation, R.W.I. and M.D.; writing—original draft preparation, R.W.I.; writing—review and editing, M.D.; funding acquisition, M.D.

Funding

This research was funded by Universiti Kebangsaan Malaysia grant number GUP-2017-064.

Acknowledgments

The authors would like to express their thanks to the reviewers for their important and useful comments to improve the paper. The work here is partially supported by the Universiti Kebangsaan Malaysia grant: GUP (Geran Universiti Penyelidikan)-2017-064.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
$\gamma$	parameter
$\lambda$	diffusion constant
$z$	complex number
$P_c$	complex probability
$S_R$	real set of events
$S_M$	imaginary set of events
$P_r$	probability in the real set
$P_m$	probability in the imaginary set
$U$	the open unit disk
$|z|^2$	the degree of our knowledge of the random experiment; it is the square of the norm of $z$
$T_\gamma[P_c]$	CTFE
$\Re(T_\gamma)$	the real part of CTFE
$\Phi(a,c;z)$	Gaussian function
$\Gamma(\cdot)$	gamma function
$I_\gamma$	total information
$E_\gamma$	the energy
$\rho$	the upper bound of energy

References

  1. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  2. Tsallis, C. The nonadditive entropy Sq and its applications in physics and elsewhere: Some remarks. Entropy 2011, 13, 1765–1804.
  3. Ibrahim, R.W.; Jalab, H.A. Existence of entropy solutions for nonsymmetric fractional systems. Entropy 2014, 16, 4911–4922.
  4. Ibrahim, R.W.; Jalab, H.A. Existence of Ulam stability for iterative fractional differential equations based on fractional entropy. Entropy 2015, 17, 3172–3181.
  5. Ibrahim, R.W.; Jalab, H.A.; Gani, A. Cloud entropy management system involving a fractional power. Entropy 2015, 18, 14.
  6. Ibrahim, R.W.; Jalab, H.A.; Gani, A. Perturbation of fractional multi-agent systems in cloud entropy computing. Entropy 2016, 18, 31.
  7. Jalab, H.A.; Ibrahim, R.W.; Amr, A. Image denoising algorithm based on the convolution of fractional Tsallis entropy with the Riesz fractional derivative. Neural Comput. Appl. 2017, 28, 217–223.
  8. Ibrahim, R.W. The maximum principle of Tsallis entropy in a complex domain. Ital. J. Pure Appl. Math. 2017, 601–606.
  9. Ibrahim, R.W. On new classes of analytic functions imposed via the fractional entropy integral operator. Facta Univ. Ser. Math. Inform. 2017, 32, 293–302.
  10. Al-Shamasneh, A.A.R.; Jalab, H.A.; Palaiahnakote, S.; Obaidellah, U.H.; Ibrahim, R.W.; El-Melegy, M.T. A new local fractional entropy-based model for kidney MRI image enhancement. Entropy 2018, 20, 344.
  11. Rubio, J.D.J.; Lughofer, E.; Plamen, A.; Novoa, J.F.; Meda-Campaña, J.A. A novel algorithm for the modeling of complex processes. Kybernetika 2018, 54, 79–95.
  12. Meda, C.; Jesus, A. On the estimation and control of nonlinear systems with parametric uncertainties and noisy outputs. IEEE Access 2018, 6, 31968–31973.
  13. Rubio, J. Error convergence analysis of the SUFIN and CSUFIN. Appl. Soft Comput. 2018, in press.
  14. Meda, C.; Jesus, A. Estimation of complex systems with parametric uncertainties using a JSSF heuristically adjusted. IEEE Lat. Am. Trans. 2018, 16, 350–357.
  15. De Jesús Rubio, J.; Lughofer, E.; Meda-Campaña, J.A.; Páramo, L.A.; Novoa, J.F.; Pacheco, J. Neural network updating via argument Kalman filter for modeling of Takagi-Sugeno fuzzy models. J. Intell. Fuzzy Syst. 2018, 35, 2585–2596.
  16. Abou Jaoude, A. The paradigm of complex probability and Chebyshev's inequality. Syst. Sci. Control Eng. 2016, 4, 99–137.
  17. Youssef, S. Quantum mechanics as Bayesian complex probability theory. Mod. Phys. Lett. A 1994, 9, 2571–2586.
  18. Abou Jaoude, A. The paradigm of complex probability and Claude Shannon's information theory. Syst. Sci. Control Eng. 2017, 5, 380–425.
  19. Abou Jaoude, A.; El-Tawil, K.; Seifedine, K. Prediction in complex dimension using Kolmogorov's set of axioms. J. Math. Stat. 2010, 6, 116–124.
  20. Abou Jaoude, A. The complex probability paradigm and analytic linear prognostic for vehicle suspension systems. Am. J. Eng. Appl. Sci. 2015, 8, 147.
  21. Wilk, G.; Włodarczyk, Z. Tsallis distribution with complex nonextensivity parameter q. Phys. A Stat. Mech. Its Appl. 2014, 413, 53–58.
  22. Mocanu, P.T. Convexity of some particular functions. Studia Univ. Babes-Bolyai Math. 1984, 29, 70–73.
  23. Ruscheweyh, S. Convolutions in Geometric Function Theory; Presses de l'Université de Montréal: Montréal, QC, Canada, 1982.
  24. Miller, S.S.; Mocanu, P.T. Differential Subordinations: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2000.
  25. Kaslik, E.; Ileana, R.R. Dynamics of complex-valued fractional-order neural networks. Neural Netw. 2017, 89, 39–49.
  26. Ibrahim, R.W. The fractional differential polynomial neural network for approximation of functions. Entropy 2013, 15, 4188–4198.
  27. Ezeafulukwe, U.A.; Darus, M.; Olubunmi, A. On analytic properties of a sigmoid function. Int. J. Math. Comput. Sci. 2018, 13, 171–178.
Figure 1. The connection of the main objectives of this research.
Figure 2. Bernoulli function $z/(e^z - 1)$.
Figure 3. Bernoulli function $[z/(e^z - 1)]^2$.
Figure 4. Sigmoid function $\frac{2}{1 + e^{-\gamma z}}$, $\gamma = 2$.
Figure 5. The algorithm of using CTFE in CNNs.
