Article

Entropy of Difference: A New Tool for Measuring Complexity

Physics Department, Université Libre de Bruxelles, 50 av F. D. Roosevelt, 1050 Bruxelles, Belgium
*
Author to whom correspondence should be addressed.
Axioms 2024, 13(2), 130; https://doi.org/10.3390/axioms13020130
Submission received: 23 November 2023 / Revised: 9 February 2024 / Accepted: 10 February 2024 / Published: 19 February 2024
(This article belongs to the Section Mathematical Physics)

Abstract

We propose a new tool for estimating the complexity of a time series: the entropy of difference (ED). The method is based solely on the sign of the difference between neighboring values in a time series. This makes it possible to describe the signal as efficiently as previously proposed parameters such as permutation entropy (PE) or modified permutation entropy (mPE). Firstly, this method reduces the size of the sample necessary to estimate the parameter value; secondly, it enables the use of the Kullback–Leibler divergence to estimate the "distance" between the time series data and random signals.
PACS:
05.45.-a; 05.45.Tp; 05.45.Pq; 89.75.-k; 87.85.Ng
MSC:
65P20; 65Z05; 91B70

1. Introduction

Permutation entropy (PE), introduced by Bandt and Pompe [1], as well as its modified version [2], are both efficient tools for measuring the complexity of chaotic time series. Both methods analyze a time series $X = (x_1, x_2, \dots, x_k)$ by first choosing an embedding dimension $m$ to split the original data into a set of overlapping m-tuples, $((x_1, x_2, \dots, x_m), (x_2, x_3, \dots, x_{1+m}), \dots)$, and then replacing each m-tuple by its ordinal pattern (the positions of its values sorted in increasing order), resulting in a new symbolic representation of the time series. For example, consider the time series $X = (0.2, 0.1, 0.6, 0.4, 0.1, 0.2, 0.4, 0.8, 0.5, 1, 0.3, 0.1, \dots)$. Choosing, for example, an embedding dimension $m = 4$ splits the data into a set of 4-tuples: $X_4 = ((0.2, 0.1, 0.6, 0.4), (0.1, 0.6, 0.4, 0.1), (0.6, 0.4, 0.1, 0.2), \dots)$. The Bandt–Pompe method associates an ordinal symbol with each 4-tuple. Thus, in $(0.2, 0.1, 0.6, 0.4)$, the lowest element 0.1 is in Position 2, the second element 0.2 is in Position 1, 0.4 is in Position 4, and finally 0.6 is in Position 3, so the 4-tuple $(0.2, 0.1, 0.6, 0.4)$ is rewritten as $(2, 1, 4, 3)$. This procedure turns $X_4$ into a symbolic list: $((2, 1, 4, 3), (1, 4, 3, 2), (3, 4, 2, 1), \dots)$. Each element is then a permutation $\pi$ of the set $(1, 2, 3, 4)$. Next, the probability $p_m(\pi)$ of each permutation $\pi$ in $X_m$ is computed, and finally the PE for the embedding dimension $m$ is defined as $\mathrm{PE}_m(X) = -\sum_\pi p_m(\pi)\,\log\big(p_m(\pi)\big)$. The modified permutation entropy (mPE) deals with the cases in which equal values may appear in the m-tuples. For example, for the m-tuple $(0.1, 0.6, 0.4, 0.1)$, PE produces $(1, 4, 3, 2)$, while mPE associates $(1, 1, 3, 2)$. Both methods are widely used due to their conceptual and computational simplicity [3,4,5,6,7,8]. For random signals, PE leads to a constant probability $q_m(\pi) = 1/m!$ (for white Gaussian noise). With such a uniform reference distribution, the Kullback–Leibler (KL) divergence [9,10], $\mathrm{KL}_m(p\|q) = \sum_\pi p_m(\pi)\,\log_2\big(p_m(\pi)/q_m(\pi)\big)$, reduces to the difference between the maximal entropy $\log_2(m!)$ and the permutation entropy itself, so it does not provide a meaningful "distance" between the probability found in the signal, $p_m(\pi)$, and the probability produced by a random signal, $q_m$. Furthermore, the number $K_m$ of m-tuples is $m!$ for PE and even greater for mPE [2], thus requiring a large data sample to obtain a statistically significant estimate of $p_m$.
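For concreteness, a minimal Mathematica sketch of this procedure is given below (the helper name pe is illustrative, and log base 2 is used here for definiteness; the Bandt–Pompe definition leaves the base free):
pe[x_List, m_Integer] := Module[{symbols, p},
  symbols = Ordering /@ Partition[x, m, 1];         (* e.g. {0.2, 0.1, 0.6, 0.4} -> {2, 1, 4, 3} *)
  p = N[Tally[symbols][[All, 2]]/Length[symbols]];  (* frequency of each permutation symbol *)
  -Total[p Log2[p]]
];
pe[{0.2, 0.1, 0.6, 0.4, 0.1, 0.2, 0.4, 0.8, 0.5, 1, 0.3, 0.1}, 4]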

2. The Entropy of Difference Method

The entropy of difference (ED) method proposes to substitute the m-tuples with strings $s$ containing the signs ("+" or "−") of the differences between subsequent elements of the m-tuples. For the same $X_4 = ((0.2, 0.1, 0.6, 0.4), (0.1, 0.6, 0.4, 0.1), (0.6, 0.4, 0.1, 0.2), \dots)$, this leads to the representation ("− + −", "+ − −", "− − +", ⋯). For a given $m$, we have $2^{m-1}$ possible strings $s$, from "+ + + ⋯ +" to "− − − ⋯ −". Again, we compute, in the time series, the probability distribution $q_m(s)$ of these strings $s$ and define the entropy of difference of order $m$ as $\mathrm{ED}_m = -\sum_s q_m(s)\,\log_2 q_m(s)$. The number of elements $K_m$ to be treated for an embedding $m$ is smaller for ED than the number of permutations $\pi$ in PE or of elements in mPE (see Table 1).
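A minimal Mathematica sketch of this computation (the helper name ed is illustrative; equal neighboring values give a sign of 0 and are not handled here, see the noise trick discussed below):
ed[x_List, m_Integer] := Module[{strings, p},
  strings = Partition[Sign[Differences[x]], m - 1, 1];  (* sign strings of length m - 1 *)
  p = N[Tally[strings][[All, 2]]/Length[strings]];      (* empirical string probabilities *)
  -Total[p Log2[p]]                                     (* entropy of difference ED_m *)
];
ed[{0.2, 0.1, 0.6, 0.4, 0.1, 0.2, 0.4, 0.8, 0.5, 1, 0.3, 0.1}, 4]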
Furthermore, the probability distribution $q_m(s)$ of a string $s$ in a random signal is not constant, and it can be computed through a recursive equation. Indeed, let $P(x \le X_t \le x + dx) = p(x)\,dx$ be the probability density for the signal variable $X_t$ at time $t$, and let $F(x)$ be the corresponding cumulative distribution function, $F(x) = \int_{-\infty}^{x} p(x')\,dx'$. Assume that the signal is uncorrelated in time, so that the joint probability factorizes: $P(x_1 \le X_{t_1} \le x_1 + dx_1,\; x_2 \le X_{t_2} \le x_2 + dx_2) = P(x_1 \le X_{t_1} \le x_1 + dx_1)\, P(x_2 \le X_{t_2} \le x_2 + dx_2)$. Under these conditions, we can easily evaluate the $q_m(s)$. Take, for example, $m = 3$: we have three data $x_1, x_2, x_3$ and four possibilities. $q_3(+,+)$ is the probability of having $x_3 > x_2 > x_1$; $q_3(+,-)$ that of having $x_2 > x_1$ and $x_2 > x_3$; $q_3(-,+)$ that of having $x_1 > x_2$ and $x_3 > x_2$; and finally $q_3(-,-)$ that of having $x_1 > x_2 > x_3$. Using the Heaviside step function $\theta(x)$ ($\theta(x) = 1$ if $x \ge 0$ and $\theta(x) = 0$ if $x < 0$) and the cumulative distribution function $F(x) = P(X \le x)$, we can evaluate $q_3(+,+)$:
\[
\begin{aligned}
q_3(+,+) &= \int dx_1\, dx_2\, dx_3\; p(x_1)\, p(x_2)\, p(x_3)\, \theta(x_3 - x_2)\, \theta(x_2 - x_1) \\
&= \int dx_3\, p(x_3) \int_{-\infty}^{x_3} dx_2\, p(x_2) \int_{-\infty}^{x_2} dx_1\, p(x_1)
 = \int dx_3\, p(x_3) \int_{-\infty}^{x_3} dx_2\, p(x_2)\, F(x_2) \\
&= \int dx_3\, p(x_3)\, \frac{1}{2} F(x_3)^2 = \frac{1}{6}
\end{aligned}
\]
For $q_3(-,+)$, we need to integrate $\theta(x_1 - x_2)\,\theta(x_3 - x_2)$. Using the obvious identity $\theta(x_1 - x_2) = 1 - \theta(x_2 - x_1)$, we have
\[
\begin{aligned}
q_3(-,+) &= \int dx_1\, dx_2\, dx_3\; p(x_1)\, p(x_2)\, p(x_3)\, \theta(x_3 - x_2)\, \theta(x_1 - x_2) \\
&= \int dx_2\, dx_3\; p(x_2)\, p(x_3)\, \theta(x_3 - x_2) - q_3(+,+) \\
&= \int dx_3\, p(x_3)\, F(x_3) - \frac{1}{6} = \int dF(x_3)\, F(x_3) - \frac{1}{6} = \frac{1}{2} - \frac{1}{6} = \frac{2}{6}
\end{aligned}
\]
This result is totally independent of the probability density $p(x)$, provided that the signal is uncorrelated in time. We can proceed in the same way for any $q_m(s)$ and thus obtain a recurrence relation for $q_m(s)$ (see Appendix A); in the following equations, $x$ and $y$ are strings made of "+" and "−":
\[
\begin{aligned}
q_2(+) &= q_2(-) = \frac{1}{2} \\
q_{m+1}(\underbrace{+,+,\dots,+}_{m}) &= \frac{1}{(m+1)!} \\
q_{m+1}(-,x) &= q_m(x) - q_{m+1}(+,x) \\
q_{m+1}(x,-) &= q_m(x) - q_{m+1}(x,+) \\
q_{m+1}(x,-,y) &= q_{a+1}(x)\, q_{b+1}(y) - q_{m+1}(x,+,y), \qquad a + b + 1 = m
\end{aligned}
\]
where $a$ and $b$ are the numbers of signs in the strings $x$ and $y$, respectively.
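As a quick check, the third rule reproduces the value of $q_3(-,+)$ obtained above by direct integration: $q_3(-,+) = q_2(+) - q_3(+,+) = \tfrac{1}{2} - \tfrac{1}{6} = \tfrac{2}{6}$.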
The recursion leads to a complex probability distribution for $q_m(s)$. For example, for $m = 9$, we have $2^8 = 256$ strings, with the highest probability for the string "+ − + − + − + −" (and its symmetric "− + − + − + − +"): $q_9(\max) = \frac{62}{2835} \approx 0.02187$ (see Figure 1). These probabilities $q_m(s)$ can then be used to determine the KL-divergence between the time series probability $p_m(s)$ and that of a random uncorrelated signal.
To each string $s$, we can associate an integer, its binary representation, through the substitutions "−" → 0 and "+" → 1. Therefore, for $m = 4$, we have "− − −" = 0, "− − +" = 1, "− + −" = 2, "− + +" = 3, and so on, up to "+ + +" = 7 (see Table 2 and Table 3).
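This labeling can be computed, for instance, as follows (stringValue is an illustrative name):
stringValue[s_List] := FromDigits[s /. {"+" -> 1, "-" -> 0}, 2];  (* "+" -> 1, "-" -> 0 *)
stringValue[{"-", "+", "+"}]   (* 3 *)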
The recurrence gives some specific $q_m$ in closed form. To simplify the notation, we write $a^+$ for a run of $a$ successive "+". For example, the rules for $q_{m+1}(x,-)$ and $q_{m+1}(-,x)$ give
\[
\begin{aligned}
q_{m+1}(a^+,-) &= q_m(a^+) - q_{m+1}(a^+,+) = \frac{1}{m!} - \frac{1}{(m+1)!} \\
q_{m+1}(a^+,-) &= q_{m+1}(-,a^+) = \frac{m}{(m+1)!}
\end{aligned}
\]
then
\[
\begin{aligned}
q_{m+1}(a^+,-,b^+) &= q_{a+1}(a^+)\, q_{b+1}(b^+) - q_{m+1}(a^+,+,b^+) \\
&= \frac{1}{(a+1)!\,(b+1)!} - \frac{1}{(m+1)!}
 = \frac{1}{(m+1)!}\left[\binom{m+1}{a+1} - 1\right], \qquad b + a + 1 = m
\end{aligned}
\]
We can also write
\[
\begin{aligned}
q_{m+1}(a^+,-,b^+,-,c^+) &= q_{a+1}(a^+)\, q_{b+c+2}(b^+,-,c^+) - q_{m+1}(a^+,+,b^+,-,c^+), \qquad a + b + c + 2 = m \\
&= \frac{1}{(a+1)!}\,\frac{1}{(m-a)!}\left[\binom{m-a}{b+1} - 1\right] - \frac{1}{(m+1)!}\left[\binom{m+1}{m-c} - 1\right] \\
&= \frac{1}{(a+1)!\,(b+1)!\,(c+1)!} - \frac{1}{(c+1)!\,(a+b+2)!} - \frac{1}{(a+1)!\,(b+c+2)!} + \frac{1}{(a+b+c+3)!}
\end{aligned}
\]
This equation is also valid when $b = 0$, and thus for $q_{m+1}(a^+,-,-,c^+)$ (with $m = a + c + 2$), or when $c = 0$. We can continue in this way and determine the general values of $q_{m+1}(a^+,-,b^+,-,c^+,-,d^+)$ and so on.
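For example, for $a = b = c = 1$ (so that $m = 5$), the formula gives $q_6(+,-,+,-,+) = \tfrac{1}{8} - \tfrac{1}{48} - \tfrac{1}{48} + \tfrac{1}{720} = \tfrac{61}{720}$.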
In the case where the data are integers, we can avoid the situation in which two successive data are equal ($x_i = x_{i+1}$) by adding a small amount of random noise. For example, we take the first $10^4$ decimals of $\pi$ (adding a small amount of noise $\epsilon \in [-0.01, 0.01]$), and we obtain the following (see Table 4 and Table 5):
Despite the complexity of $q_m(s)$, the Shannon entropy for a random signal, $\mathrm{ED}_m = -\sum_s q_m(s)\,\log_2 q_m(s)$, increases linearly with $m$ (see Figure 2): $\mathrm{ED}_m \approx -0.799574 + 0.905206\, m$. If the $2^{m-1}$ strings were equiprobable, this would give $-\log_2(2) + m \log_2(2) = m - 1$.
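The entropy of the exact $q_m$ distribution can be obtained with a short Mathematica sketch that enumerates all $2^{m-1}$ strings through the Appendix A recursion (edRandom is an illustrative name); for example, it reproduces the exact $\mathrm{ED}_4 \approx 2.82501$ of Table 3:
P["+"] = P["-"] = 1/2;
P["-", x__] := P[x] - P["+", x];
P[x__, "-"] := P[x] - P[x, "+"];
P[x__, "-", y__] := P[x] P[y] - P[x, "+", y];
P[x__] := 1/(StringLength[StringJoin[x]] + 1)!;
edRandom[m_Integer] := -Total[With[{q = P @@ #}, q Log2[q]] & /@ Tuples[{"+", "-"}, m - 1]];
N[edRandom[4]]   (* 2.82501, the exact ED_4 of Table 3 *)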

3. Periodic Signal

We will now see what happens with period-3 data $X = (x_1, x_2, x_3, x_1, x_2, x_3, \dots)$. To evaluate $q_m$, we only have three types of m-tuples. For example, for $q_2$, the 2-tuples are $(x_1, x_2)$, $(x_2, x_3)$, and $(x_3, x_1)$. We have only two possible strings, "+" or "−", so the probabilities must be $q_2(+) = 2/3$, $q_2(-) = 1/3$ or $q_2(+) = 1/3$, $q_2(-) = 2/3$. For $q_3$, again we have only three types of 3-tuples: $(x_1, x_2, x_3)$, $(x_2, x_3, x_1)$, and $(x_3, x_1, x_2)$. We have $2^2$ possible strings: $(+,+)$, $(+,-)$, $(-,+)$, and $(-,-)$. The consistency of the inequalities between $x_1$, $x_2$, and $x_3$ reduces the number of possible strings to three. For example, if $(x_1, x_2, x_3)$ gives $(+,+)$, then $(x_2, x_3, x_1)$ must give $(+,-)$, and $(x_3, x_1, x_2)$ must give $(-,+)$. Due to the period 3, each of these strings appears with probability $1/3$. To evaluate $q_4$, we again have only three types of 4-tuples, $(x_1, x_2, x_3, x_1)$, $(x_2, x_3, x_1, x_2)$, and $(x_3, x_1, x_2, x_3)$, and again each appears with probability $1/3$ in the data. This reasoning can be generalized to a signal of period $p$: $q_p = 1/p$; consequently, $\mathrm{ED}_p = \log_2(p)$, and this remains constant for $m \ge p$. Obviously, since we are only using the differences between the $x_i$'s, the periodicity of the signs of $x_{i+1} - x_i$ may be smaller than the periodicity $p$ of the data, so $\mathrm{ED}_p \le \log_2(p)$.
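A quick numerical illustration of this result (the period-3 values 1, 3, 2 are an arbitrary choice):
x = Flatten[Table[{1., 3., 2.}, {400}]];            (* period-3 data *)
strings = Partition[Sign[Differences[x]], 2, 1];    (* strings of length 2 (m = 3) *)
p = N[Tally[strings][[All, 2]]/Length[strings]];
-Total[p Log2[p]]                                   (* ≈ log2(3) ≈ 1.585 *)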

4. Chaotic Logistic Map Example

Let us illustrate the use of ED on the well-known logistic map [11], $\mathrm{Lo}(x, \lambda)$, driven by the parameter $\lambda$:
\[
x_{n+1} = \mathrm{Lo}(x_n, \lambda) = \lambda\, x_n (1 - x_n)
\]
It is obvious that, for a range of values of $\lambda$ where the time series reaches periodic behavior (any cyclic oscillation between $n$ different values), the ED will remain constant. The evaluation of the ED can thus be used as a new complexity parameter to characterize the behavior of the time series (see Figure 3).
For λ = 4 , we know that the data are randomly distributed with a probability density given by [12]
\[
p_{\mathrm{Lo}}(x) = \frac{1}{\pi \sqrt{x\,(1 - x)}}
\]
However, the logistic map produces correlations in the data, so we expect a deviation from the uncorrelated random q m .
We can then compute exactly the ED for an m-embedding, as well as the KL-divergence from a random signal. For example, for $m = 2$, we can determine $q_2^{\mathrm{Lo}}(+)$ and $q_2^{\mathrm{Lo}}(-)$ by solving the inequalities $x < \mathrm{Lo}(x)$ and $x > \mathrm{Lo}(x)$, respectively, which give $0 < x < 3/4$ and $3/4 < x < 1$. Then,
\[
q_2^{\mathrm{Lo}}(+) = \int_0^{3/4} p_{\mathrm{Lo}}(x)\,dx = \frac{2}{3}, \qquad
q_2^{\mathrm{Lo}}(-) = \int_{3/4}^{1} p_{\mathrm{Lo}}(x)\,dx = \frac{1}{3}
\]
In this case, the logistic map produces a signal that contains twice as many increasing pairs ("+") as decreasing pairs ("−"). Thus,
\[
\mathrm{ED}_2 = -\left(\frac{2}{3}\log_2\frac{2}{3} + \frac{1}{3}\log_2\frac{1}{3}\right) = \log_2\frac{3}{2^{2/3}} \approx 0.918,
\qquad
\mathrm{KL}_2 = \frac{1}{3}\log_2\frac{32}{27} \approx 0.082
\]
For m = 3 , we can perform the same calculation. We have, respectively,
\[
\begin{aligned}
x_1 < x_2 < x_3 \;\;(+,+)&: \quad 0 < x < \tfrac{1}{4} \\
x_1 < x_3 < x_2 \;\;(+,-)&: \quad \tfrac{1}{4} < x < \tfrac{1}{8}\big(5 - \sqrt{5}\big) \\
x_3 < x_1 < x_2 \;\;(+,-)&: \quad \tfrac{1}{8}\big(5 - \sqrt{5}\big) < x < \tfrac{3}{4} \\
x_2 < x_1 < x_3 \;\;(-,+)&: \quad \tfrac{3}{4} < x < \tfrac{1}{8}\big(5 + \sqrt{5}\big) \\
x_2 < x_3 < x_1 \;\;(-,+)&: \quad \tfrac{1}{8}\big(5 + \sqrt{5}\big) < x < 1
\end{aligned}
\]
Graphically we have:
\[
q_3^{\mathrm{Lo}}(+,+) = q_3^{\mathrm{Lo}}(+,-) = q_3^{\mathrm{Lo}}(-,+) = \frac{1}{3}, \qquad q_3^{\mathrm{Lo}}(-,-) = 0,
\qquad
\mathrm{ED}_3 = \log_2 3 \approx 1.58, \qquad \mathrm{KL}_3 = \frac{1}{3} \approx 0.33
\]
Effectively, the logistic map with $\lambda = 4$ forbids the string "− −", where $x_1 > x_2 > x_3$. For strings of length 3 ($m = 4$), we have
\[
\begin{aligned}
&q_4^{\mathrm{Lo}}(+,+,+) = q_4^{\mathrm{Lo}}(+,+,-) = q_4^{\mathrm{Lo}}(-,+,+) = q_4^{\mathrm{Lo}}(-,+,-) = \frac{1}{6}, \qquad q_4^{\mathrm{Lo}}(+,-,+) = \frac{2}{6} \\
&\mathrm{ED}_4 = \log_2 108^{1/3} \approx 2.25, \qquad \mathrm{KL}_4 = \log_2\!\left(\frac{16{,}384}{1125}\right)^{1/6} \approx 0.64
\end{aligned}
\]
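These exact values can be checked numerically with a short simulation (the seed 0.3 and the orbit length are arbitrary choices):
orbit = Drop[NestList[4.0 # (1 - #) &, 0.3, 10500], 500];   (* logistic map at λ = 4, transient discarded *)
strings = Partition[Sign[Differences[orbit]], 3, 1];        (* strings of length 3 (m = 4) *)
p = N[Tally[strings][[All, 2]]/Length[strings]];
-Total[p Log2[p]]                                           (* ≈ 2.25, the exact ED_4 above *)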
Plotting the probability of difference $q_m(s)$ for a given string length $m$ versus $s$, the string binary value (where "+" → 1 and "−" → 0), gives us the "spectrum of difference" of the distribution $q$ (see Figure 4 and Figure 5).

5. KL_m(p|q) Divergences Versus m on Real Data and on Maps

The manner in which $\mathrm{KL}_m(p|q)$ evolves with $m$ is another parameter of the complexity measure. $\mathrm{KL}_m(p|q)$ measures the loss of information when the random distribution $q_m$ is used to predict the distribution $p_m$. Increasing $m$ takes more bits of information about the signal into account, and the behavior versus $m$ shows how the data diverge from a random distribution.
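A minimal sketch of this estimate for the entropy-of-difference strings (klED is an illustrative name; the empirical $p_m$ comes from the string counts of the series, and the reference $q_m$ uses the Appendix A recursion, whose rules P[...] are assumed to be already defined):
klED[x_List, m_Integer] := Module[{strings, n, tally},
  strings = Partition[If[# > 0, "+", "-"] & /@ Differences[x], m - 1, 1];
  n = Length[strings];
  tally = Tally[strings];                                        (* {string, count} pairs *)
  N[Total[(#[[2]]/n) Log2[(#[[2]]/n)/(P @@ #[[1]])] & /@ tally]]
];
klED[Drop[NestList[4.0 # (1 - #) &, 0.3, 10500], 500], 4]        (* ≈ 0.64 for the logistic map at λ = 4 *)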
The plots (see Figure 6) show the behavior of $\mathrm{KL}_m$ versus $m$ for two different chaotic maps and for real financial data [14]: the daily opening values of the nasdaq100 and bel20 indices from 2000 to 2013. For the maps, the logarithmic map $x_{n+1} = \ln(a\,|x_n|)$ and the logistic map are used (the logarithmic map is shown in Figure 7).
For the maps, the simulation starts from a random number between 0 and 1 and is first iterated 500 times to avoid transients. Starting from these seeds, 720 iterates were kept, and $\mathrm{KL}_m$ was computed. It can be seen that the Kullback–Leibler divergence from the logistic map at $\lambda = 4$ to the random signal is fitted by a quadratic function of $m$: $\mathrm{KL}_m = -0.4260 + 0.2326\, m + 0.0095\, m^2$ ($p$-value $2 \times 10^{-7}$ for all parameters), while the behavior of the logarithmic map is linear in the range $a \in [0.4, 2.2]$. The financial data are also quadratic, $\mathrm{KL}_m(\mathrm{nasdaq}) = 0.1824 - 0.0973\, m + 0.0178\, m^2$ and $\mathrm{KL}_m(\mathrm{bel20}) = 0.1587 - 0.0886\, m + 0.0182\, m^2$, with a higher curvature than the logistic map, due to the fact that the spectrum of the probability $p_m$ is compatible with a constant distribution (see Figure 6), rendering the prediction of an increase or decrease completely random, which is not the case for a true random signal (see Figures 8 and 9).

6. Conclusions

The simple property of increases and decreases in a signal makes it possible to introduce the entropy of difference $\mathrm{ED}_m$ as a new, efficient complexity measure for chaotic time series. This new technique is numerically fast and easy to implement. It does not require complex signal processing and could replace the evaluation of the Lyapunov exponent (which is far more time-consuming). For a random signal, we have determined the value of $\mathrm{ED}_m$, which is independent of the probability distribution of this signal. This makes it possible to calculate the "distance" between the analyzed signal and a random signal (independently of its distribution). As "distance", we evaluate the Kullback–Leibler divergence versus the number of data $m$ used to build the difference string. This $\mathrm{KL}_m$ shows different behavior for different types of signal and can also be used to characterize the complexity of a time series. Since the only assumption for the random signal is that it is uncorrelated, this method makes it possible to detect the correlated nature of signals, even in chaotic regimes.

Author Contributions

Conceptualization and methodology, formal analysis, P.N.; writing, review and editing, P.N. and G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The Mathematica program for the probabilities q_m(s) of an uncorrelated random signal:
P["+"]= P["-"] = 1/2;
P["-", x__] := P[x] - P["+", x];
P[x__, "-"] := P[x] - P[x, "+"];
P[x__, "-", y__] := P[x] P[y] - P[x, "+", y];
P[x__] :=1/(StringLength[StringJoin[x]] + 1)!
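For example (illustrative usage):
P["+", "-"]                                  (* 1/3, i.e., q_3(+,-) = 2/6 *)
P["+", "-", "+", "-", "+", "-", "+", "-"]    (* 62/2835, the maximum of q_9 quoted in Section 2 *)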

References

  1. Bandt, C.; Pompe, B. Permutation Entropy: A Natural Complexity Measure for Time Series. Phys. Rev. Lett. 2002, 88, 174102. [Google Scholar] [CrossRef]
  2. Bian, C.; Qin, C.; Ma, Q.D.Y.; Shen, Q. Modified permutation-entropy analysis of heartbeat dynamics. Phys. Rev. E 2012, 85, 021906. [Google Scholar] [CrossRef]
  3. Zunino, L.; Pérez, D.G.; Martín, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A. Permutation entropy of fractional Brownian motion and fractional Gaussian noise. Phys. Lett. A 2008, 372, 4768. [Google Scholar] [CrossRef]
  4. Li, X.; Ouyang, G.; Richard, D.A. Predictability analysis of absence seizures with permutation entropy. Epilepsy Res. 2007, 77, 70. [Google Scholar] [CrossRef]
  5. Li, X.; Cui, S.; Voss, L.J. Using permutation entropy to measure the electroencephalographic effects of sevoflurane. Anesthesiology 2008, 109, 448. [Google Scholar] [CrossRef] [PubMed]
  6. Frank, B.; Pompe, B.; Schneider, U.; Hoyer, D. Permutation entropy improves fetal behavioural state classification based on heart rate analysis from biomagnetic recordings in near term fetuses. Med. Biol. Eng. Comput. 2006, 44, 179. [Google Scholar] [CrossRef] [PubMed]
  7. Olofsen, E.; Sleigh, J.W.; Dahan, A. Permutation entropy of the electroencephalogram: A measure of anaesthetic drug effect. Br. J. Anaesth. 2008, 101, 810. [Google Scholar] [CrossRef] [PubMed]
  8. Rosso, O.A.; Zunino, L.; Perez, D.G.; Figliola, A.; Larrondo, H.A.; Garavaglia, M.; Martin, M.T.; Plastino, A. Extracting features of Gaussian self-similar stochastic processes via the Bandt–Pompe approach. Phys. Rev. E 2007, 76, 061114. [Google Scholar] [CrossRef] [PubMed]
  9. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Statist. 1951, 22, 79. [Google Scholar] [CrossRef]
  10. Roldán, E.; Parrondo, J.M.R. Entropy production and Kullback–Leibler divergence between stationary trajectories of discrete systems. Phys. Rev. E 2012, 85, 031129. [Google Scholar] [CrossRef] [PubMed]
  11. May, R.M. Simple mathematical models with very complicated dynamics. Nature 1976, 261, 459. [Google Scholar] [CrossRef] [PubMed]
  12. Jakobson, M. Absolutely continuous invariant measures for one-parameter families of one-dimensional maps. Commun. Math. Phys. 1981, 81, 39–88. [Google Scholar] [CrossRef]
  13. Ginelli, F.; Poggi, P.; Turchi, A.; Chate, H.; Livi, R.; Politi, A. Characterizing Dynamics with Covariant Lyapunov Vectors. Phys. Rev. Lett. 2007, 99, 130601. [Google Scholar] [CrossRef] [PubMed]
  14. Available online: http://www.wessa.net/ (accessed on 1 February 2024).
Figure 1. The 2^8 values of the probability q_9(s), from s = "− − ⋯ −" = 0 to s = "+ + ⋯ +" = 255.
Figure 2. The 2^8 values of the probability q_9(s), for the π decimals (blue) and for a random distribution (red).
Figure 3. The Shannon entropy of q_m(s): ED_m increases linearly with m; the fit −0.799574 + 0.905206 m gives a sum of squared residuals of 1.7 × 10^−4 and p-values of 1.57 × 10^−12 and 1.62 × 10^−30 on the fit parameters, respectively.
Figure 4. ED_13 (strings of length 12) plotted versus λ, together with the bifurcation diagram and the value of the Lyapunov exponent [13]. The constant value appears when the logistic map enters a periodic regime.
Figure 5. From x_1 (blue), the first iteration of the logistic map (gray) gives x_2, and the second iteration (black) gives x_3. The respective positions of x_1, x_2, x_3 allow us to determine q_3.
Figure 6. The spectrum of q_13^Lo (black) versus the string binary value (from 0 to 2^12 − 1) for the logistic map at λ = 4, together with that of a random distribution q_13 (red).
Figure 7. ED_13 versus a for the logarithmic map x_{n+1} = ln(a |x_n|).
Figure 8. The KL-divergence for the data.
Figure 9. The spectrum of q_8 versus the string binary value (from 0 to 2^7 − 1) for the bel20 financial data.
Table 1. K values for different m-embeddings.

          m = 3    m = 4    m = 5    m = 6    m = 7
K_PE          6       24      120      720     5040
K_mPE        13       73      501     4051   37,633
K_ED          4        8       16       32       64
Table 2. q_m values for different m-embeddings, ordered by the binary representation of the string.

s =         0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
6 q_3 =     1   2   2   1
24 q_4 =    1   3   5   3   3   5   3   1
120 q_5 =   1   4   9   6   9  16  11   4   4  11  16   9   6   9   4   1
Table 3. ED_m values for different m-embeddings.

ED_2 = 1
ED_3 = 1/3 + log_2(3) = 1.9183
ED_4 = 3 + (1/2) log_2(3) − (5/12) log_2(5) = 2.82501
ED_5 = 47/30 + (3/10) log_2(3) + log_2(5) − (11/60) log_2(11) = 3.72985
Table 4. q_m values for π, for different m-embeddings.

s =         0      1      2      3      4      5      6      7      8      9     10     11     12     13     14     15
6 q_3 =     0.982  2.01   2.01   0.991
24 q_4 =    0.924  3.00   5.05   3.00   3.00   5.05   3.00   0.960
120 q_5 =   0.756  3.86   9.10   5.92   9.23  16.0   11.0    4.03   3.86  11.1   16.2    9.10   5.78   9.22   4.03   0.768
Table 5. ED_m values for π, for different m-embeddings.

ED_2 = 0.999998
ED_3 = 1.91361
ED_4 = 2.81364
ED_5 = 3.71059

