*2.2. Fuzzy Entropy (FuzzyEn)*

Expanding upon the concepts already established with approximate entropy (*ApEn*) and sample entropy (*SampEn*), Chen et al. [53,54] combined elements from fuzzy set theory and information theory to develop a fuzzy version of *SampEn*. Fuzzy entropy (*FuzzyEn*), like its ancestors *ApEn* and *SampEn* [54], is a "regularity statistic" that quantifies the (un)predictability of fluctuations in a time series. For the estimation of *FuzzyEn*, the similarity between vectors is defined based on fuzzy membership functions and the vectors' shapes. The gradual and continuous boundaries of the fuzzy membership functions lead to a series of advantages: continuity and validity of *FuzzyEn* at small parameter values, higher accuracy, stronger relative consistency, and less dependence on the data length. *FuzzyEn* can thus be considered an upgraded alternative to *SampEn* (and *ApEn*) for the evaluation of complexity, especially for short time series contaminated by noise [55].

Similar to *SampEn*, *FuzzyEn* excludes self-matches. Nevertheless, it applies a slightly different definition for the employed first *N* − *m* vectors of length *m*, removing from each a baseline, $\overline{s}\_i$:

$$\overline{s}\_i = m^{-1} \sum\_{j=0}^{m-1} s\_{i+j}, \tag{7}$$

i.e., for the *FuzzyEn* estimation, we use the first *N* − *m* of the vectors:

$$\mathbf{X}\_{i}^{m} = \{s\_{i}, s\_{i+1}, \dots, s\_{i+m-1}\} - \overline{s}\_{i}, \quad i = 1, 2, \dots, N - m. \tag{8}$$
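The construction of these baseline-removed template vectors can be sketched in Python as follows (the function name and interface are illustrative, not part of the original method's description):

```python
import numpy as np

def template_vectors(s, m):
    """Build the first N - m baseline-removed vectors of length m (Eqs. 7-8)."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    # Each row is {s_i, ..., s_{i+m-1}} minus its own mean, the baseline of Eq. 7.
    return np.array([s[i:i + m] - s[i:i + m].mean() for i in range(N - m)])
```

By construction, every row of the resulting array has zero mean, which is what makes the subsequent similarity comparison depend on the vectors' shapes rather than their absolute levels.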

Then, the similarity degree, $D\_{ij}^{m}$, between each pair of vectors, $\mathbf{X}\_{j}^{m}$ and $\mathbf{X}\_{i}^{m}$, being within a distance, *r*, from each other is defined by a fuzzy membership function:

$$D\_{ij}^{m} = \mu \left( d\_{ij}^{m}, r \right), \tag{9}$$

where $d\_{ij}^{m}$ is, as in the case of *ApEn* and *SampEn*, the supremum-norm difference between $\mathbf{X}\_{i}^{m}$ and $\mathbf{X}\_{j}^{m}$. For each vector, $\mathbf{X}\_{i}^{m}$, we estimate the average similarity degree with respect to all other vectors, $\mathbf{X}\_{j}^{m}$, $j = 1, 2, \dots, N - m$, $j \neq i$ (i.e., excluding itself):

$$\phi\_i^m(r) = \left(N - m - 1\right)^{-1} \sum\_{j=1, j \neq i}^{N-m} D\_{ij}^m. \tag{10}$$
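The averaging over fuzzy similarity degrees can be sketched as below. Note that the text only requires a fuzzy membership function $\mu(d, r)$; the exponential form $\mu(d, r) = \exp(-(d/r)^n)$ used here, and the fuzzy power $n$, are assumed (commonly used) choices, not prescribed above.

```python
import numpy as np

def phi_m(X, r, n=2):
    """Average similarity degree phi^m(r) over all vectors (Eqs. 9-11).

    X : (N - m, m) array of baseline-removed vectors.
    The exponential membership exp(-(d / r)**n) is an assumed choice.
    """
    # Supremum-norm (Chebyshev) distance between every pair of vectors.
    d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=-1)
    D = np.exp(-(d / r) ** n)            # fuzzy similarity degrees D_ij (Eq. 9)
    np.fill_diagonal(D, 0.0)             # exclude self-matches (j != i)
    Nm = X.shape[0]
    phi_i = D.sum(axis=1) / (Nm - 1)     # Eq. 10
    return phi_i.mean()                  # Eq. 11
```

Because the membership degrees vary continuously between 0 and 1 rather than switching abruptly at the tolerance *r*, the resulting statistic inherits the smoothness advantages discussed above.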

Then, we evaluate

$$\phi^{m}(r) = \left(N - m\right)^{-1} \sum\_{i=1}^{N-m} \phi\_{i}^{m}(r),\tag{11}$$

and

$$
\phi^{m+1}(r) = \left(N - m\right)^{-1} \sum\_{i=1}^{N-m} \phi\_i^{m+1}(r). \tag{12}
$$

The *FuzzyEn*(*m*,*r*) is then defined as

$$FuzzyEn(m, r) = \lim\_{N \to \infty} \left[ \ln \phi^m(r) - \ln \phi^{m+1}(r) \right],\tag{13}$$

which, for finite time series, can be calculated by the statistic

$$FuzzyEn(m, r, N) = \ln \phi^m(r) - \ln \phi^{m+1}(r). \tag{14}$$
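Putting the steps above together, a minimal *FuzzyEn* estimator might look like the following sketch. The exponential membership function $\exp(-(d/r)^n)$ and the default parameters (*m* = 2, *r* = 0.2, *n* = 2) are assumptions commonly made in practice, not values fixed by the text.

```python
import numpy as np

def fuzzy_en(s, m=2, r=0.2, n=2):
    """FuzzyEn(m, r, N) = ln phi^m(r) - ln phi^{m+1}(r)  (Eq. 14).

    The exponential membership exp(-(d / r)**n) is an assumed,
    commonly used choice of fuzzy membership function.
    """
    s = np.asarray(s, dtype=float)
    N = len(s)

    def phi(mm):
        # First N - m baseline-removed vectors of length mm (Eqs. 7-8).
        X = np.array([s[i:i + mm] - s[i:i + mm].mean() for i in range(N - m)])
        # Supremum-norm distances and fuzzy similarity degrees (Eq. 9).
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=-1)
        D = np.exp(-(d / r) ** n)
        np.fill_diagonal(D, 0.0)                        # exclude self-matches
        return (D.sum(axis=1) / (N - m - 1)).mean()     # Eqs. 10-12

    return np.log(phi(m)) - np.log(phi(m + 1))          # Eq. 14
```

A quick sanity check of the behavior described below: a perfectly regular series (e.g., a pure sinusoid) should yield a much lower *FuzzyEn* value than white noise of the same length.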

As mentioned above, *FuzzyEn* is a measure of complexity. More specifically, lower *FuzzyEn* values indicate a larger chance that a set of data will be followed by similar data, i.e., greater regularity. Conversely, higher *FuzzyEn* values indicate a smaller chance of similar data being repeated, conveying more randomness, disorder, and system complexity. Consequently, a low (high) value of *FuzzyEn* reflects a high (low) degree of regularity [42].
