*2.4. Fisher Information Measure (FIM)*

In recent decades, Fisher information has attracted increasing interest across scientific fields. It was first introduced by Fisher [59] as a representation of the amount of information contained in the results of experimental measurements of an unknown parameter of a stochastic system, or, simply, the amount of information that can be extracted from a set of measurements (the "quality" of the measurements) [60]. Fisher information is a useful tool for studying non-stationary and complex time series [61]. It serves as a measure of the level of disorder of a system, behaving inversely to entropy: when disorder increases, entropy increases while Fisher information decreases. Fisher information has been successfully applied to many different systems, demonstrating its ability to describe their complexity [62–64]. Additionally, its use has been suggested for identifying reliable precursors of critical events [65–67]. Moreover, Fisher information presents the so-called "locality" property, in contrast to the "globality" of entropy: it is sensitive to changes in the shape of the probability distribution of the measured variable, a sensitivity that entropy does not exhibit [68,69]. The Fisher information measure can be expressed as

$$I\_x = \sum\_{n=1}^{N-1} \frac{\left[p(\mathbf{x}\_{n+1}) - p(\mathbf{x}\_n)\right]^2}{p(\mathbf{x}\_n)}.\tag{18}$$
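As a minimal illustration of Eq. (18), the sum can be computed directly from a vector of discrete probabilities; the function name and the example distributions below are chosen for illustration only:

```python
import numpy as np

def fisher_information(p):
    """Discrete Fisher Information Measure, Eq. (18).

    p : sequence of N probabilities p(x_n), assumed normalized and positive.
    """
    p = np.asarray(p, dtype=float)
    # Squared successive differences, each weighted by 1/p(x_n)
    return np.sum((p[1:] - p[:-1]) ** 2 / p[:-1])

# A uniform (maximally disordered) distribution gives I_x = 0,
# while a peaked (more ordered) distribution gives a larger I_x,
# consistent with FIM behaving inversely to entropy.
flat = np.full(10, 0.1)
peaked = np.array([0.01, 0.02, 0.05, 0.12, 0.30,
                   0.30, 0.12, 0.05, 0.02, 0.01])
print(fisher_information(flat))    # → 0.0
print(fisher_information(peaked))  # positive
```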

The discrete probability distribution $p(\mathbf{x}\_n)$ corresponds to the values of the unknown underlying probability density function at the center values of the intervals $\{\mathbf{x}\_n\}$, which are not necessarily of equal length. The probability density function is usually approximated by a histogram, or by the kernel density estimation technique, employing different kernel functions such as the Gaussian kernel or the Epanechnikov kernel [60].
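A sketch of the histogram-based estimation step described above, assuming equal-width bins and skipping empty bins to avoid division by zero (a practical convention, not prescribed by the source; a Gaussian-kernel density estimate evaluated at the bin centers, e.g. `scipy.stats.gaussian_kde`, could replace the histogram):

```python
import numpy as np

def fim_from_series(x, n_bins=20):
    """Estimate the FIM of a time series x.

    The pdf is approximated by a histogram: bin counts are normalized
    to discrete probabilities p(x_n) at the bin centers, then Eq. (18)
    is applied. Terms whose denominator p(x_n) is zero (empty bins)
    are skipped.
    """
    counts, _edges = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()          # discrete probabilities p(x_n)
    mask = p[:-1] > 0                  # skip empty bins
    return np.sum((p[1:][mask] - p[:-1][mask]) ** 2 / p[:-1][mask])

rng = np.random.default_rng(0)
# A Gaussian sample has a strongly varying pdf shape (higher FIM)
# than a near-uniform sample (FIM close to zero).
print(fim_from_series(rng.normal(size=5000)))
print(fim_from_series(rng.uniform(size=5000)))
```

The bin count `n_bins` trades resolution against statistical noise in each $p(\mathbf{x}\_n)$; as the source notes, the intervals need not be of equal length, so variable-width binning is equally admissible.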
