**1. Introduction**

We aim to perform a local entropic analysis of the evolution of the temperature during a full-scale fire experiment and seek a straightforward, general, and process-based model of the compartment fire. We propose a new statistical complexity and compare known algorithms for extracting the underlying probabilities, checking their suitability for pointing out abnormal values and structure in the experimental time series. For recent research on fire phenomena performed using entropic tools, see Takagi, Gotoda, Tokuda, and Miyano [1] and Murayama, Kaku, Funatsu, and Gotoda [2].

The experimental data was collected during a full-scale fire experiment conducted at the Fire Officers Faculty in Bucharest. We briefly include here the description of the experimental setup (Materials and Methods). Details can be found in [3].

The experiment was carried out using a container (single-room compartment) with the following dimensions: 12 m × 2.2 m × 2.6 m. A single ventilation opening was available, namely the front door of the container, which remained open during the experiment. Parts of the walls and the ceiling of the container were furnished with oriented strand boards (OSB). The fire source was a wooden crib made of 36 wood strips of 2.5 cm × 2.5 cm × 30 cm, onto which 500 mL of ethanol was poured shortly before ignition. The fire bed was situated in a corner of the compartment, 1.2 m below the ceiling. The measurement devices consisted of six built-in K-type thermocouples, which were fixed at key locations (see Figure 1) and connected to a data acquisition logger. Flames were observed to impinge on the ceiling and exit through the opening, and we also noted the ignition of crumpled newspaper and stages of fire development that are known indicators of flashover.

**Figure 1.** Arrangement of the flashover container.

In Section 2, we present the theoretical background and briefly summarize the approaches that are used to model fire.

Section 3 is dedicated to the results regarding the analysis of the collected raw data.

#### **2. Theoretical Background and Remarks**

#### *2.1. Entropy and Statistical Complexity*

The natural logarithm is used below, as elsewhere in this paper.

Shannon's entropy [4] is defined as $H(P) = -\sum_{i=1}^{n} p_i \log p_i$, where $P = (p_1, \dots, p_n)$ is a finite probability distribution. It is nonnegative and its maximum value is $H(U) = \log n$, where $U = \left(\frac{1}{n}, \dots, \frac{1}{n}\right)$. Throughout the paper, we use the convention $0 \cdot \log 0 = 0$.

The Kullback-Leibler divergence [5] is defined by

$$D(P \| R) = \sum_{i=1}^{n} p_i \left(\log p_i - \log r_i\right) \tag{1}$$

where $P = (p_1, \dots, p_n)$ and $R = (r_1, \dots, r_n)$ are probability distributions. It is nonnegative and it vanishes for $P = R$.

If the value 0 appears in the probability distributions $P = (p_1, \dots, p_n)$ and $R = (r_1, \dots, r_n)$, it must appear in the same positions for the sake of significance. Otherwise, one usually considers the conventions $0 \log \frac{0}{b} = 0$ for $b \geq 0$ and $a \log \frac{a}{0} = \infty$ for $a > 0$. We remark that these are strong limitations, and such conditions rarely occur in practice.

To overcome this issue, the following divergence, which is well defined everywhere, is used in the literature.

The Jensen-Shannon divergence (see [6,7]) is given by

$$JS(P \| R) = \frac{1}{2} D\left(P \,\Big\|\, \frac{P+R}{2}\right) + \frac{1}{2} D\left(R \,\Big\|\, \frac{P+R}{2}\right) = H\left(\frac{P+R}{2}\right) - \frac{H(P)+H(R)}{2}. \tag{2}$$

The disequilibrium-based statistical complexity (LMC statistical complexity), introduced in 1995 by López-Ruiz, Mancini, and Calbet in [8], is defined as $C(P) = D(P) \cdot \frac{H(P)}{\log n}$, where $D(P)$, which is interpreted as disequilibrium, is the quadratic distance $D(P) = \sum_{i=1}^{n} \left(p_i - \frac{1}{n}\right)^2$.
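For concreteness, the quantities defined above translate directly into code. The following minimal Python sketch is ours (the function names are not from any cited work); it uses the natural logarithm and the convention $0 \cdot \log 0 = 0$, as stated above.

```python
import math

def shannon_entropy(p):
    """H(P) = -sum_i p_i log p_i (natural log, with 0*log 0 = 0)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def js_divergence(p, r):
    """Jensen-Shannon divergence of Equation (2)."""
    m = [(x + y) / 2 for x, y in zip(p, r)]
    return shannon_entropy(m) - (shannon_entropy(p) + shannon_entropy(r)) / 2

def lmc_complexity(p):
    """LMC statistical complexity C(P) = D(P) * H(P) / log n."""
    n = len(p)
    disequilibrium = sum((x - 1 / n) ** 2 for x in p)
    return disequilibrium * shannon_entropy(p) / math.log(n)
```

Note that for the uniform distribution the disequilibrium vanishes, and for a degenerate distribution the entropy vanishes, so in both extreme ("simple") cases the LMC complexity is zero.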

*Symmetry* **2020**, *12*, 22

Interpreted as entropic non-triviality (in Lamberti et al. [9] and Zunino et al. [10]), the Jensen-Shannon statistical complexity is defined by $C^{(JS)}(P) = Q^{(JS)}(P) \cdot \frac{H(P)}{\log n}$, where the disequilibrium $Q^{(JS)}(P)$ is $Q^{(JS)}(P) = k \cdot JS(P \| U)$. Here, $k = \left(\max_P JS(P \| U)\right)^{-1}$ is the normalizing constant and $U = \left(\frac{1}{n}, \dots, \frac{1}{n}\right)$. Therefore, we have $0 \leq C^{(JS)}(P) \leq 1$.

For the convenience of the interested reader, we include the following method to determine the normalizing constant (a result stated for computational purposes, without proof, in [9]).

**Proposition 1.** *Using the above notation, for the computation of the normalizing constant* $k = \left(\max_P JS(P \| U)\right)^{-1}$*, the maximum is attained for* $P$ *such that there exists* $i$ *with* $p_i = 1$*.*

$$\text{It holds that } \mathbf{k} = \left(\log 2 - \frac{1}{2}\log\frac{\mathbf{n}+1}{\mathbf{n}} - \frac{\log(\mathbf{n}+1)}{2\mathbf{n}}\right)^{-1}.$$

**Proof.** We have the following calculations:

$$JS(P \| U) = H\left(\frac{P+U}{2}\right) - \frac{H(P)+H(U)}{2} = \frac{1}{2}\sum_{i=1}^{n} p_i \log p_i - \frac{1}{2}\log n - \sum_{i=1}^{n} \left(\frac{p_i}{2} + \frac{1}{2n}\right)\log\left(\frac{p_i}{2} + \frac{1}{2n}\right). \tag{3}$$

$$\frac{\partial\, JS(P \| U)}{\partial p_i} = \frac{1}{2}\log p_i + \frac{1}{2} - \frac{1}{2}\log\left(\frac{p_i}{2} + \frac{1}{2n}\right) - \frac{1}{2} = \frac{1}{2}\log p_i - \frac{1}{2}\log\left(\frac{p_i}{2} + \frac{1}{2n}\right). \tag{4}$$

$$\frac{\partial^2 JS(P \| U)}{\partial p_i^2} = \frac{1}{2p_i} - \frac{1}{4\left(\frac{p_i}{2} + \frac{1}{2n}\right)} = \frac{1}{2p_i} - \frac{1}{2p_i + \frac{2}{n}} > 0, \qquad \frac{\partial^2 JS(P \| U)}{\partial p_i \partial p_j} = 0 \ (i \neq j). \tag{5}$$

So, the Hessian of $JS(P \| U)$ is everywhere positive definite, whence $JS(P \| U)$ is (strictly) convex on the open convex set $\left\{(p_1, \dots, p_n) : 0 < p_i < 1 \text{ for all } i, \ \sum_{i=1}^{n} p_i = 1\right\}$. Therefore, $JS(P \| U)$ cannot have a maximum inside (otherwise, it would be constant), and the points of maximum must lie on the boundary; see Theorem 3.10.10 in [11] (p. 171). Such points exist, because $JS(P \| U)$ is continuous on the compact set $\Delta = \left\{(p_1, \dots, p_n) : 0 \leq p_i \leq 1 \text{ for all } i, \ \sum_{i=1}^{n} p_i = 1\right\}$. The function $JS(P \| U)$ is continuous and convex on the compact convex set $\Delta$, so its maximum lies on the set of vertices of $\Delta$ (where $p_i = 1$ for one $i$); see Theorem 3.10.11 in [11] (p. 171). Since $JS(P \| U)$ does not depend on the order of the components of $P$, the maximum value is attained at all vertices, so it can be straightforwardly computed by setting $P = (1, 0, \dots, 0)$. □
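The closed form in Proposition 1 can also be checked numerically by evaluating $JS(P \| U)$ at a vertex $P = (1, 0, \dots, 0)$. A short Python sketch (helper names are ours) under these assumptions:

```python
import math

def js_at_vertex(n):
    """JS(P||U) for P = (1, 0, ..., 0): H((P+U)/2) - (H(P) + H(U))/2, H(P) = 0."""
    mix = [0.5 + 0.5 / n] + [0.5 / n] * (n - 1)
    h_mix = -sum(x * math.log(x) for x in mix)
    return h_mix - math.log(n) / 2

def max_js_closed_form(n):
    """The closed form of Proposition 1 (the reciprocal of k)."""
    return math.log(2) - 0.5 * math.log((n + 1) / n) - math.log(n + 1) / (2 * n)
```

Both functions agree to machine precision, and the closed form stays below $\log 2$ for every $n$, in line with Remark 1.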

**Remark 1.** *Note that the maximal value of* $JS(P \| U)$ *is* $\log 2 - \frac{1}{2}\log\frac{n+1}{n} - \frac{\log(n+1)}{2n} \nearrow \log 2$*, as* $n \to \infty$*. Since* $JS(P \| U)$ *is bounded from above by* $\log 2$*, independently of* $n$*, the normalization of* $JS(P \| U)$ *in the definition of the Jensen-Shannon complexity does not seem to be relevant, and one could simply consider* $JS(P \| U)\,\frac{H(P)}{\log n}$*. Let*

$\lambda \in [0, 1]$*. The parametric Jensen-Shannon divergence (see, for instance, [6]) is given by*

$$\begin{split} JS_\lambda(P \| R) &= (1-\lambda)\, D(P \| (1-\lambda)P + \lambda R) + \lambda\, D(R \| (1-\lambda)P + \lambda R) \\ &= H((1-\lambda)P + \lambda R) - \left((1-\lambda)H(P) + \lambda H(R)\right). \end{split} \tag{6}$$

*It is nonnegative and it vanishes for* $P = R$ *or* $\lambda = 0$ *or* $1$*. See also Figure 2.*

*The values* $1-\lambda$ *and* $\lambda$ *are interpreted as a priori probabilities. Note that* $JS_\lambda(P \| R) = JS_{1-\lambda}(R \| P)$ *and* $JS_\lambda$ *is not symmetric, unless* $\lambda = 0.5$.

Mutatis mutandis, from Donald's identity (Lemma 2.12 in [12]), one has

$$JS_\lambda(P \| R) + D((1-\lambda)P + \lambda R \| Q) = (1-\lambda)\, D(P \| Q) + \lambda\, D(R \| Q) \tag{7}$$

for an arbitrarily fixed $\lambda \in [0, 1]$; a straightforward computation checks that it holds. Therefore,

$$JS_\lambda(P \| R) = \min\left\{(1-\lambda)\, D(P \| Q) + \lambda\, D(R \| Q) : Q = (q_1, \dots, q_n) \text{ is a finite probability distribution}\right\}. \tag{8}$$

We introduce the parametric Jensen-Shannon statistical complexity as

$$C_\lambda^{(JS)}(P) \equiv JS_\lambda(P \| U)\,\frac{H(P)}{\log n}. \tag{9}$$

As in the case of the complexities $C(P)$ and $C^{(JS)}(P)$, the new ones, $C_\lambda^{(JS)}(P)$, are zero (minimum complexity) for $P = U$ or if there exists $i$ such that $p_i = 1$. These two cases describe very different states of the system, both of which are extreme circumstances considered simple, namely the states with maximum and minimum entropy, respectively.

We do not need to normalize $JS_\lambda(P \| U)$ in the definition of the parametric Jensen-Shannon complexity (possibly one can feel more comfortable with its normalized version in other frameworks), but we stress that one can easily prove, following the same recipe as above, that its maximum value is attained for $P$ such that there exists $i$ with $p_i = 1$.
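Under the same conventions, Equations (6) and (9) translate into a few lines of Python. This sketch (our own naming, not from the cited works) also illustrates the skew-symmetry $JS_\lambda(P \| R) = JS_{1-\lambda}(R \| P)$ and the two zero-complexity cases:

```python
import math

def entropy(p):
    """H(P) with natural log and 0*log 0 = 0."""
    return -sum(x * math.log(x) for x in p if x > 0)

def js_lambda(p, r, lam):
    """Parametric Jensen-Shannon divergence, Equation (6)."""
    mix = [(1 - lam) * x + lam * y for x, y in zip(p, r)]
    return entropy(mix) - ((1 - lam) * entropy(p) + lam * entropy(r))

def c_js_lambda(p, lam):
    """Parametric Jensen-Shannon statistical complexity, Equation (9)."""
    n = len(p)
    u = [1 / n] * n
    return js_lambda(p, u, lam) * entropy(p) / math.log(n)
```

For example, $c\_js\_lambda$ vanishes both at the uniform distribution (zero disequilibrium) and at a degenerate distribution (zero entropy).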

**Figure 2.** The parametric Jensen-Shannon divergence $JS_\lambda(P \| 1-P)$, for $P = (t, 1-t)$, $t \in [0, 1]$.

**Proposition 2.** *Let* λ ∈ [0, 1]*. Using the above notation, it holds*

$$\max_P JS_\lambda(P \| U) = -\lambda\log\lambda - (1-\lambda)\log(1-\lambda) - (1-\lambda)\log\left(1 + \frac{\lambda}{(1-\lambda)n}\right) - \frac{\lambda}{n}\log\frac{(1-\lambda)n + \lambda}{\lambda}. \tag{10}$$

$$\text{Moreover, } \max_P JS_\lambda(P \| U) \nearrow -\lambda\log\lambda - (1-\lambda)\log(1-\lambda) \leq \log 2, \text{ as } n \to \infty.$$

**Proof.** We omit the computation of $\max_P JS_\lambda(P \| U)$, which is straightforward.

To justify the monotonicity, it is enough to prove that $f(x) = \frac{\lambda}{x}\log\frac{(1-\lambda)x + \lambda}{\lambda}$ is decreasing:

$$f'(\mathbf{x}) = -\frac{\lambda}{\mathbf{x}^2} \left[ \frac{\lambda}{(1-\lambda)\mathbf{x} + \lambda} - 1 - \log \frac{\lambda}{(1-\lambda)\mathbf{x} + \lambda} \right] < 0,\\ \text{for } \lambda \in (0,1) \text{ and } \mathbf{x} > 0. \tag{11}$$

Furthermore, it is obvious that $(1-\lambda)\log\left(1 + \frac{\lambda}{(1-\lambda)n}\right) + \frac{\lambda}{n}\log\frac{(1-\lambda)n + \lambda}{\lambda} \to 0$.

The last inequality follows from Jensen's inequality, which is applied to the concave logarithmic function.

Therefore, $JS_\lambda(P \| U)$ is bounded from above by $\log 2$, independently of $n$. □

**Remark 2.** *We split this result into two inequalities (of independent interest), which can be proved by the same technique. Namely, it holds that*

$$\max\_{\mathbf{P}} \text{D}(\mathbf{P} \| (1 - \lambda)\mathbf{P} + \lambda \mathbf{U}) \nearrow -\log(1 - \lambda) \tag{12}$$

*and*

$$\max\_{\mathbf{P}} \text{D}(\mathbf{U} \| (1 - \lambda)\mathbf{P} + \lambda \mathbf{U}) \nearrow -\log \lambda,\tag{13}$$

*as* n → ∞*.*
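The closed form of Equation (10) can be verified numerically by evaluating $JS_\lambda(P \| U)$ at a vertex $P = (1, 0, \dots, 0)$, where the maximum is attained (as noted above for the parametric complexity). A short Python check under these assumptions (the helper names are ours):

```python
import math

def js_lambda_at_vertex(n, lam):
    """JS_lambda(P||U) for P = (1, 0, ..., 0); H(P) = 0, H(U) = log n."""
    mix = [(1 - lam) + lam / n] + [lam / n] * (n - 1)
    h_mix = -sum(x * math.log(x) for x in mix)
    return h_mix - lam * math.log(n)

def max_js_lambda_closed_form(n, lam):
    """Right-hand side of Equation (10), for lam in (0, 1)."""
    return (-lam * math.log(lam) - (1 - lam) * math.log(1 - lam)
            - (1 - lam) * math.log(1 + lam / ((1 - lam) * n))
            - (lam / n) * math.log(((1 - lam) * n + lam) / lam))
```

For large $n$, the value approaches $-\lambda\log\lambda - (1-\lambda)\log(1-\lambda)$ from below, consistent with Proposition 2.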

**Proposition 3.** *Let* λ, μ ∈ [0, 1]*. Using the above notation, the following inequality holds:*

$$\min\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} JS_\mu(P \| R) \leq JS_\lambda(P \| R) \leq \max\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} JS_\mu(P \| R) \tag{14}$$

*where* $P = (p_1, \dots, p_n)$ *and* $R = (r_1, \dots, r_n)$ *are two finite probability distributions.*

**Proof.** The result is a particular case of Theorem 3.2 from [13]. We include here an alternative proof for the sake of completeness.

It is known that the entropy $H$ is concave; hence, $H((1-\lambda)P + \lambda R) \geq (1-\lambda)H(P) + \lambda H(R)$. We prove that

$$\min\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\}\left[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\right] \leq \tag{15}$$

$$\mathrm{H}((1-\lambda)\mathrm{P}+\lambda\mathrm{R})-(1-\lambda)\mathrm{H}(\mathrm{P})-\lambda\mathrm{H}(\mathrm{R})\le\tag{16}$$

$$\max\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} \left[\mathcal{H}((1-\mu)\mathcal{P}+\mu\mathcal{R})-(1-\mu)\mathcal{H}(\mathcal{P})-\mu\mathcal{H}(\mathcal{R})\right].\tag{17}$$

We consider $0 \leq \frac{1-\lambda}{1-\mu} \leq \frac{\lambda}{\mu}$, so $\lambda \geq \mu$, which implies, by the concavity of $H$, that

$$(1-\lambda)H(P) + \lambda H(R) + \min\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\}\left[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\right] = \tag{18}$$

$$(1-\lambda)H(P) + \lambda H(R) + \frac{1-\lambda}{1-\mu}\left[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\right] = \tag{19}$$

$$\frac{\lambda-\mu}{1-\mu}H(R) + \frac{1-\lambda}{1-\mu}H((1-\mu)P + \mu R) \leq H\left(\frac{\lambda-\mu}{1-\mu}R + \frac{1-\lambda}{1-\mu}\left((1-\mu)P + \mu R\right)\right) = H((1-\lambda)P + \lambda R), \tag{20}$$

because it holds that $\frac{\lambda-\mu}{1-\mu} + \frac{1-\lambda}{1-\mu} = 1$ and $\lambda - \mu \geq 0$.

For the second inequality, we have

$$(1-\lambda)H(P) + \lambda H(R) + \max\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\}\left[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\right] = \tag{21}$$

$$(1-\lambda)H(P) + \lambda H(R) + \frac{\lambda}{\mu}\left[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\right] =$$

$$-\frac{\lambda-\mu}{\mu}H(P) + \frac{\lambda}{\mu}H((1-\mu)P + \mu R) \geq H((1-\lambda)P + \lambda R), \tag{22}$$

because it holds that $\frac{\lambda}{\mu}H((1-\mu)P + \mu R) \geq H((1-\lambda)P + \lambda R) + \frac{\lambda-\mu}{\mu}H(P)$, which is equivalent to

$$\mathcal{H}((1-\mu)\mathcal{P}+\mu\mathcal{R}) \ge \frac{\mu}{\lambda}\mathcal{H}((1-\lambda)\mathcal{P}+\lambda\mathcal{R}) + \frac{\lambda-\mu}{\lambda}\mathcal{H}(\mathcal{P}).\tag{23}$$

For $0 \leq \frac{\lambda}{\mu} \leq \frac{1-\lambda}{1-\mu}$, the proof is similar. □

**Remark 3.** *For* $\lambda \in [0, 1]$*,* $\mu \in (0, 1)$*, and* $R = U$ *in Equation (14), the following inequality holds:*

$$\min\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} JS_\mu(P \| U) \leq JS_\lambda(P \| U) \leq \max\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} JS_\mu(P \| U). \tag{24}$$

*For* $\mu = \frac{1}{2}$ *in Equation (24), we obtain:*

$$2\min\{1-\lambda, \lambda\}\, JS(P \| U) \leq JS_\lambda(P \| U) \leq 2\max\{1-\lambda, \lambda\}\, JS(P \| U). \tag{25}$$

*Multiplying by* $\frac{H(P)}{\log n}$ *in Equation (25), we deduce the following inequality related to the parametric Jensen-Shannon statistical complexity:*

$$2k\min\{1-\lambda, \lambda\}\, C^{(JS)}(P) \leq C_\lambda^{(JS)}(P) \leq 2k\max\{1-\lambda, \lambda\}\, C^{(JS)}(P), \tag{26}$$

*where* $k = \log 2 - \frac{1}{2}\log\frac{n+1}{n} - \frac{\log(n+1)}{2n}$*.*
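The two-sided bound of Equation (25) is easy to check numerically on random distributions. The following Python sketch is our own (not from the cited works); it samples distributions on the simplex and verifies both sides of the inequality against the uniform distribution:

```python
import math
import random

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def js_lambda(p, r, lam):
    mix = [(1 - lam) * x + lam * y for x, y in zip(p, r)]
    return entropy(mix) - ((1 - lam) * entropy(p) + lam * entropy(r))

def check_bounds(n=5, trials=200, seed=0):
    """Verify 2*min(1-lam, lam)*JS <= JS_lam <= 2*max(1-lam, lam)*JS with R = U."""
    rng = random.Random(seed)
    u = [1 / n] * n
    for _ in range(trials):
        w = [rng.random() + 1e-9 for _ in range(n)]
        p = [x / sum(w) for x in w]
        lam = rng.uniform(0.0, 1.0)
        js = js_lambda(p, u, 0.5)
        val = js_lambda(p, u, lam)
        assert 2 * min(1 - lam, lam) * js - 1e-12 <= val <= 2 * max(1 - lam, lam) * js + 1e-12
    return True
```

The same loop, with the factor $2k$ inserted, checks Equation (26), since the two complexities differ from the divergences only by the common factor $H(P)/\log n$.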

#### *2.2. Extraction of the Underlying Probability Distribution*

The permutation entropy (PE) [14] quantifies randomness and the complexity of a time series based on the appearance of ordinal patterns, that is on comparisons of neighboring values of a time series. For other details on the PE algorithm applied to the present experimental data, see [3].

Let T = (t1, ... , tn) be a time series with distinct values.

**Step 1**. Rearranging the components of each $j$-tuple $(t_i, \dots, t_{i+j-1})$ in increasing order as $(t_{i+r_1-1}, \dots, t_{i+r_j-1})$ yields a unique permutation of order $j$, denoted by $\pi = (r_1, \dots, r_j)$, which is an encoding pattern that describes the ups and downs in the considered $j$-tuple.

Simple numerical examples may help clarify the concepts throughout this section.

**Example 1.** *For the five-tuple* $(2.3, 1, 3.1, 6.1, 5.2)$*, the corresponding permutation (encoding) is* $(2, 1, 3, 5, 4)$.

**Step 2**. The absolute frequency of this permutation (the number of j-tuples which are associated to this permutation) is

$$k_\pi \equiv \#\{i : i \leq n - (j-1), \ (t_i, \dots, t_{i+j-1}) \text{ is of type } \pi\}. \tag{27}$$

These values sum to the number of all the consecutive $j$-tuples, that is, $n - (j-1)$.

**Step 3**. The permutation entropy of order $j$ is defined as $PE(j) \equiv -\sum_\pi p_\pi \log p_\pi$, where $p_\pi = \frac{k_\pi}{n-(j-1)}$ is the relative frequency.

In [14], the measured values of the time series are considered distinct. The authors neglect equalities and propose to break them by adding small random perturbations (random noise) to the original series.

Another known approach is to rank equal values according to their order of emergence (i.e., by their sequential/chronological order; see, for instance, [15,16]). We use this method throughout the paper to compute $PE(j)$ for $j = 3, 4, 5$.
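As an illustration, the three steps above, with ties ranked chronologically, fit in a few lines of Python (the function names are ours; Python's stable sort realizes the chronological ranking of equal values):

```python
import math
from collections import Counter

def ordinal_pattern(window):
    """1-based positions that sort the window increasingly; the stable sort
    ranks equal values by their order of emergence."""
    return tuple(i + 1 for i in sorted(range(len(window)), key=lambda s: window[s]))

def permutation_entropy(series, j):
    """PE(j) = -sum_pi p_pi log p_pi over all consecutive j-tuples (Steps 1-3)."""
    counts = Counter(
        ordinal_pattern(series[i:i + j]) for i in range(len(series) - j + 1)
    )
    total = len(series) - (j - 1)
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

For the five-tuple of Example 1, `ordinal_pattern((2.3, 1, 3.1, 6.1, 5.2))` returns `(2, 1, 3, 5, 4)`; a monotone series produces a single pattern and hence zero permutation entropy.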

When applying the PE algorithm to the experimental fire data, $C_\lambda^{(JS)}(P)$ cannot be zero. The number of encoding patterns that occur is greater than one, and these patterns are not equiprobable: some patterns may be rare or locally forbidden (that is, one encounters such patterns at some thermocouples, but not in all six time series), as discussed in [3].

We briefly describe now the encoding steps in the TLPE algorithm (Two-Length Permutation Entropy algorithm) given by Watt and Politi in [17]; other details are provided in [3].

**Step 1**. Given the $j$-tuple $T = (t_1, \dots, t_j)$, we start by encoding the last $k \leq j$ elements $(t_{j-k+1}, \dots, t_j)$ according to the ordinal position of each element; that is, every $t_s$ is replaced by a symbol indicating the position occupied by $t_s$ within the increasing rearrangement of the considered $k$-tuple.

Next, we proceed by encoding each previous element $t_m$, down to $m = 1$, according to the symbol provided by **Step 1** applied to the $k$-tuple $(t_m, \dots, t_{m+k-1})$.

**Example 2.** *Encoding obtained by the chronological ordering of equal values:* $(4.1, 4.1, 4.1, 5, 2.1) \to (1, 1, 2, 3, 1)$ *for* $k = 3$ *and* $j = 5$.

**Steps 2** and **3** coincide with **Steps 2** and **3** of the PE algorithm above.

This algorithm leads, after computing the relative frequencies of the encoding sequences, to the two-length permutation entropy (TLPE (k, j)).

Given the pair $(k, j)$, the number of symbolic (encoding) sequences of length $j$ is $k!\,k^{j-k}$, which can be much smaller than $j!$; hence, this algorithm is faster, involves simplified computation, and sometimes makes the results more relevant for large values of $j$.

We deal with the equal values by using the same method as for PE; that is, we consider them ordered chronologically.
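A compact Python sketch of the TLPE encoding follows (our own naming, not from [17]; it reuses the chronological ranking of equal values). The rank of an element within a $k$-window counts the strictly smaller values plus the equal values that occurred earlier:

```python
import math
from collections import Counter

def chrono_rank(window, pos):
    """1-based rank of window[pos]; ties broken by order of emergence."""
    return 1 + sum(
        v < window[pos] or (v == window[pos] and i < pos)
        for i, v in enumerate(window)
    )

def tlpe_encode(tup, k):
    """TLPE symbol sequence: the last k elements are ranked within the final
    k-tuple; each earlier element is ranked within the k-window starting at it."""
    j = len(tup)
    tail = tup[j - k:]
    head = [chrono_rank(tup[m:m + k], 0) for m in range(j - k)]
    return tuple(head + [chrono_rank(tail, s) for s in range(k)])

def tlpe(series, k, j):
    """Two-length permutation entropy TLPE(k, j)."""
    counts = Counter(
        tlpe_encode(series[i:i + j], k) for i in range(len(series) - j + 1)
    )
    total = len(series) - (j - 1)
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

Here `tlpe_encode((4.1, 4.1, 4.1, 5, 2.1), 3)` reproduces the encoding `(1, 1, 2, 3, 1)` of Example 2, and the number of distinct symbol sequences is at most $k!\,k^{j-k}$, as noted above.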

In the next section, we apply the above techniques and observe their capability to discern the changes of the parametric Jensen-Shannon statistical complexity of the experimental data.

#### **3. Raw Data Analysis**

The raw data set under consideration consists of temperatures measured during a compartment fire: six thermocouples T1, ..., T6 measured the temperature every second during the experiment. Hence, we get six time series consisting of 3046 entries (data points) each, and we aim at a better understanding of these results by modeling the time series using information theory, and at assessing the performance of the discussed statistical complexities.

We plot the parametric Jensen-Shannon statistical complexity against the parameter (for λ ∈ {0, 0.2, ..., 1}). We notice the unusual plot for the time series at T5, which is definitely not caused by the position of this thermocouple. The graph corresponding to the time series at T5 is far from the graphs for the other thermocouples; hence, smaller values were obtained for the statistical complexities, with no apparent experiment-related or mathematical reason. See Figures 3–7.

We conclude that the PE and TLPE algorithms can be successfully used to detect unusual data collected in fire experiments: different embedding dimensions and different algorithms used to determine the underlying probabilities provide the same conclusion, the hierarchy among the statistical complexities established for the thermocouples T1–T5 is the same, and T5 is always at a bigger distance from the rest of them. The position of the thermocouple T5 does not justify this big difference (see Figure 1). This also agrees with the smaller values provided at T5 by the LMC statistical complexity in Figure 8.

It is not clear in [9] why only the disequilibrium provided by $JS$ has been considered and why $JS_\lambda$ has been avoided. Using experimental data, we have verified that the parametric Jensen-Shannon complexity can be used for the analysis of time series related to fire dynamics: except for the trivial cases $\lambda = 0$ or $1$, the results are not altered by the non-symmetry of $JS_\lambda$ for $\lambda \neq 0.5$ (however, the embedding dimension $j$ has to be adequate to the amount of data), so one can draw similar conclusions as for $\lambda = 0.5$. See Figure 8 (the plots obtained by PE(3), PE(4), TLPE(3,5), and TLPE(2,5) look similar, so we do not include them here). We have limitations on the choice of the embedding dimension $j$, since the factorial increases fast, and one then requires a bigger amount of data $n$. So, as a guideline for choosing the embedding dimension, the value of the statistical complexities remains relevant for $j$ such that $n \gg j!$. See also [18].

Moreover, the proposed parametric Jensen-Shannon statistical complexities complement and validate the information provided by the usual LMC and Jensen-Shannon statistical complexities. See Figures 8 and 9 for a quick comparison to the descriptions provided by the Jensen-Shannon and LMC statistical complexities. According to our findings, the parametric Jensen-Shannon statistical complexity is a valid tool for the analysis of the evolution of the temperature in compartment fire data. The slight differences that appear between the upper line (corresponding to λ = 0.5) in Figure 9 and the one in Figure 10 arise because the Jensen-Shannon complexity [9] is defined using the normalized Jensen-Shannon divergence, while we introduced the parametric Jensen-Shannon divergence in the LMC style, that is, without normalizing the disequilibrium.

**Figure 3.** Plot obtained using the PE(5) algorithm. PE: permutation entropy.

**Figure 4.** Plot obtained using the PE(4) algorithm.

**Figure 5.** Plot obtained using the PE(3) algorithm.

**Figure 6.** Plot obtained using the TLPE(3,5) algorithm. TLPE: Two-Length Permutation Entropy algorithm.

**Figure 7.** Plot obtained using the TLPE(2,5) algorithm.

**Figure 8.** Statistical complexity.

**Figure 9.** Jensen-Shannon statistical complexity (λ = 1/2).

**Figure 10.** Obtained for the parametric Jensen-Shannon complexity using the PE(5) algorithm.

The most relevant aspect is that applying the formula for the parametric Jensen-Shannon complexity yields similar plots regardless of the embedding dimension and of the encoding-type algorithms used to determine the probability distribution, so the analysis is coherent and not misleading. Similarities with other complexity formulae in use would certainly improve the whole picture and bring us one step closer to understanding their ability to capture the behavior of various phenomena, in this particular case the fire dynamics. See Figures 8–10. We remark that these types of similarities might yield further mathematical results stating relationships among these mathematical notions: the (parametric) Jensen-Shannon and the LMC complexities.

#### **4. Concluding Remarks on the Limitations of Our Study**

The newly proposed complexities are used to analyze a full-scale experimental data set collected from a compartment fire.

For various algorithms and various embedding dimensions, more comparisons can be performed from this point onwards. We could not answer the questions about the merits and demerits of the known statistical complexities: such aspects are not yet clear in the literature, even in other frameworks where the permutation entropy has already been used by many researchers. Therefore, we discussed the relevance of the use of statistical complexities in the framework of fire data: small changes in the algorithms or choosing different embedding dimensions do not affect the interpretation of the results and the conclusions. This means that this new mathematical tool (the parametric Jensen-Shannon complexity) informally stays "stable" in the framework of fire data. The accuracy of the interpretations can definitely be improved by the choice of the parameters, but the degree of its change cannot be estimated out of the data gathered in just one experiment: further research is required.

Other recent results on the analysis of this data set can be found in [19]. For background on this material, the reader is referred to [20]. For the use of the permutation entropy in other frameworks, see [21,22].

Our results might also indicate a turbulence or a malfunction of the thermocouple T5 (an improperly calibrated scale); however, it is beyond the scope of the present paper to discuss it in detail.

**Author Contributions:** The work presented here was carried out in collaboration between all authors. All authors contributed equally and significantly in writing this article. All authors have contributed to the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by a grant of the Romanian Ministry of Research and Innovation, CCCDI-UEFISCDI, project number PN-III-P1-1.2-PCCDI-2017-0350/38PCCDI, within PNCDI III.

**Acknowledgments:** The authors thank Eleutherius Symeonidis for the argumentation of the proof of Proposition 1.

**Conflicts of Interest:** The authors declare no conflict of interest.
