*Article* **Lithium-Ion Battery Prognostics through Reinforcement Learning Based on Entropy Measures**

**Alireza Namdari <sup>1</sup>, Maryam Asad Samani <sup>2</sup> and Tariq S. Durrani <sup>3,\*</sup>**

	- **\*** Correspondence: t.durrani@strath.ac.uk; Tel.: +44-(0)-141-548-2540

**Abstract:** Lithium-ion is a widely adopted battery technology used in vastly different electrical systems. Failure of the battery can lead to failure of the entire system in which the battery is embedded and cause irreversible damage. To avoid such damage, research is actively conducted, and data-driven methods based on prognostics and health management (PHM) systems are proposed. PHM can use multiple time-scale data and stored information from battery capacities over several cycles to determine the battery state of health (SOH) and its remaining useful life (RUL). This results in battery safety, stability, reliability, and longer lifetime. In this paper, we propose different data-driven approaches to battery prognostics that rely on Long Short-Term Memory (LSTM), the Autoregressive Integrated Moving Average (ARIMA), and Reinforcement Learning (RL), each based on the permutation entropy of the battery voltage sequence at each cycle, since these approaches take vital information from past data into account and result in high accuracy.

**Keywords:** lithium-ion battery; prognostics; long short-term memory; ARIMA; reinforcement learning

**Citation:** Namdari, A.; Samani, M.A.; Durrani, T.S. Lithium-Ion Battery Prognostics through Reinforcement Learning Based on Entropy Measures. *Algorithms* **2022**, *15*, 393. https:// doi.org/10.3390/a15110393

Academic Editor: Frank Werner

Received: 28 August 2022 Accepted: 20 October 2022 Published: 24 October 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **1. Introduction**

#### *1.1. Lithium-Ion Batteries*

Lithium-ion batteries, as the primary power source in electric vehicles, have attracted significant attention recently and have become a focus of research. It is believed that lithium-ion batteries have the inherent potential to become the power source of future environmentally friendly vehicles [1].

Lithium-ion batteries are the best option for electric vehicles due to their high-quality performance, capacity, small volume, light weight, low pollution, and rechargeability with no memory effect [2]. However, battery performance degrades under poor pavement conditions, temperature variations, and load changes, which can lead to leakage, insulation damage, and partial short-circuits. Serious consequences can arise if these failures are not detected in a timely manner [3,4]. As an example, several Boeing 787 aircraft caught fire because of lithium-ion battery failure in 2013, causing the airliners to be grounded [5]. Hence, it is necessary to detect performance degradation early and estimate future battery performance. This is where battery prognostics and health management (PHM) plays a vital role. PHM determines the battery state of health (SOH) and remaining useful life (RUL) using possible failure information in the system, thus yielding improved system reliability and stability over the actual life-cycle of the battery.

Battery PHM and a battery management system (BMS) are important to ensure the reliable and safe functionality of energy storage units [6]. Battery RUL prediction, battery SOH prediction, and battery capacity fade prediction are among the topics which have drawn more attention from researchers in the recent decade [7]. However, these tasks are very difficult, as battery degradation has a complex nature and numerous factors must be taken into consideration [8,9].

#### *1.2. Entropy Measures*

Entropy is a metric for measuring irregularity in time series data, and is used to quantify stochasticity in data analyses [10]. It was first introduced in classical thermodynamics and has since expanded to diverse fields and systems, including chemistry and physics, biological systems, cosmology, economics, sociology, weather science, climate change research, and information systems. Shannon, Permutation, Renyi, Tsallis, Approximate, and Sample entropy are some of the entropy measures regularly in use [11].

Among these measures, permutation entropy (PE) is a simple and robust approach to quantifying the complexity of a non-linear system: it uses the order relations between values of a time series and assigns a probability to each ordinal pattern. The technique is flexible and computationally efficient, and it behaves similarly to Lyapunov exponents over a wide range of parameter values. PE is discussed in more detail in Reference [12]. In this study, the PE of the discharge battery voltage sequences is calculated and used as an input to the proposed models.

#### *1.3. ML and DL Techniques*

Recently, Machine Learning (ML) and Deep Learning (DL) algorithms have found very significant and useful applications in research and practice. These concepts have been used to develop various models for predicting different characteristics in diverse fields. In general, ML and DL algorithms aim to capture information from past data, learn from that data, and apply what they have learned to make informed decisions. Therefore, the associated systems do not need to be explicitly programmed in every respect.

ML is used to synthesize the fundamental relationships between large amounts of data to solve real-time problems such as big-data analytics and the evolution of information [13]. DL, in turn, is able to process a large number of features and, hence, is preferred for huge datasets and unstructured data. DL facilitates the analysis and extraction of important information from raw data using computer systems [14]. Different types and quantities of parameters can be fed to the developed models as inputs to obtain the expected predictive variables as outputs.

Deep Learning techniques, including Long Short-Term Memory (LSTM) [15] and Reinforcement Learning (RL) [16], can fit numerical dependent variables and have great generalization ability, and therefore, are applicable to battery data. The LSTM algorithm, a Deep Learning algorithm with multiple gates, performs on the basis of updating and storing key information in the time series data [15], and is applicable to battery prognostics. The RL algorithm, as one of the latest Deep Learning methods and tools, can simulate the whole system and make intelligent decisions (i.e., charge, replace, repair, etc.) once it is utilized to predict the battery RUL and SOH for the purposes of battery PHM and BMS [16].

#### *1.4. Research Objective*

In this study, the objective is to advance the study of lithium-ion battery performance based on battery SOH and RUL prognostics. To this end, we propose an entropy-based Reinforcement Learning model, predict the next-cycle battery capacity, and compare the numerical results from the proposed entropy-based RL model to those from two other data-driven methods, namely ARIMA and LSTM, both constructed with the same input variable (i.e., the permutation entropy of the voltage sequence at each cycle). The permutation entropy of the battery discharge voltage, as well as the previous battery capacities, is given to these models as input variables. Finally, evaluation metrics such as MSE, MAE, and RMSE are applied to the proposed methods to compare the observed and predicted battery capacities.

Based on Figure 1, the remainder of this work is organized as follows. First, battery data is prepared and provided for the study. The data is then analyzed from different points of view. Based on this analysis, various models are proposed for lithium-ion battery performance using ML and DL techniques. We evaluate and compare the models in detail in the next sections. Finally, conclusions are presented in the last section.

**Figure 1.** Prediction system for the lithium-ion batteries.

#### **2. Related Work**

In the current literature, entropy-based predictive models for battery prognostics, as well as other predictive models, have been researched and tested. Table 1 illustrates a brief overview of some of the most relevant and recently published papers that use data-driven methods for lithium-ion battery prognostics.

**Table 1.** An overview of different approaches to lithium-ion battery prognostics.



The literature review reveals a research gap, which can be summarized as follows. Most of the research undertaken so far has relied on traditional Machine Learning and Deep Learning methods. However, the RL method is recognized as an area with room for exploration. Based on these findings, this paper is devoted to filling this gap in the research. LSTM and ARIMA methods are also studied as state-of-the-art models, which can be developed based on the entropy measures and compared with the RL method.

The main contribution of our study is the proposal of a Reinforcement Learning model based on the permutation entropy of the voltage sequences for predicting the next-cycle battery capacity. To the best of our knowledge, an RL model for lithium-ion battery prognostics using entropy measures as the input has not been previously tested in the literature. Additionally, we compare the numerical results from our proposed entropy-based RL model with the results from the state-of-the-art models (i.e., ARIMA and LSTM), which are built based on entropy measures for a fair and reliable comparison.

#### **3. Data and Battery Specifications**

The datasets used in this study were retrieved from the Center for Advanced Life Cycle Engineering (CALCE) at the University of Maryland [28]. The studied batteries are graphite/LiCoO2 pouch cells with a capacity rating of 1500 mAh, a weight of 30.3 g, and dimensions of 3.4 × 84.8 × 50.1 mm, labeled as PL19, PL11, and PL09. Table 2 shows the number of cycles in each dataset.



Figure 2 illustrates the battery capacities over the number of cycles and indicates the decrease in capacities as the number of cycles increases. It can also be observed that in PL09 and PL19 the capacities are discrete, while in PL11, they vary continuously.

Since the battery capacity and entropy were not observed in all cycles, we have estimated each unrecorded capacity value and its related entropy using the average of its previous and next known capacity and entropy values. By doing so, we have increased the number of data points, and hence, the proposed models can be trained and tested more accurately.
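The gap-filling step described above amounts to simple neighbor averaging. A minimal sketch (the function name is ours, and missing readings are assumed to be marked as `None`, with known values at both ends of each gap) might look as follows:

```python
def fill_missing(values):
    """Impute each missing reading as the mean of its nearest previous and
    next available values, as done here for capacity and entropy series.
    Assumes gaps are interior (known values exist on both sides); note that
    for consecutive gaps the 'previous' value may itself be an imputed one."""
    filled = list(values)
    for i, v in enumerate(filled):
        if v is None:
            prev_known = next(filled[j] for j in range(i - 1, -1, -1)
                              if filled[j] is not None)
            next_known = next(filled[j] for j in range(i + 1, len(filled))
                              if filled[j] is not None)
            filled[i] = (prev_known + next_known) / 2
    return filled
```
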

Figures 3–5 indicate the resultant capacities and entropies after filling the missing data.

**Figure 2.** Capacity vs. Cycle for PL11, PL19, and PL09.

**Figure 3.** Capacity vs. Cycle (**left**) and Entropy vs. Cycle (**right**) for PL19.

**Figure 4.** Capacity vs. Cycle (**left**) and Entropy vs. Cycle (**right**) for PL11.

**Figure 5.** Capacity vs. Cycle (**left**) and Entropy vs. Cycle (**right**) for PL09.

#### **4. Methodology**

The mathematical notations used throughout this paper are summarized in Table 3.


In the following subsections, permutation entropy calculation and the proposed models will be discussed.

#### *4.1. Permutation Entropy*

To compute an order-*D* permutation entropy for a one-dimensional set of time series data with *n* data points, the following steps are taken [29]. First, the data is partitioned into a matrix of delay vectors with *n* − (*D* − 1)*τ* rows and *D* columns, where *τ* is the delay time.

$$V = \begin{bmatrix} v(1) & v(1+\tau) & \dots & v(1+(D-1)\tau) \\ v(2) & v(2+\tau) & \dots & v(2+(D-1)\tau) \\ \vdots & \vdots & & \vdots \\ v(n-(D-1)\tau) & v(n-(D-2)\tau) & \dots & v(n) \end{bmatrix} \tag{1}$$

After rebuilding the data, *π* is defined as the ordinal (permutation) pattern of each row of *V*:

$$\pi = \{l_0, l_1, \dots, l_{D-1}\} = \{0, 1, \dots, D-1\} \tag{2}$$

The relative probability of each permutation in *π* is calculated as below:

$$P(\pi) = \frac{T}{n - D + 1} \tag{3}$$

where *T* is the number of times the permutation is found in the time series. Finally, the relative probabilities are used to compute the permutation entropy:

$$PE = -\sum_{i=1}^{D!} P(\pi_i) \log_2 P(\pi_i) \tag{4}$$

An algorithm for the permutation entropy computation is presented below.
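The steps above (Equations (1)–(4)) can be sketched in a few lines of Python. This is a minimal illustration with names of our own choosing, not the authors' published implementation:

```python
import math

def permutation_entropy(series, D=3, tau=1):
    """Base-2 permutation entropy of a 1-D series for order D and delay tau.
    Each delay vector is mapped to its ordinal pattern (Eqs. (1)-(2)); the
    pattern frequencies give the probabilities of Eqs. (3)-(4)."""
    n = len(series)
    counts = {}
    for i in range(n - (D - 1) * tau):
        window = [series[i + j * tau] for j in range(D)]
        # Ordinal pattern: index order that sorts the window ascending.
        pattern = tuple(sorted(range(D), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

For the classic example series {4, 7, 9, 10, 6, 11, 3} with D = 3 and τ = 1, this yields a permutation entropy of approximately 1.52 bits.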


Permutation entropy of the coarse-grained battery voltage is extracted, as shown in Figure 6. Despite noise in the entropies, in PL11 the deviations in the entropies are relatively small in the earlier cycles and increase as the number of cycles grows. In PL19, the range of entropy is approximately constant across cycles; in PL09, however, the entropies appear completely random.

**Figure 6.** Entropy vs. Cycles for PL11, PL19, and PL09.

After data analysis, we split the data into train and test subsets. The proposed models utilize approximately 90% of the data for training purposes and take the rest for evaluation, as in Figure 7. The mechanism through which the training/test ratio is selected is explained in the following sections.

**Figure 7.** Train–Test split schematic.

#### *4.2. Predictive Models*

The predictive models are presented in this section as follows.

#### 4.2.1. LSTM

Long Short-Term Memory, known simply as LSTM, is a framework for a recurrent neural network (RNN) which avoids the problem of long-term dependency. Unlike standard feedforward neural networks, LSTM has feedback connections, and hence, it can update and store necessary information. It has been widely utilized in time series forecasting in different fields of science in recent years [30].

A unit LSTM cell consists of an input gate *i<sub>t</sub>*, a forget gate *f<sub>t</sub>*, and an output gate *o<sub>t</sub>*. Each gate receives the current input *x<sub>t</sub>*, the previous hidden state *h<sub>t−1</sub>*, and the previous state *c<sub>t−1</sub>* of the cell's internal memory. *x<sub>t</sub>*, *h<sub>t−1</sub>*, and *c<sub>t−1</sub>* are passed through non-linear functions, which yield the updated *c<sub>t</sub>* and *h<sub>t</sub>* [31]. Considering *W<sub>i</sub>*, *W<sub>f</sub>*, *W<sub>o</sub>*, *W<sub>c</sub>* and *U<sub>i</sub>*, *U<sub>f</sub>*, *U<sub>o</sub>*, *U<sub>c</sub>* as the corresponding weight matrices and *b<sub>i</sub>*, *b<sub>f</sub>*, *b<sub>o</sub>*, *b<sub>c</sub>* as the bias vectors, each LSTM cell operates based on the following equations.

$$i_t = \sigma(x_t U_i + h_{t-1} W_i + b_i) \tag{5}$$

$$\tilde{c}_t = \tanh(x_t U_c + h_{t-1} W_c + b_c) \tag{6}$$

$$f_t = \sigma\left(x_t U_f + h_{t-1} W_f + b_f\right) \tag{7}$$

$$c_t = f_t * c_{t-1} + i_t * \tilde{c}_t \tag{8}$$

$$o_t = \sigma(x_t U_o + h_{t-1} W_o + b_o) \tag{9}$$

$$h_t = \tanh(c_t) * o_t \tag{10}$$

In this study, all three gates take the permutation entropy of the battery voltage at cycle *t* and the battery capacity at cycle *t* − 1 as their input variables, *x<sub>t</sub>* and *c<sub>t−1</sub>*, and output the estimated battery capacity, *ŷ*, for the given inputs, as shown in Figure 8. Furthermore, an algorithm is presented for the proposed LSTM model.

**Figure 8.** Schematic of a unit LSTM cell.

#### **Algorithm 2:** LSTM

**Input**: *x* = {*PE*<sub>1</sub>, *PE*<sub>2</sub>, ..., *PE<sub>n</sub>*}: Permutation Entropy of Battery Voltage, and *c<sub>t−1</sub>*;
**Output**: *ŷ* = {*Capacity*<sub>1</sub>, *Capacity*<sub>2</sub>, ..., *Capacity<sub>n</sub>*}: Battery Capacity;
**for** *t* in range(epoch) **do**
&nbsp;&nbsp;**Step 1** Calculate *i<sub>t</sub>*
&nbsp;&nbsp;**Step 2** Determine *c̃<sub>t</sub>*
&nbsp;&nbsp;**Step 3** Calculate *f<sub>t</sub>*
&nbsp;&nbsp;**Step 4** Update *c<sub>t</sub>*
&nbsp;&nbsp;**Step 5** Calculate *o<sub>t</sub>*
&nbsp;&nbsp;**Step 6** Update *h<sub>t</sub>*
&nbsp;&nbsp;**Step 7** Determine the output *ŷ* = LSTMforward(*x*)
&nbsp;&nbsp;**Step 8** Compute the loss function as in Equations (20)–(22)
**end**
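Steps 1–6 of the algorithm are exactly one forward pass through Equations (5)–(10). A scalar sketch (one input feature, one hidden unit, plain floats instead of the vector-valued gates of the actual model; the function name and parameter dictionary are ours) is:

```python
import math

def lstm_cell_forward(x_t, h_prev, c_prev, p):
    """One scalar LSTM step implementing Eqs. (5)-(10).
    p holds weights U_*, W_* and biases b_* for gates i, c, f, o."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    i_t = sig(x_t * p["U_i"] + h_prev * p["W_i"] + p["b_i"])            # Eq. (5)
    c_tilde = math.tanh(x_t * p["U_c"] + h_prev * p["W_c"] + p["b_c"])  # Eq. (6)
    f_t = sig(x_t * p["U_f"] + h_prev * p["W_f"] + p["b_f"])            # Eq. (7)
    c_t = f_t * c_prev + i_t * c_tilde                                  # Eq. (8)
    o_t = sig(x_t * p["U_o"] + h_prev * p["W_o"] + p["b_o"])            # Eq. (9)
    h_t = math.tanh(c_t) * o_t                                          # Eq. (10)
    return h_t, c_t
```

With all parameters at zero, the gates evaluate to σ(0) = 0.5, so the cell state is simply halved at each step, which is a convenient sanity check.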

#### 4.2.2. ARIMA

The Autoregressive Integrated Moving Average (ARIMA) method is a technique for statistical analysis of time series data. An ARIMA model is a combination of the autoregressive (AR) and moving average (MA) models. The ARIMA model can be specified by three parameters, *p*, *d*, and *q*, which define the type of the ARIMA model:

- *p*: the order of the autoregressive part;
- *d*: the degree of differencing;
- *q*: the order of the moving average part.
For AR (*p*), we have:

$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \dots + \phi_p y_{t-p} + \varepsilon_t \tag{11}$$

MA (*q*) can be described as follows:

$$\hat{y}_t = \varepsilon_t - \theta_1 \varepsilon_{t-1} - \theta_2 \varepsilon_{t-2} - \dots - \theta_q \varepsilon_{t-q} \tag{12}$$

ARMA (*p*, *q*) is a combination of AR (*p*) and MA (*q*), and is described as below:

$$y_t = \phi_1 y_{t-1} + \dots + \phi_p y_{t-p} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \dots - \theta_q \varepsilon_{t-q} \tag{13}$$

where *y<sub>t</sub>* and *ŷ<sub>t</sub>*, respectively, are the observed and estimated values; *φ* and *θ*, respectively, are coefficients; and *ε<sub>t</sub>* is a normal white noise process with zero mean.

ARIMA is an advanced version of ARMA, which also works well for non-stationary time series data. To convert the non-stationary data to stationary data, a data transformation is needed using a *d*-order difference equation [32]. Consequently, ARIMA (*p*, *d*, *q*) can be described as Equation (14).

$$w_t = \phi_1 w_{t-1} + \dots + \phi_p w_{t-p} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \dots - \theta_q \varepsilon_{t-q} \tag{14}$$

where *w<sub>t</sub>* = ∇<sup>*d*</sup>*y<sub>t</sub>* and ∇ is the backward difference operator (∇*y<sub>t</sub>* = *y<sub>t</sub>* − *y<sub>t−1</sub>*). When *d* = 0, Equation (14) is the same as Equation (13) and, thus, ARIMA acts the same as ARMA. *p* and *q* are initialized using the autocorrelation function (ACF) and partial autocorrelation function (PACF).

ACF measures the average correlation between data points in a time series and earlier values of the series, computed for different lag lengths. PACF is the same as ACF, except that each correlation controls for any correlation between observations at shorter lags [32].
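The sample ACF used for this order selection can be computed directly; a minimal sketch (the function name is ours) follows:

```python
def acf(x, max_lag):
    """Sample autocorrelation of series x for lags 0..max_lag.
    Lag k correlates x[t] with x[t-k], normalized by the series variance,
    so acf(x, k)[0] is always 1.0."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return [sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / var
            for k in range(max_lag + 1)]
```

Plotting these values against the lag, and the analogous PACF values, is the usual way to read off candidate *p* and *q* orders.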

Figure 9 demonstrates the ARIMA framework from the input data stage through the prediction stage.

**Figure 9.** ARIMA framework.

In this study, an ARIMA model is proposed to predict future battery capacities. Since we are working with a non-stationary time series, we applied a data transformation with *d* = 1. *p* and *q*, respectively, were set to 5 and 0, and thus, predictions were made with ARIMA (5, 1, 0). The rationale behind choosing the order of the ARIMA model is as follows. We compare the results from a range of non-negative integers, *p* ∈ [1, 10] (extracted from the existing literature), and select the optimal number of time lags for the autoregressive model, i.e., the order that results in minimal errors compared to the other orders in that range. The results from the optimal model are displayed and reported here.

There is a battery voltage sequence at each cycle (i.e., a time series of voltages at each cycle). We first compute the permutation entropy of each voltage sequence according to the corresponding algorithm; then, we use the time series of the permutation entropy measures (i.e., one entropy measure at each cycle) as an input in the ARIMA model, compare them with the deviations in the battery capacities, and predict the next-cycle battery capacity as an output of the model.

An algorithm for the ARIMA model is presented as follows.
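The differencing-and-forecast step of the model can be illustrated with a self-contained sketch. The paper uses ARIMA (5, 1, 0); here *p* = 1 is used for brevity, with the AR coefficient fitted by least squares, and the function names are ours:

```python
def difference(y, d=1):
    """Apply the backward difference operator d times: w_t = y_t - y_{t-1}."""
    for _ in range(d):
        y = [y[t] - y[t - 1] for t in range(1, len(y))]
    return y

def arima_110_forecast(y):
    """One-step ARIMA(1, 1, 0) forecast: fit phi on the differenced series
    by least squares, predict the next difference, and integrate back."""
    w = difference(y, d=1)
    num = sum(w[t] * w[t - 1] for t in range(1, len(w)))
    den = sum(w[t - 1] ** 2 for t in range(1, len(w)))
    phi = num / den
    return y[-1] + phi * w[-1]
```

On a linearly fading capacity series the fitted *φ* is 1 and the forecast simply extends the decay, which is a useful sanity check for the implementation.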


#### 4.2.3. Reinforcement Learning

Reinforcement Learning (RL) is a learning paradigm, often implemented with multi-layered neural networks, that has become a focus of research in modern artificial intelligence. The concept is based on rewarding or punishing an agent's performance in a specific environment. A state is a description of the environment that provides the information the agent needs to make a decision at each time step. For each state *s*, the agent has a set of available actions *a* to choose from. A policy is required, based on a cost function, to map each state to the optimal action so as to maximize the reward function over the episode [33].

Reinforcement Learning has real-life applications in various fields, such as autonomous driving, rocket landing, trading and finance, patient diagnosis, and so on. This technique differs from supervised learning, as it does not require correct sets of actions or labeled input/output pairs [34]. Instead, the goal is to find a balance between exploration and exploitation. Figure 10 illustrates the schematic of a general Reinforcement Learning structure, and its governing equations are described as follows.

**Figure 10.** Reinforcement Learning Schematic.

$$a_t \sim \pi(a_t | s_t) \tag{15}$$

$$s_{t+1} \sim f_{state}(s_{t+1} | s_t, a_t) \tag{16}$$

$$r_{t+1} = f_{reward}(s_t, a_t, s_{t+1}) \tag{17}$$

$$R = \sum_{t=0}^{\infty} \gamma^t r_{t+1} \tag{18}$$

$$Q_{s_t, a_t}^{new} = Q_{s_t, a_t}^{old} + \alpha \left( \overbrace{r_t + \gamma \max_a Q_{s_{t+1}, a}}^{Target} - \overbrace{Q_{s_t, a_t}^{old}}^{Prediction} \right) \tag{19}$$

In this study, we have considered the permutation entropy of the battery voltage as the states and the capacities as the actions, which should be taken at each state based on the given entropy. An algorithm for the RL model is presented in the following.
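Under this formulation, the update of Equation (19) can be sketched in tabular form. This is an illustration only: states would be discretized entropy bins and actions discretized capacity values, and since the paper does not spell out its reward function here, the choice of reward is left to the caller:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Tabular Q-learning update of Eq. (19).
    Q is a dict of dicts: Q[state][action] -> value. The temporal-difference
    target is r + gamma * max_a Q[s_next][a]; the table entry moves a
    fraction alpha of the way toward it."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])
    return Q
```

Repeated over many episodes, with states drawn from the entropy sequence, this drives the table toward a policy that picks the capacity (action) with the highest expected return for each observed entropy (state).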

**Algorithm 4:** Reinforcement Learning


The hyperparameters of the proposed models define how they are structured. Optimal hyperparameters are approximated so that the loss is reduced. In other words, we explore various model architectures and search for the optimal values in the hyperparameter space to minimize the resulting performance metrics; for instance, Mean Squared Error. For this purpose, in the three models, grid search is used for tuning the hyperparameters and achieving reliable comparisons between the numerical results from the models. A model is built for each possible combination of all of the hyperparameter values; next, the models are evaluated based on the performance metrics, and then the architecture which produces the best results is selected. The results and findings are reported in the following section.
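The grid search described above is an exhaustive loop over the hyperparameter space. A minimal sketch (function names and the shape of the `evaluate` callback are ours; in practice `evaluate` would train and score one model):

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Build one model per combination of hyperparameter values and keep
    the combination with the lowest evaluation score (e.g., MSE).
    param_grid maps each hyperparameter name to its candidate values."""
    best_params, best_score = None, float("inf")
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        score = evaluate(params)  # train + evaluate a model with these params
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```
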

#### **5. Results and Findings**

The numerical results and findings are presented in this section as follows.

#### *5.1. Performance Measures*

To evaluate the performance of the proposed models, we present the observed and predicted battery capacities for ARIMA and LSTM models and the reward and loss functions obtained from the RL model. Furthermore, we compare the observed and predicted battery capacities gained from each of these models using three performance metrics [35] as shown below:

Mean Squared Error (MSE):

$$\text{MSE} = \frac{1}{n} \sum\_{t=1}^{n} (y\_t - \hat{y}\_t)^2 \tag{20}$$

Mean Absolute Error (MAE):

$$\text{MAE} = \frac{1}{n} \sum\_{t=1}^{n} |y\_t - \hat{y}\_t| \tag{21}$$

Root Mean Squared Error (RMSE):

$$\text{RMSE} = \sqrt{\text{MSE}} = \sqrt{\frac{1}{n} \sum\_{t=1}^{n} (y\_t - \hat{y}\_t)^2} \tag{22}$$

where *yt* and *y*ˆ*t*, respectively, are the observed and predicted capacity at cycle *t*, and *n* is the number of test data.
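Equations (20)–(22) translate directly into code; a small sketch (the function name is ours) is:

```python
import math

def error_metrics(y, y_hat):
    """MSE, MAE, and RMSE of Eqs. (20)-(22) over paired observed (y)
    and predicted (y_hat) capacities."""
    n = len(y)
    mse = sum((a - b) ** 2 for a, b in zip(y, y_hat)) / n
    mae = sum(abs(a - b) for a, b in zip(y, y_hat)) / n
    return mse, mae, math.sqrt(mse)
```
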

#### *5.2. Numerical Results*

The observed and predicted battery capacities resulting from the ARIMA and LSTM models are shown in Figures 11–13. Based on the graphs obtained, it can be seen that in all three datasets the ARIMA model's predictions follow the trends in the test data, and thus it yields better results than the LSTM model for predicting the time series of battery capacities.

**Figure 11.** Train, test, and predicted data results from ARIMA and LSTM models for PL19.

**Figure 12.** Train, test, and predicted data results from ARIMA and LSTM models for PL11.

**Figure 13.** Train, test, and predicted data results from ARIMA and LSTM models for PL09.

Early battery-life prediction, i.e., prediction of battery capacities at earlier cycles, is also performed, and the results are displayed in Figures 14–16. It is observed that the deviations between the predicted and actual capacities are not significant, indicating that the proposed ARIMA and LSTM models are capable of predicting battery capacities at earlier cycles.

**Figure 14.** Train, test, and predicted data results from ARIMA and LSTM models for PL19.

**Figure 15.** Train, test, and predicted data results from ARIMA and LSTM models for PL11.

**Figure 16.** Train, test, and predicted data results from ARIMA and LSTM models for PL09.

In the RL model, as demonstrated in Figure 17, the reward values increase rapidly and soon become stable with some noise. The loss values increase at first; however, after approximately 250 epochs, they decline toward 0, which confirms that the Reinforcement Learning procedure converges.

To find the best data split ratio, our proposed RL approach is initially trained using shuffled datasets with five different training ratios (70%, 75%, 80%, 85%, and 90%). Afterwards, Mean Squared Error (MSE) is utilized as a loss function to evaluate the obtained results. Based on Table 4, the best accuracy is obtained by using 90% of each dataset for training and the rest for testing (Figure 18). Finally, this ratio is applied to training the other two models (LSTM and ARIMA). To save space, the results from the LSTM and ARIMA models are not reported here; they are consistent with those from RL (i.e., a best training ratio of 90%).

**Figure 17.** Reward and Loss Function (RL model).



**Figure 18.** Finding the best Train–Test Split.

#### *5.3. Comparisons*

Tables 5–7 present a comparison of the aforesaid models for the PL19, PL11, and PL09 datasets, respectively. As the results show, in all datasets ARIMA slightly surpasses the LSTM and RL models, as it yields the smallest MSE, MAE, and RMSE values. However, the differences are not significant, and for PL19 and PL11, ARIMA and RL yield approximately the same values of the performance measures. It is concluded that LSTM and RL also result in only minor errors.

**Table 5.** MSE, MAE, and RMSE values for the predictive models (PL19).



**Table 6.** MSE, MAE, and RMSE values for the predictive models (PL11).

**Table 7.** MSE, MAE, and RMSE values for the predictive models (PL09).


From Tables 5–7, it is observed that the ARIMA model yields smaller errors than the LSTM model. ARIMA, which is a mean-reverting process, is able to predict battery capacities with smaller deviations. The LSTM model, a recurrent network, attempts to avoid long-term dependency problems by storing only the necessary information; however, it cannot probabilistically exclude the input (i.e., the previous permutation entropies of the battery voltage sequences) and the recurrent connections to the units of the network from the activation and weight updates while the model is being trained. Consequently, the deviations between the actual and predicted battery capacities resulting from the LSTM model are greater than those resulting from the ARIMA model. The results displayed in Figures 11–13 are consistent with the Tables.

#### **6. Conclusions**

In lithium-ion battery applications, failures in the system can be minimized by performing prognostics and health management. Data-driven methods are one way of doing so, identifying the optimal replacement intervals or the optimal time for changing the battery. This paper presents three different models (LSTM, ARIMA, and RL), all built on the permutation entropies of the battery voltage sequences, for next-cycle battery capacity prediction using the status of the previous states. Different data conditions may call for different models, so having a collection of models, even for the same purpose, can be useful. In addition to the accurate prediction of battery capacities by the ARIMA model, it is shown that the LSTM and the proposed entropy-based RL models have similar performance, and both result in small errors.

**Author Contributions:** Conceptualization, A.N.; methodology, A.N.; software and coding, M.A.S. and A.N.; validation, A.N. and M.A.S.; formal analysis, M.A.S. and A.N.; model and algorithm design, M.A.S. and A.N.; investigation, A.N.; resources, A.N.; data curation, A.N.; writing—original draft preparation, M.A.S. and A.N.; writing—review and editing, A.N. and M.A.S.; visualization, M.A.S. and A.N.; supervision, A.N. and T.S.D.; project administration, A.N. and T.S.D. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data presented in this study are available on request from Dr. Alireza Namdari.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

