Article

A Novel Construction Method and Prediction Framework of Periodic Time Series: Application to State of Health Prediction of Lithium-Ion Batteries

School of Electrical and Control Engineering, North University of China, Taiyuan 030051, China
* Author to whom correspondence should be addressed.
Energies 2025, 18(6), 1438; https://doi.org/10.3390/en18061438
Submission received: 22 February 2025 / Revised: 10 March 2025 / Accepted: 13 March 2025 / Published: 14 March 2025

Abstract

Due to the time-dependent nature of natural phenomena and human activities, time series are very common in our lives. The analysis and study of time series can help us to better understand the world, predict the future, and make scientific decisions. Focusing on time series prediction, in this paper we propose a method for constructing periodic time series from non-periodic time series and design a prediction framework for time series based on the constructed periodic time series. The proposed construction method and prediction framework for periodic time series are then applied to predict the state of health (SOH) of lithium-ion (Li-ion) batteries. The effectiveness of the proposed approach is verified and evaluated on publicly available datasets from the National Aeronautics and Space Administration (NASA) Ames Prognostics Center of Excellence (PCoE) and the Center for Advanced Life Cycle Engineering (CALCE) at the University of Maryland. The experimental results show that the early SOH prediction of Li-ion batteries can be improved by at least one order of magnitude on both the NASA and CALCE battery datasets when using the method proposed in this paper.

1. Introduction

A time series is a sequence of data points in which the values of the same statistical indicator are arranged in chronological order of their occurrence. Time series can reflect the development and change of a phenomenon, thing, or process in the time dimension. Time series data have a wide range of applications in various fields, including finance, meteorology, transportation, engineering, medicine, and more. The universality of time series makes their analysis and study of great significance. Through the analysis of time series, phenomena such as trend, seasonality, cyclicity, and randomness can be revealed, providing powerful support for decision-making, prediction, and scientific research.
Analyses and tasks for time series can generally be divided into five categories: (1) trend analysis [1] determines the long-term trend of time series data, including determining whether the data show an upward or downward trend or remain stable; (2) seasonality analysis [2] identifies the presence of seasonal patterns in a time series, i.e., a pattern of data recurring over a specific time period; (3) prediction of future values [3] models and predicts future values of time series using historical time series data; (4) anomaly detection [4] identifies outliers or unusual patterns in a time series that do not match normal patterns; and (5) cyclicality analysis [5] determines whether there are any cyclical variations in the time series along with the length and magnitude of these cycles. All five of these categories of tasks have important practical value; for example, trend analysis helps to understand the overall direction of data development and provides a basis for long-term planning and decision-making. A typical example is that an enterprise can analyze the trend of sales data to decide whether to expand production or open up new markets.
This paper considers the third category of tasks, i.e., prediction of time series. This can aid in better understanding and grasping the laws that underlie development, helping to make more scientific and rational decisions, improve the efficiency of resource utilization, reduce risks and uncertainties, and promote the sustainable development of society. From the point of view of realization principles, the prediction methods of time series can be broadly classified into three categories: classical traditional algorithms (e.g., the autoregressive integrated moving average (ARIMA) model [6], the Holt–Winters seasonal method [7,8]), machine learning methods (e.g., LightGBM [9], XGBoost [10]), and deep learning methods (e.g., recurrent neural network (RNN) [11], long short-term memory (LSTM) [12], WaveNet [13], transformer [14]). Each of the three types of methods has its own advantages and disadvantages; for instance, traditional algorithms have good interpretability but poor generalization ability, while machine and deep learning methods have high learning freedom but also require large amounts of data. Thus, the selection of time series prediction methods in practical applications needs to be based on the actual scenario.
In order to improve the prediction accuracy of time series, two paths may be followed: first, the prediction models/methods can be improved; second, the time series data can be utilized more effectively. As mentioned earlier, periodicity is an important feature of time series, and its rational use can effectively improve prediction accuracy. Based on this, the prediction of periodic time series has been studied by many scholars. For example, ref. [15] proposed a multiseries featural LSTM based on a template-matching method to extract specific periodic characteristics and restack the 1D time series into multiseries data, followed by period correction to reduce the iteration errors. Chen et al. proposed a periodicity-based parallel time series prediction algorithm for large-scale time series data implemented in the Apache Spark cloud computing environment [16]. Wang et al. proposed an approach based on LSTM networks to predict periodic energy consumption, which they used to predict the performance of a cooling system in comparison with the autoregressive moving average (ARMA) model and a backpropagation neural network (BPNN) [17].
Although the periodicity of time series can contain a great deal of useful information and has many benefits for time series prediction, not all time series are significantly periodic. In order to deal with this problem, we propose a simple method for constructing time series with periodicity from time series that are not significantly periodic. We then propose a time series prediction framework adapted to the constructed periodic time series. Li-ion batteries have become widely used in various aspects of life. Battery SOH is a pivotal indicator, and its accurate prediction is essential for the safe utilization, management, and maintenance of Li-ion batteries; the capacity decline and power decrease that occur during the use of such batteries inevitably affect their safety performance. Considering that capacity degradation data of Li-ion batteries typically consist of non-periodic time series and that numerous open-source datasets are available for Li-ion batteries, the proposed method for constructing periodic time series and the accompanying prediction framework are applied to SOH prediction for Li-ion batteries based on the NASA and CALCE battery datasets. Many works have used the degradation data of Li-ion batteries as time series for prediction tasks [18,19,20,21,22,23]. For example, Lin et al. proposed a time series model for battery degradation paths that resembles experimental cycle-aging data and produces confidence intervals for the battery's remaining useful life [23]. However, these works are dedicated to the development or design of advanced prediction models or advanced time series analysis methods for predicting the health indicators of Li-ion batteries. Unlike these previous studies, we are not attempting to improve prediction accuracy by using different prediction models or methods; rather, our concern is with utilizing time series data in a more effective and simple way.
By combining the proposed method for constructing long-term periodic time series with the corresponding prediction framework, this paper provides a way to apply periodic time series prediction to non-periodic time series data. In doing so, prediction methods designed for long-term time series become applicable to short-term time series. Although a number of frameworks have been proposed for the health management of Li-ion batteries [24,25], such as the cyber hierarchy and interactional network (CHAIN) [26,27], the construction method and prediction framework for periodic time series proposed in this paper are not limited to the field of Li-ion batteries or the field of new energy, and have a wider scope of application.
The main contributions of this paper are as follows: (1) a method for constructing periodic long-term time series based on non-periodic short-term time series; (2) a prediction framework for the constructed periodic time series; (3) a method for fusing prediction results from the two prediction models; and (4) application of the proposed construction method, prediction framework, and fusion method for periodic time series to SOH prediction of Li-ion batteries based on two public datasets.
The rest of this paper is organized as follows: Section 2 describes the proposed method for constructing periodic time series; the proposed prediction framework based on the constructed periodic time series is shown in Section 3; in Section 4, we apply the proposed construction method and prediction framework for periodic time series to the task of predicting the SOH of Li-ion batteries based on two public datasets; finally, we conclude this paper and discuss possible future directions in Section 5.

2. Construction of Periodic Time Series

Although time series are widespread in many areas of human production and life, not all time series are clearly cyclical. For instance, time series representing the capacity of Li-ion batteries have a clear downward trend over time and do not possess clear periodicity, which leads to the inapplicability of numerous theories, techniques, and methods for periodic time series. To solve this problem, a method for converting non-periodic time series into periodic time series is proposed in this section.
By analogy, an individual soldier usually has his own personality, but when a group of soldiers forms an army, the army embodies strict discipline and achieves a combat power greater than the sum of its parts. Inspired by this, time series without periodicity (individual soldiers) can be combined in a reasonable way (forming an army) into a time series with periodicity (discipline). The method proposed in this paper for constructing a non-periodic time series into a periodic time series is shown in Figure 1.
In Figure 1, there are four non-periodic time series, labeled Time Series 1–4, all of which exhibit a degrading trend. A long-term time series can be constructed by connecting them end to end, after which the constructed long-term time series has a distinct periodicity. Clearly, the method proposed in this section is also a method for constructing long-term time series from short-term time series, which makes prediction models/methods applicable to periodic long-term time series also available for short-term time series, thereby extending their applicability. It should be noted that although the proposed construction method is illustrated in Figure 1 using time series with a degradation trend as an example, the method is applicable beyond such time series. It only requires that the lengths of the original time series be similar, and imposes no requirement on the trend or smoothness of the time series. As a result, the proposed method has a wide range of applicability. In addition, the connected time series are required to be generated by the same object or by objects of the same class, which ensures that the connected time series have a certain correlation or similarity.
As shown in Figure 1, the idea behind the time series construction method proposed in this paper is very simple and easy to understand. However, there are three issues that need to be considered in practical applications:
1. How many time series should be chosen to form a periodic long-term time series? (Quantitative issue)
2. Which original time series should be selected to form the long-term time series? (Selection issue)
3. For the selected time series, how should they be connected? (Sorting issue)
For the quantitative issue, the key consideration is the method used to process the time series. For example, if the method employed is one that applies to long-term time series (e.g., Informer [28], TimesNet [29], SegRNN [30], PatchMixer [31]), then a larger number of original time series can be selected. Of course, the length of the original time series itself also has to be taken into account. The main criterion is the length of the constructed long-term time series: for the same method or model dealing with long-term time series, if the original time series are short, then more of them need to be selected; conversely, if the original time series are inherently longer, then fewer of them can be selected.
For the selection issue, once the number of original time series has been determined, a natural idea is the stochastic selection method, i.e., the required number of original time series are randomly selected from the set of original time series. Specifically, all of the original time series in the dataset are numbered, a group of random numbers of the required size is generated by a random number generation algorithm, and the original time series corresponding to these random numbers are selected from the dataset. Another method is selection based on correlation, i.e., the required number of original time series with the strongest correlation to the time series under study are selected. Logically, the correlation selection method seems more reasonable; however, randomness also plays an unexpected role in many cases, which may make the stochastic selection method reasonable as well. Thus, we study both selection methods in detail through the practical application scenarios in Section 4.
For the sorting issue, there are two approaches. First, the selected original time series can be sorted randomly, with the tail of one original time series directly connected to the head of the next. We refer to this as the stochastic sorting method. Second, the selected original time series can be sorted according to their correlation with the time series to be studied, i.e., arranged from lowest to highest correlation (or highest to lowest) and then connected. We refer to this as the correlation sorting method. A simple schematic diagram of the correlation sorting method is shown in Figure 2.
As shown in Figure 2, we want to predict Time Series 5, i.e., to obtain its future unknown values (red dotted line) based on Time Series Segment 5 (red solid line), where the cyan pentagram represents the prediction starting point. Assuming that three original time series can be selected for predicting Time Series 5 based on the prediction model/method used, the correlation selection method chooses the three original time series with the highest correlation with Time Series Segment 5 (note that this is not Time Series 5, as only the values of Time Series Segment 5 are currently known), i.e., Time Series 1, 3, and 4. Finally, according to the correlation of Time Series Segments 1, 3, and 4 with Time Series Segment 5, from low to high, Time Series 1, 3, and 4 are arranged in the order 4, 1, 3 and connected with Time Series Segment 5 to form the long-term time series shown in Figure 2.
In summary, by addressing the three main issues of quantity, selection, and sorting, the original short-term time series can be converted into a long-term time series with periodicity. The complete process is shown in Figure 3, where the three stages correspond to the three issues listed above. The original data consist of the original time series provided by the prediction task, and all the original data make up the original dataset; the constructed data consist of the periodic time series generated using the proposed construction method based on the original time series, and all the constructed data together make up the constructed dataset. It should be noted that in the selection stage, the correlation selection method only selects the original time series based on their correlation with the time series under study, while in the sorting stage the correlation sorting method sorts and connects the selected original time series according to their correlation with the time series under study, either from high to low or from low to high. Thus, although both the correlation selection method and the correlation sorting method are based on the correlation with the time series under study, their purpose and operation are not the same.
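To make the three stages concrete, the following is a minimal Python sketch of the construction procedure, assuming each original time series is a 1D NumPy array of capacity (or SOH) values; the function names, the dictionary-based dataset layout, and the `pearson` helper are illustrative choices rather than part of the original method description.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation over the overlapping prefix of two 1D arrays (Equation (1))."""
    n = min(len(a), len(b))
    return np.corrcoef(a[:n], b[:n])[0, 1]

def construct_periodic_series(candidates, target_segment, k,
                              select="correlation", sort="correlation",
                              ascending=True, seed=0):
    """Build one periodic long-term series from k short original series.

    candidates:     dict {name: 1D array}, the original dataset (other objects
                    of the same class as the series to be predicted)
    target_segment: the known head of the series to be predicted
    select / sort:  "correlation" or "stochastic" (the two options of Section 2)
    """
    rng = np.random.default_rng(seed)
    names = list(candidates)
    corr = {n: pearson(candidates[n], target_segment) for n in names}

    # Selection stage: which k original series to use
    if select == "stochastic":
        chosen = list(rng.choice(names, size=k, replace=False))
    else:
        chosen = sorted(names, key=corr.get, reverse=True)[:k]

    # Sorting stage: the order in which the chosen series are connected
    if sort == "stochastic":
        ordered = list(rng.permutation(chosen))
    else:
        ordered = sorted(chosen, key=corr.get, reverse=not ascending)

    # Connection: tail of one series to the head of the next,
    # with the known target segment appended last
    long_series = np.concatenate([np.asarray(candidates[n]) for n in ordered]
                                 + [np.asarray(target_segment)])
    return long_series, ordered
```

Setting `select` and `sort` to "correlation" or "stochastic" reproduces the selection and sorting options discussed above.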
Although this paper focuses on the problem of time series prediction, we believe that the method of constructing periodic long-term time series proposed in this section can also be useful for accomplishing other tasks involving time series. The following Section 3 presents the proposed framework for predicting the periodic long-term time series constructed in this section.

3. Prediction Framework for Periodic Time Series

The previous Section 2 proposed a method for converting non-periodic short-term time series into periodic long-term time series. Based on the resulting periodic long-term time series, a prediction framework for periodic time series is proposed in this section; it is shown in Figure 4 and illustrated with a data-based method.
The existing work on data-based time series prediction can be summarized by the part contained in the black dashed box in Figure 4. This involves first training the prediction model (Prediction Model I, PM I) using the data in the original dataset. The original data are fed into the trained PM I, which provides the corresponding prediction value (Prediction Value I, PV I). Here, PM I can be a model obtained through data training or a model designed on the basis of some mechanism.
The proposed prediction framework improves upon this traditional framework. As shown in Figure 4, the method in Section 2 is first used to construct numerous periodic long-term time series from the original time series, which together compose a long-term time series dataset. This dataset is used to train the models denoted Prediction Model II (PM II) and Prediction Model III (PM III); PM II specializes in the prediction of long-term time series, while PM III specializes in the prediction of periodic time series. On this basis, the constructed periodic long-term time series are input to PM II and PM III, then the predicted values of PM II and PM III (Prediction Value II, PV II and Prediction Value III, PV III) are fused through Fusion I to obtain Fusion Prediction Value I (FPV I). Finally, fusing FPV I and PV I through Fusion II provides the final predicted value, i.e., Fusion Prediction Value II (FPV II). It should be noted that Fusion I and Fusion II can be either neural networks or traditional fusion methods. The proposed prediction framework does not mandate which methods must be used as Fusion I and Fusion II, nor does it restrict how the fusion weights are designed; the specific fusion methods adopted are closely related to the actual usage scenario, and as such there is no fixed rule.
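Because the framework leaves Fusion I and Fusion II open, the following sketch only illustrates one possible choice, a plain weighted average; the weights `w` and the function names are assumptions for illustration, not a prescribed part of the framework.

```python
import numpy as np

def fusion_i(pv_ii, pv_iii, w=0.5):
    """Fusion I: combine the long-term model output (PV II) and the periodic
    model output (PV III) into FPV I via a weighted average."""
    return w * np.asarray(pv_ii) + (1.0 - w) * np.asarray(pv_iii)

def fusion_ii(pv_i, fpv_i, w=0.5):
    """Fusion II: combine the traditional prediction PV I with FPV I into FPV II.
    With w = 1 the framework reduces exactly to the traditional framework."""
    return w * np.asarray(pv_i) + (1.0 - w) * np.asarray(fpv_i)

# fpv_ii = fusion_ii(pv_i, fusion_i(pv_ii, pv_iii))
```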
As can be seen from the whole workflow of the prediction framework, we do not discard the existing traditional time series prediction framework, nor do we waste existing time series prediction methods; rather, we integrate them into our prediction framework as one of its parts. Theoretically, the prediction accuracy of the proposed framework must be no weaker (and usually better) than that of the traditional framework, which is easy to understand: the prediction results of the traditional framework can be recovered by simply setting the weight of PV I in Fusion I and Fusion II to 1, whereas FPV II will outperform PV I in terms of accuracy so long as PV II and PV III play an active role in FPV II. Put more simply, the traditional prediction framework is a special case of the proposed prediction framework in which the weight of PV I in Fusion I and Fusion II is set to 1. In addition, a number of excellent prediction models/methods specialize in both periodic time series and long-term time series; for such models/methods, the part contained in the brown rounded rectangle in Figure 4 can be replaced with the component shown in Figure 5. In short, PM II and PM III are replaced by Prediction Model IV (PM IV), while FPV I coincides with Prediction Value IV (PV IV).
Of course, the advantage in prediction accuracy offered by the proposed framework comes at a cost, namely, the use of multiple prediction models and of one or two fusion techniques. To reduce this cost, simple prediction models can be chosen as PM II and PM III, and simpler fusion techniques (Fusion I and Fusion II) are recommended as well. Under this guideline, two effects can be produced: (1) in the first scenario, all prediction models (PM I, PM II, PM III) in the framework are simple prediction models, and results comparable to complex advanced models are achieved by combining simple models; (2) in the second scenario, simple models/methods are selected for PM II and PM III as well as for Fusion I and Fusion II, while PM I is a complex/advanced prediction model. In this case, applying the framework can further improve the prediction accuracy of the complex/advanced model PM I in a simple way.

4. SOH Prediction of Li-Ion Batteries

4.1. Preliminaries

4.1.1. Dataset Description

To verify the feasibility and generalization of the construction method and prediction framework of periodic time series proposed in this paper, the SOH of Li-ion batteries is predicted in this section based on the capacity degradation data of Li-ion batteries provided in the NASA and CALCE datasets.
All batteries are charged and discharged cyclically using a strategy of constant current charging, constant voltage charging, and constant current discharging. The NASA battery dataset is widely used for health state prediction of Li-ion batteries. The capacity degradation data of three groups of batteries were selected for the tests; each group contains four batteries of the same type, with the capacity degradation curves shown in Figure 6. The specific charge and discharge details of the selected batteries from the NASA dataset are shown in Table 1.
To illustrate the generalizability of the proposed method, batteries CS2_35, CS2_36, CS2_37, and CS2_38 from the CALCE battery dataset were also selected as experimental subjects. Unlike the batteries in the NASA dataset, the selected batteries from the CALCE dataset were all measured under the same experimental conditions. The reason for this is to show that the degree of battery degradation is not identical even when the same type of battery operates normally in the same environment. The capacity degradation curve is shown in Figure 7. The specific charging and discharging details of the selected CALCE batteries are also shown in Table 1.

4.1.2. Correlation

As described in Section 2, one way of sorting the original time series is based on their correlation with the time series to be studied. In this section, the Pearson correlation coefficient is used to characterize the strength of the correlation between the degradation data of different Li-ion batteries, i.e.,
$$\rho_{x,y} = \frac{\operatorname{cov}(x, y)}{\sigma_x \sigma_y}, \qquad (1)$$
where $x$ and $y$ are two variables, $\operatorname{cov}(x, y)$ is the covariance between $x$ and $y$, and $\sigma_x$ and $\sigma_y$ are their respective standard deviations.
When the absolute value of the Pearson correlation coefficient is exactly 1, all data points lie exactly on a straight line, while a correlation coefficient equal to 0 indicates that there is no linear relationship between the variables. A positive coefficient indicates a linear positive correlation between the variables; conversely, a negative coefficient indicates a negative (inverse) linear correlation between the variables.
Taking the NASA battery dataset as an example, the data can be divided into two groups according to the degradation mode: B0005–B0018 (red lines) and B0045–B0056 (blue lines), as can be seen in Figure 6. Because the lengths of the Li-ion battery degradation data are not consistent, the data for battery B0005 are taken as the test case. According to the length of these data, the first 40 data points of every battery are selected to perform the correlation analysis, and the results are shown in Figure 8.
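As an illustrative sketch only (the data-loading step and names are assumptions), the pairwise correlations underlying the heat map in Figure 8 can be computed as follows, given a dictionary mapping battery IDs to their capacity arrays:

```python
import numpy as np

def correlation_matrix(capacity, n_points=40):
    """Pairwise Pearson correlation of the first n_points of every battery.

    capacity: dict {battery_id: 1D capacity array}, assumed to be loaded from
    the NASA data elsewhere. Returns the ordered ids and the matrix that the
    heat map in Figure 8 visualizes.
    """
    ids = sorted(capacity)
    data = np.vstack([capacity[b][:n_points] for b in ids])  # one row per battery
    return ids, np.corrcoef(data)  # np.corrcoef applies Equation (1) row-wise
```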

4.1.3. Gated Recurrent Unit

In this section, a gated recurrent unit (GRU) is used as the prediction model for the Li-ion batteries. The GRU is a variant of the RNN that builds on the LSTM and can alleviate problems such as the inability to retain long-term memory and the gradient problems that arise during backpropagation in RNNs. The update gate and reset gate are the two main gates of the GRU. The input gate and forget gate of the LSTM are combined to form the update gate, which specifies how much of the previous memory is retained up to the current time step, while the reset gate determines how to merge the new input with the old memory. The internal structure of the GRU is shown in Figure 9. The calculation process can be described as follows:
$$r_t = \sigma\left(W_r \cdot [h_{t-1}, x_t] + b_r\right) \qquad (2)$$
$$z_t = \sigma\left(W_z \cdot [h_{t-1}, x_t] + b_z\right) \qquad (3)$$
$$\tilde{h}_t = \tanh\left(W_h \left(r_t \odot h_{t-1}\right) + W_h x_t + b_h\right) \qquad (4)$$
$$h_t = z_t \odot h_{t-1} + \left(1 - z_t\right) \odot \tilde{h}_t \qquad (5)$$
where $r_t$, $z_t$, $\tilde{h}_t$, and $h_t$ are the reset gate state, update gate state, candidate hidden state, and hidden state, respectively; $W_r$, $W_z$, and $W_h$ are the weights of the reset gate, update gate, and candidate hidden state, respectively; and $b_r$, $b_z$, and $b_h$ are the corresponding bias terms. The parameters of the GRU used in this section are shown in Table 2.
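For readers who prefer code, the following is a minimal NumPy sketch of one GRU update following Equations (2)–(5); splitting each weight into an input part (`W_*`) and a hidden part (`U_*`) is an equivalent, illustrative parameterization of the concatenated form used in the equations, and the parameter container is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU update following Equations (2)-(5); shapes are illustrative."""
    W_r, U_r, b_r = params["W_r"], params["U_r"], params["b_r"]
    W_z, U_z, b_z = params["W_z"], params["U_z"], params["b_z"]
    W_h, U_h, b_h = params["W_h"], params["U_h"], params["b_h"]

    r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)                # reset gate, Eq. (2)
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)                # update gate, Eq. (3)
    h_tilde = np.tanh(U_h @ (r_t * h_prev) + W_h @ x_t + b_h)    # candidate state, Eq. (4)
    h_t = z_t * h_prev + (1.0 - z_t) * h_tilde                   # hidden state, Eq. (5)
    return h_t
```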

4.1.4. Evaluation Metrics

In this paper, four error assessment metrics are used to quantitatively assess the output of the prediction model, namely, the root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE), which are defined as follows [32,33,34]:
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^2} \qquad (6)$$
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|x_i - \hat{x}_i\right| \qquad (7)$$
$$MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{x_i - \hat{x}_i}{x_i}\right| \times 100 \qquad (8)$$
$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^2 \qquad (9)$$
where $x_i$ denotes the true value of the SOH and $\hat{x}_i$ denotes the predicted value. Smaller values indicate higher prediction accuracy; in particular, the MSE is used as the loss function for training the GRU.
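A small sketch of these four metrics in Python (the function name and the dictionary output format are illustrative):

```python
import numpy as np

def soh_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE, and MSE as defined in Equations (6)-(9)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    return {
        "RMSE": float(np.sqrt(mse)),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(np.abs(err / y_true)) * 100),
        "MSE": float(mse),
    }
```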

4.2. SOH Prediction Based on the Constructed Periodic Time Series

The aging of Li-ion batteries is usually characterized by the decay of the available capacity, which is used to define the SOH:
$$SOH = \frac{C_r}{C_n} \qquad (10)$$
where $C_r$ and $C_n$ denote the current maximum available capacity and the nominal capacity, respectively.
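For completeness, Equation (10) amounts to a one-line computation; the sketch below assumes that the per-cycle maximum available capacities have already been extracted from the dataset, and the example nominal capacity is an illustrative assumption.

```python
def soh(capacity_per_cycle, nominal_capacity):
    """SOH of each cycle: current maximum available capacity divided by the
    nominal capacity (Equation (10))."""
    return [c / nominal_capacity for c in capacity_per_cycle]

# e.g., soh(capacities_b0005, 2.0), assuming a 2 Ah nominal capacity
# (an assumption for illustration; use the value given by the dataset).
```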
In this subsection, the method for constructing periodic time series proposed in Section 2 is applied to predicting the SOH of Li-ion batteries based on the capacity degradation data provided by NASA and CALCE. As described in Section 2, there are two selection methods and two sorting methods for the original time series, i.e., the stochastic selection method, correlation selection method, stochastic sorting method, and correlation sorting method. Thus, four strategies for constructing periodic time series can be obtained by combining the different selection and sorting methods, as follows:
  • Strategy I: correlation selection method + correlation sorting method
  • Strategy II: correlation selection method + stochastic sorting method
  • Strategy III: stochastic selection method + correlation sorting method
  • Strategy IV: stochastic selection method + stochastic sorting method
where ‘+’ denotes the combination of a selection method with a sorting method.
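In terms of the construction sketch given after Section 2, the four strategies are simply the four possible (selection, sorting) pairs; the dictionary below is an illustrative way to enumerate them, and `construct_periodic_series` refers to that earlier hypothetical sketch.

```python
# Illustrative enumeration of the four strategies as (selection, sorting) pairs,
# reusing the construct_periodic_series sketch from Section 2 (names assumed):
strategies = {
    "Strategy I":   dict(select="correlation", sort="correlation"),
    "Strategy II":  dict(select="correlation", sort="stochastic"),
    "Strategy III": dict(select="stochastic",  sort="correlation"),
    "Strategy IV":  dict(select="stochastic",  sort="stochastic"),
}

# Example (hypothetical variable names): build the series used to predict B0005.
# long_series, order = construct_periodic_series(
#     candidates, b0005_head, k=3, **strategies["Strategy I"])
```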
Taking the NASA battery dataset as an example, after selecting and sorting the original data using the corresponding strategy, we use the constructed data with a small portion of the data for battery B0005 as the training set and the remaining data of battery B0005 as the test set. Then, the GRU model is applied to predict the SOH of Li-ion batteries. The data division and prediction results are shown in Table 3 and Figure 10, where Normal represents the prediction result obtained directly from the original single time series.
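The GRU itself can be implemented with any deep learning toolkit; the following is a hedged PyTorch sketch of a one-step-ahead GRU regressor trained on a constructed periodic series with the MSE loss and Adam optimizer listed in Table 2. The window length, class name, and helper function are illustrative assumptions rather than the exact configuration used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

class GRUPredictor(nn.Module):
    """One-step-ahead SOH regressor: a single GRU layer followed by a linear head."""
    def __init__(self, hidden_size=100):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.dropout = nn.Dropout(0.2)          # dropout rate from Table 2
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                        # x: (batch, window, 1)
        out, _ = self.gru(x)
        return self.head(self.dropout(out[:, -1, :]))

def make_windows(series, window=8):
    """Slice a 1D series into (input window, next value) supervised pairs.
    The window length is an illustrative choice."""
    xs = [series[i:i + window] for i in range(len(series) - window)]
    ys = [series[i + window] for i in range(len(series) - window)]
    x = torch.tensor(np.stack(xs), dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(np.asarray(ys), dtype=torch.float32).unsqueeze(-1)
    return x, y

# Training on a constructed periodic series with the settings of Table 2 (sketch):
# x, y = make_windows(long_series)
# model = GRUPredictor()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# for epoch in range(500):
#     optimizer.zero_grad()
#     loss = nn.functional.mse_loss(model(x), y)
#     loss.backward()
#     optimizer.step()
```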
From Table 3 and Figure 10, when the prediction starting point is early (e.g., starting points 1, 3, 5, and 10 in Table 3), the proposed strategies comprehensively outperform the traditional prediction model that uses only a single original time series. The main reason is that the constructed data, taken as a whole, are periodic. When the prediction starting point is located after the 34th cycle (20% of the total data), the traditional prediction strategy outperforms Strategy III and Strategy IV, but is still weaker than Strategy I and Strategy II. From a qualitative point of view, the traditional prediction model follows the logical expectation that the prediction error decreases as the percentage of training data increases; for example, its RMSE drops from an initial 3.8502 to 0.012818, a decline of about 99%. Throughout the experiments, however, the proposed strategies exhibit the opposite trend: as the prediction starting point is moved back, the prediction accuracy starts to weaken instead. Taking the time series 18-6-7-5 of Strategy I as an example, we select a group of batteries with the same degradation trend as battery B0005, then connect and arrange them in ascending order of correlation; the specific idea here is to reduce the influence of the most relevant data on the model and thus reduce the model's dependence on the training data. It can be seen that the RMSE increases from an initial 0.0045524 to 0.0096346 when the starting point is 68 (located at 40% of the total data), an increase of 0.0050822, or about 1.12 times the initial value. Similar phenomena were observed in the other experiments. The reason is that the periodicity of the constructed data causes the prediction model to treat the starting point of the test set as the starting point of a new cycle, which results in a larger bias in the first few prediction steps and thus a larger prediction error. On the other hand, comparing the prediction results of the two time series 7-6-18-5 and 18-6-7-5 shows that connecting the original time series in order of correlation from highest to lowest is better for SOH prediction when the prediction starting point is very early, such as starting point 1 or 3; otherwise, connecting them in order of correlation from lowest to highest is better.
Moreover, the poor performance of Strategy III and Strategy IV shows that the correlation between data should be fully considered when choosing the data to be connected. Taking battery B0005 as an example, the capacity degradation curves in Figure 6 show that batteries B0006, B0007, and B0018 have degradation trends similar to that of battery B0005. This is further confirmed by the correlation heat map in Figure 8, which demonstrates that battery B0005 has a strong correlation with batteries B0006, B0007, and B0018; this is because these batteries are of the same type and were charged and discharged under similar experimental conditions. Therefore, when predicting the SOH of battery B0005, connecting the data of batteries B0006, B0007, and B0018 to that of battery B0005 is likely to yield better prediction results. In fact, the largest difference between Strategy I and the other strategies, including Strategy II and Strategy III, is that Strategy I utilizes correlation more fully, while the other strategies utilize only partial or no correlation; thus, Strategy I exhibits higher prediction accuracy. For the same group of batteries, taking prediction starting point 3 as an example, the descending correlation ordering, i.e., 7-6-18-5, provides the best prediction results. When data from dissimilar groups of batteries are mixed in, such as 7-6-56-5 (dissimilar group I in Figure 10) or 7-6-56-18-5 (dissimilar group II in Figure 10), the results are slightly inferior to the original strategy even though they are arranged according to the same rules. When the stochastic sorting method is used, the errors are larger still. At the same time, the prediction results for starting point 1 show that including the data of dissimilar batteries results in lower stability. In summary, the proposed method is most suitable for early SOH prediction of Li-ion batteries, i.e., cases where the prediction starting point is relatively early. For the selection of connected data, data from the same group should be prioritized over data from dissimilar groups, and the correlation sorting method is superior to the stochastic sorting method.
The SOH prediction results for the CALCE battery dataset are shown in Table 4 and Figure 11, from which it can be seen that the prediction error improves by less than 0.002 between prediction starting point 1 and prediction starting point 354 (located at 40% of the total data); i.e., there is only a small improvement in prediction accuracy even when the prediction starting point of the test data is delayed very substantially. These results show that the proposed method offers significant advantages in early SOH prediction of Li-ion batteries.
From Table 3 and Table 4, showing the results for the NASA and CALCE battery datasets, respectively, the early SOH prediction of Li-ion batteries can be improved by at least an order of magnitude using the method proposed in this paper, where early SOH prediction refers to a prediction starting point no greater than 10. Specifically, for the NASA battery dataset, the prediction accuracy can be improved by three orders of magnitude when the prediction starting point is 1 and by two orders of magnitude when the prediction starting point is 3 or 10. For the CALCE battery dataset, the prediction accuracy can be improved by three orders of magnitude when the prediction starting point is 1 and by one order of magnitude when the prediction starting point is 10.
On the other hand, according to the prediction results in Table 3 and Table 4 along with Figure 10 and Figure 11, Strategy I presents the best SOH prediction accuracy among the four strategies on both the NASA and CALCE battery datasets, which shows that correlation can be helpful in improving prediction accuracy. Therefore, the reasonable use of correlation between time series may be a direction for improving prediction performance on time series data; the method of long-term time series construction proposed in this paper provides one way of utilizing this correlation.
To show the generality of the proposed periodic time series construction method, we also used LSTM instead of GRU for SOH prediction of Li-ion batteries on the NASA and CALCE battery datasets. The LSTM parameters were the same as those of the GRU shown in Table 2. In the experiments, all prediction starting points were 10 and Strategy I was used for both datasets. The data division and prediction results are shown in Table 5 and Figure 12, from which it can be seen that the GRU has higher prediction accuracy than the LSTM under the same settings on both datasets. This result is predictable, as the GRU is intended as an improvement upon the LSTM. More notably, the LSTM combined with the proposed construction method (LSTM-Strategy I) exceeds the prediction accuracy of the GRU applied directly to the original data. This demonstrates that the proposed method improves prediction accuracy by utilizing the time series data in a more effective way.

4.3. SOH Prediction Based on the Prediction Framework for Periodic Time Series

In this subsection, the SOH of Li-ion batteries is predicted on the NASA battery dataset using the prediction framework for periodic time series proposed in Section 3. In particular, the part contained in the brown rounded rectangle in Figure 4 is replaced with the component shown in Figure 5, where the GRU model is used as Prediction Model I and Prediction Model IV. The approach used for Fusion II in this section is the stochastic weighted average method, whose schematic diagram is shown in Figure 13.
As shown in Figure 13, Prediction Model I and Prediction Model IV respectively obtain Prediction Value I and Prediction Value IV, after which Fusion Prediction Value II is obtained based on the weighted average of Prediction Value I and Prediction Value IV, i.e.,
$$\text{Fusion Prediction Value II} = \text{Prediction Value I} + \alpha \times \text{Prediction Value IV}, \qquad (11)$$
where $\alpha$ is the weight to be determined. From another point of view, Equation (11) shows that Fusion Prediction Value II can be seen as a modification of Prediction Value I with the help of Prediction Value IV. In any case, the key for Fusion Prediction Value II is how the weight $\alpha$ is obtained.
Taking battery B0006 as an example, to obtain the weight $\alpha$ when predicting battery B0006, denoted as $\alpha_{B6}$, we first need to study the case of predicting battery B0005, since batteries B0005 and B0006 belong to the same group (refer to Figure 6). The process of obtaining $\alpha_{B6}$ from the data for battery B0005 is as follows (a code sketch is given after the list):
1. Use the GRU to predict the SOH of battery B0005 based on the original time series and the constructed periodic time series, respectively, providing Prediction Value I and Prediction Value IV for battery B0005.
2. Based on the real values for battery B0005 along with Prediction Value I and Prediction Value IV, it is easy to calculate the weight for battery B0005, denoted as $\alpha_{B5}$, where $\alpha_{B5} = \frac{\text{real value} - \text{Prediction Value I}}{\text{Prediction Value IV}}$. Taking the example of predicting 100 points, $\alpha_{B5}$ can be expressed as
$$\alpha_{B5} = \left[\alpha_{B5}(1), \alpha_{B5}(2), \ldots, \alpha_{B5}(100)\right]. \qquad (12)$$
3. Divide $\alpha_{B5}$ into five groups of 20 weight values each, i.e., $\{\alpha_{B5}(1), \ldots, \alpha_{B5}(20)\}$, ⋯, $\{\alpha_{B5}(81), \ldots, \alpha_{B5}(100)\}$.
4. Obtain the minimum and maximum values of each group, i.e.,
$$\underline{\alpha}_{B5}^{\,i} = \min\left\{\alpha_{B5}((i-1)\cdot 20 + 1), \ldots, \alpha_{B5}((i-1)\cdot 20 + 20)\right\}, \quad \overline{\alpha}_{B5}^{\,i} = \max\left\{\alpha_{B5}((i-1)\cdot 20 + 1), \ldots, \alpha_{B5}((i-1)\cdot 20 + 20)\right\}, \qquad (13)$$
where $i = 1, \ldots, 5$.
5. Take 20 random values in $\left[\underline{\alpha}_{B5}^{\,i}, \overline{\alpha}_{B5}^{\,i}\right]$ for each $i \in \{1, \ldots, 5\}$ to obtain the new weight
$$\tilde{\alpha}_{B5} = \left[\tilde{\alpha}_{B5}(1), \tilde{\alpha}_{B5}(2), \ldots, \tilde{\alpha}_{B5}(100)\right]. \qquad (14)$$
6. By weighting $\alpha_{B5}$ in (12) and $\tilde{\alpha}_{B5}$ in (14), the weight applicable to battery B0006 can be obtained, i.e.,
$$\alpha_{B6} = (1 - \beta)\,\alpha_{B5} + \beta\,\tilde{\alpha}_{B5}, \qquad (15)$$
where $\beta$ determines the stochasticity of $\alpha_{B6}$, i.e., the difference between $\alpha_{B5}$ and $\alpha_{B6}$.
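The following is a minimal Python sketch of steps 2–6, assuming the real SOH values and the two prediction series for battery B0005 are already available as equal-length arrays; the function name, the uniform resampling within each group's range, and the commented usage line are illustrative assumptions.

```python
import numpy as np

def transfer_weight(real_b5, pv_i_b5, pv_iv_b5, beta=0.2, group_size=20, seed=0):
    """Sketch of steps 2-6: derive a weight vector for B0006 from B0005's data.

    real_b5, pv_i_b5, pv_iv_b5: real SOH values and the two predictions for
    battery B0005, all of equal length (e.g., 100 points).
    beta controls the stochasticity of the transferred weight (Equation (15)).
    """
    rng = np.random.default_rng(seed)
    real = np.asarray(real_b5, dtype=float)
    pv_i = np.asarray(pv_i_b5, dtype=float)
    pv_iv = np.asarray(pv_iv_b5, dtype=float)

    alpha_b5 = (real - pv_i) / pv_iv                                   # step 2

    alpha_tilde = np.empty_like(alpha_b5)
    for start in range(0, len(alpha_b5), group_size):                  # step 3
        group = alpha_b5[start:start + group_size]
        lo, hi = group.min(), group.max()                              # step 4, Eq. (13)
        alpha_tilde[start:start + group_size] = rng.uniform(lo, hi, size=len(group))  # step 5, Eq. (14)

    return (1.0 - beta) * alpha_b5 + beta * alpha_tilde                # step 6, Eq. (15)

# Fusion Prediction Value II for battery B0006, per Equation (11) (hypothetical names):
# alpha_b6 = transfer_weight(real_b5, pv_i_b5, pv_iv_b5, beta=0.2)
# fpv_ii_b6 = pv_i_b6 + alpha_b6 * pv_iv_b6
```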
According to $\alpha_{B6}$ and (11), the fusion prediction value for battery B0006 can be obtained. The SOH prediction results based on the prediction framework of periodic time series for battery B0006 are shown in Table 6 and Figure 14, where the prediction starting point is set as 68, i.e., 40% of the entire life cycle, and the weight $\alpha_{B5}$ is obtained by fusing time series 7-6-18-5 and 5.
From Table 6 and Figure 14, it can be seen that the fused prediction accuracy is higher than when using only the original data, which shows the usefulness of the proposed prediction framework for periodic time series. On the other hand, as $\beta$ decreases, the fused prediction accuracy increases; at the same time, the prediction accuracy improves as the grouping used to compute the weights becomes finer. Thus, for the NASA battery dataset, a smaller $\beta$ should be chosen in order to improve the prediction accuracy.

5. Conclusions and Future Work

This paper proposes a method for constructing periodic time series from non-periodic time series along with a corresponding prediction framework, both of which are applied to predicting the SOH of Li-ion batteries on public datasets. The proposed method for constructing periodic long-term time series is simple and direct, showing strong capabilities and good potential for time series prediction. The proposed prediction framework is theoretically superior to traditional prediction frameworks, and can be used to achieve prediction accuracy comparable to advanced prediction models by combining simple prediction models. Alternatively, it can further improve the prediction accuracy of advanced prediction models in a simple way, showing a wide range of applicability.
We believe that periodicity, as an important feature of time series, can be helpful for time series analysis; this is one of the main motivations of this paper, which provides an experimental validation of this simple idea. Future work can be considered along the following lines: (1) determining the most reasonable range for the number of original time series used to construct the long-term time series, and further optimizing it to obtain the optimal number; (2) applying and validating the constructed periodic long-term time series in other tasks involving time series, such as anomaly detection; (3) using the proposed time series construction method in reverse, i.e., decomposing time series that are inherently periodic, such as stock prices, into non-periodic time series, which can then be handled by the prediction framework proposed in this paper.

Author Contributions

Conceptualization, C.C. and J.W.; methodology, C.C. and G.X.; software, G.X. and C.J.; validation, C.C. and G.X.; formal analysis, G.X.; investigation, C.C.; resources, C.J.; data curation, C.J.; writing—original draft preparation, C.C., G.X. and C.J.; writing—review and editing, J.W.; visualization, G.X. and C.J.; supervision, J.W.; project administration, J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Shanxi Province under grant number 202403021211088. The APC was funded by 202403021211088.

Data Availability Statement

The data used in this paper are from open source datasets, including the NASA Ames Prognostics Center of Excellence “https://www.nasa.gov/intelligent-systems-division/discovery-and-systems-health/pcoe/pcoe-data-set-repository/ (accessed on 12 March 2025)” and the Center for Advanced Life Cycle Engineering “https://calce.umd.edu/battery-data (accessed on 12 March 2025)”.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Harvey, A. Trend Analysis. In Wiley StatsRef: Statistics Reference Online; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2016; pp. 1–21. [Google Scholar]
  2. Koenig-Lewis, N.; Bischoff, E.E. Seasonality research: The state of the art. Int. J. Tour. Res. 2005, 7, 201–219. [Google Scholar] [CrossRef]
  3. Sapankevych, N.I.; Sankar, R. Time series prediction using support vector machines: A survey. IEEE Comput. Intell. Mag. 2009, 4, 24–38. [Google Scholar] [CrossRef]
  4. Zamanzadeh Darban, Z.; Webb, G.I.; Pan, S.; Aggarwal, C.; Salehi, M. Deep Learning for Time Series Anomaly Detection: A Survey. ACM Comput. Surv. 2024, 57, 15. [Google Scholar] [CrossRef]
  5. Stratimirović, D.; Sarvan, D.; Miljković, V.; Blesić, S. Analysis of cyclical behavior in time series of stock market returns. Commun. Nonlinear Sci. Numer. Simul. 2018, 54, 21–33. [Google Scholar] [CrossRef]
  6. Shumway, R.H.; Stoffer, D.S.; Shumway, R.H.; Stoffer, D.S. ARIMA models. In Time Series Analysis and Its Applications: With R Examples; Springer: Cham, Switzerland, 2017; pp. 75–163. [Google Scholar]
  7. Holt, C.C. Forecasting seasonals and trends by exponentially weighted moving averages. Int. J. Forecast. 2004, 20, 5–10. [Google Scholar] [CrossRef]
  8. Winters, P.R. Forecasting sales by exponentially weighted moving averages. Manag. Sci. 1960, 6, 324–342. [Google Scholar] [CrossRef]
  9. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. Lightgbm: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation, Inc. (NeurIPS): La Jolla, CA, USA, 2017; Volume 30. [Google Scholar]
  10. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  11. Medsker, L.; Jain, L.C. Recurrent Neural Networks: Design and Applications; CRC Press: Boca Raton, FL, USA, 1999. [Google Scholar]
  12. Hochreiter, S. Long Short-term Memory. In Neural Computation; MIT-Press: Cambridge, MA, USA, 1997. [Google Scholar]
  13. Van Den Oord, A.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. Wavenet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499. [Google Scholar]
  14. Vaswani, A. Attention is all you need. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation, Inc. (NeurIPS): La Jolla, CA, USA, 2017. [Google Scholar]
  15. Wang, T.; Leung, H.; Zhao, J.; Wang, W. Multiseries Featural LSTM for Partial Periodic Time-Series Prediction: A Case Study for Steel Industry. IEEE Trans. Instrum. Meas. 2020, 69, 5994–6003. [Google Scholar] [CrossRef]
  16. Chen, J.; Li, K.; Rong, H.; Bilal, K.; Li, K.; Yu, P.S. A periodicity-based parallel time series prediction algorithm in cloud computing environments. Inf. Sci. 2019, 496, 506–537. [Google Scholar] [CrossRef]
  17. Wang, J.; Du, Y.; Wang, J. LSTM based long-term energy consumption prediction with periodicity. Energy 2020, 197, 117197. [Google Scholar] [CrossRef]
  18. Aggab, T.; Avila, M.; Vrignat, P.; Kratz, F. Unifying Model-Based Prognosis With Learning-Based Time-Series Prediction Methods: Application to Li-Ion Battery. IEEE Syst. J. 2021, 15, 5245–5254. [Google Scholar] [CrossRef]
  19. Gu, X.; See, K.; Liu, Y.; Arshad, B.; Zhao, L.; Wang, Y. A time-series Wasserstein GAN method for state-of-charge estimation of lithium-ion batteries. J. Power Sources 2023, 581, 233472. [Google Scholar] [CrossRef]
  20. Wan, Z.; Kang, Y.; Ou, R.; Xue, S.; Xu, D.; Luo, X. Multi-step time series forecasting on the temperature of lithium-ion batteries. J. Energy Storage 2023, 64, 107092. [Google Scholar] [CrossRef]
  21. Zhang, W.; Li, X.; Li, X. Deep learning-based prognostic approach for lithium-ion batteries with adaptive time-series prediction and on-line validation. Measurement 2020, 164, 108052. [Google Scholar] [CrossRef]
  22. Jorge, I.; Mesbahi, T.; Samet, A.; Boné, R. Time Series Feature extraction for Lithium-Ion batteries State-Of-Health prediction. J. Energy Storage 2023, 59, 106436. [Google Scholar] [CrossRef]
  23. Lin, C.P.; Cabrera, J.; Yang, F.; Ling, M.H.; Tsui, K.L.; Bae, S.J. Battery state of health modeling and remaining useful life prediction through time series model. Appl. Energy 2020, 275, 115338. [Google Scholar] [CrossRef]
  24. Liu, X.; Zhang, L.; Yu, H.; Wang, J.; Li, J.; Yang, K.; Zhao, Y.; Wang, H.; Wu, B.; Brandon, N.P.; et al. Bridging Multiscale Characterization Technologies and Digital Modeling to Evaluate Lithium Battery Full Lifecycle. Adv. Energy Mater. 2022, 12, 2200889. [Google Scholar] [CrossRef]
  25. Liu, X.; Yang, K.; Zhang, L.; Wang, W.; Zhou, S.; Wu, B.; Xiong, M.; Yang, S.; Tan, R. A Fast Forward Prediction Framework for Energy Materials Design Based on Machine Learning Methods. Energy Mater. Adv. 2024, 5, 0131. [Google Scholar] [CrossRef]
  26. Yang, S.; He, R.; Zhang, Z.; Cao, Y.; Gao, X.; Liu, X. CHAIN: Cyber Hierarchy and Interactional Network Enabling Digital Solution for Battery Full-Lifespan Management. Matter 2020, 3, 27–41. [Google Scholar] [CrossRef]
  27. Wang, W.; Yang, K.; Zhang, L.; Zhou, S.; Ren, B.; Lu, Y.; Tan, R.; Zhu, T.; Ma, B.; Yang, S.; et al. An end-cloud collaboration approach for online state-of-health estimation of lithium-ion batteries based on multi-feature and transformer. J. Power Sources 2024, 608, 234669. [Google Scholar] [CrossRef]
  28. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 11106–11115. [Google Scholar]
  29. Wu, H.; Hu, T.; Liu, Y.; Zhou, H.; Wang, J.; Long, M. TimesNet: Temporal 2d-variation modeling for general time series analysis. In Proceedings of the International Conference on Learning Representations, Virtually, 25–29 April 2022; pp. 1–23. [Google Scholar]
  30. Lin, S.; Lin, W.; Wu, W.; Zhao, F.; Mo, R.; Zhang, H. SegRNN: Segment recurrent neural network for long-term time series forecasting. arXiv 2023, arXiv:2308.11200. [Google Scholar]
  31. Gong, Z.; Tang, Y.; Liang, J. PatchMixer: A patch-mixing architecture for long-term time series forecasting. arXiv 2023, arXiv:2310.00655. [Google Scholar]
  32. Jia, C.; Tian, Y.; Shi, Y.; Jia, J.; Wen, J.; Zeng, J. State of health prediction of lithium-ion batteries based on bidirectional gated recurrent unit and transformer. Energy 2023, 285, 129401. [Google Scholar] [CrossRef]
  33. Tian, Y.; Wen, J.; Yang, Y.; Shi, Y.; Zeng, J. State-of-Health Prediction of Lithium-Ion Batteries Based on CNN-BiLSTM-AM. Batteries 2022, 8, 155. [Google Scholar] [CrossRef]
  34. Xia, G.; Jia, C.; Shi, Y.; Jia, J.; Pang, X.; Wen, J.; Zeng, J. Remaining useful life prediction of lithium-ion batteries by considering trend filtering segmentation under fuzzy information granulation. Energy 2025, 318, 134810. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of constructing periodic time series.
Figure 2. Schematic diagram of sorting time series.
Figure 3. Flow chart of constructing periodic long-term time series.
Figure 4. Prediction framework for periodic time series.
Figure 5. Replacement component for another periodic time series prediction framework.
Figure 6. Battery capacity degradation curves of NASA battery dataset.
Figure 7. Battery capacity degradation curves of CALCE battery dataset.
Figure 8. Heat map for the correlation of the first 40 sets of all battery data.
Figure 9. Internal structure of the GRU.
Figure 10. SOH prediction based on the constructed periodic time series for the NASA battery dataset: (a) SOH prediction results for prediction starting point 5, (b) SOH prediction results for prediction starting point 68, and (c) comparison of SOH prediction for the same group and dissimilar groups.
Figure 11. SOH prediction based on the constructed periodic time series for the CALCE battery dataset: (a) SOH prediction results for prediction starting point 354 and (b) SOH prediction results for different prediction starting points.
Figure 12. SOH prediction based on different prediction models: (a) SOH prediction results for NASA battery dataset and (b) SOH prediction results for CALCE battery dataset.
Figure 13. Schematic diagram of stochastic weighted average method.
Figure 14. SOH prediction based on the prediction framework of periodic time series for NASA battery dataset.
Table 1. Charge and discharge details for the NASA and CALCE battery datasets.

| Battery | CC Charging Current (A) | CV Charging Voltage (V) | Minimal Charge Current (mA) | Charging/Discharging Cut-Off Voltage (V) | CC Discharging Current (A) |
|---|---|---|---|---|---|
| B0005 | 1.5 | 4.2 | 20 | 4.2/2.7 | 2 |
| B0006 | | | | 4.2/2.5 | |
| B0007 | | | | 4.2/2.2 | |
| B0018 | | | | 4.2/2.5 | |
| B0045 | 1.5 | 4.2 | 20 | 4.2/2.0 | 1 |
| B0046 | | | | 4.2/2.2 | |
| B0047 | | | | 4.2/2.5 | |
| B0048 | | | | 4.2/2.7 | |
| B0053 | 1.5 | 4.2 | 20 | 4.2/2.0 | 2 |
| B0054 | | | | 4.2/2.2 | |
| B0055 | | | | 4.2/2.5 | |
| B0056 | | | | 4.2/2.7 | |
| CS2_35 | 0.55 | 4.2 | 50 | 4.2/2.7 | 1.099 |
| CS2_36 | | | | | |
| CS2_37 | | | | | |
| CS2_38 | | | | | |

Blank cells repeat the value of the row above within the same group.
Table 2. Parameters of the GRU.

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Optimizer | Adam | Loss function | MSE |
| Activation function | Sigmoid | Hidden units | 100 |
| Dropout discard rate | 0.2 | LearnRateDropPeriod | 350 |
| LearnRateDropFactor | 0.01 | InitialLearnRate | 0.001 |
| Mini-batch size | 16 | Max epochs | 500 |
Table 3. Prediction results under different strategies for the NASA battery dataset.

| Starting Point | Strategy | Time Series | RMSE | MAE | MAPE | MSE |
|---|---|---|---|---|---|---|
| 1 | Strategy I | 7-6-56-5 | 0.0046008 | 0.0028539 | 0.17462 | 0.000021168 |
| | | 7-6-18-5 | 0.0033034 | 0.0019497 | 0.11989 | 0.000010913 |
| | | 18-6-7-5 | 0.0045524 | 0.0019146 | 0.11653 | 0.000020724 |
| | | 7-6-5 | 0.0035408 | 0.0018585 | 0.11086 | 0.000012537 |
| | | 7-6-56-18-5 | 0.0024772 | 0.0014196 | 0.087063 | 0.0000061367 |
| | | 5 | 3.8502 | 3.8241 | 245.7983 | 14.8243 |
| | Strategy II | 6-7-18-5 | 0.0049468 | 0.0023674 | 0.14381 | 0.000024471 |
| | Strategy III | 45-53-18-5 | 0.025616 | 0.0057692 | 0.32858 | 0.00065618 |
| | Strategy IV | 53-45-18-5 | 0.026766 | 0.0053708 | 0.30904 | 0.0007164 |
| 3 | Strategy I | 7-6-56-5 | 0.0031371 | 0.0019362 | 0.1197 | 0.0000098413 |
| | | 7-6-18-5 | 0.0019811 | 0.0010228 | 0.062691 | 0.0000039246 |
| | | 18-6-7-5 | 0.0043572 | 0.0021186 | 0.13123 | 0.000018985 |
| | | 7-6-5 | 0.0043572 | 0.0016204 | 0.098969 | 0.0000074548 |
| | | 7-6-56-18-5 | 0.0030374 | 0.0016535 | 0.097775 | 0.0000092261 |
| | | 5 | 0.29819 | 0.24521 | 17.201 | 0.088917 |
| | Strategy II | 6-7-18-5 | 0.0049135 | 0.0027507 | 0.17147 | 0.000024142 |
| | Strategy III | 45-53-18-5 | 0.02749 | 0.0067576 | 0.39224 | 0.00075571 |
| | Strategy IV | 53-45-18-5 | 0.021424 | 0.0062206 | 0.36967 | 0.00045898 |
| 5 | Strategy I | 7-6-56-5 | 0.0047068 | 0.0026404 | 0.16145 | 0.000022154 |
| | | 7-6-18-5 | 0.0035871 | 0.0017128 | 0.10254 | 0.000012867 |
| | | 18-6-7-5 | 0.0027805 | 0.0015893 | 0.099414 | 0.0000077311 |
| | | 7-6-5 | 0.0034545 | 0.0022497 | 0.14218 | 0.000011933 |
| | | 7-6-56-18-5 | 0.0036594 | 0.0022834 | 0.13476 | 0.000013391 |
| | | 5 | 0.30256 | 0.25144 | 17.619 | 0.091542 |
| | Strategy II | 6-7-18-5 | 0.0047788 | 0.0023606 | 0.14505 | 0.000022837 |
| | Strategy III | 45-53-18-5 | 0.043138 | 0.009537 | 0.53679 | 0.0018609 |
| | Strategy IV | 53-45-18-5 | 0.0297 | 0.0061788 | 0.353 | 0.00088207 |
| 10 | Strategy I | 7-6-56-5 | 0.0052444 | 0.0026101 | 0.16016 | 0.000027504 |
| | | 7-6-18-5 | 0.0046302 | 0.0022221 | 0.13106 | 0.000021439 |
| | | 18-6-7-5 | 0.0035077 | 0.0018475 | 0.1115 | 0.000012304 |
| | | 7-6-5 | 0.0040373 | 0.0018893 | 0.11438 | 0.0000163 |
| | | 7-6-56-18-5 | 0.0045534 | 0.0023717 | 0.14211 | 0.000020733 |
| | | 5 | 0.29516 | 0.24498 | 17.2331 | 0.087117 |
| | Strategy II | 6-7-18-5 | 0.0051143 | 0.0021617 | 0.12857 | 0.000026156 |
| | Strategy III | 45-53-18-5 | 0.033401 | 0.0069303 | 0.39688 | 0.0011156 |
| | Strategy IV | 53-45-18-5 | 0.024643 | 0.0055046 | 0.31428 | 0.00060726 |
| 34 | Strategy I | 7-6-56-5 | 0.0038902 | 0.001728 | 0.10837 | 0.000015134 |
| | | 7-6-18-5 | 0.0044341 | 0.0022049 | 0.13889 | 0.000019661 |
| | | 18-6-7-5 | 0.0040054 | 0.0014168 | 0.088761 | 0.000016044 |
| | | 7-6-5 | 0.0044955 | 0.0028786 | 0.1839 | 0.000020209 |
| | | 7-6-56-18-5 | 0.0040002 | 0.0018035 | 0.11201 | 0.000016002 |
| | | 5 | 0.16685 | 0.14115 | 10.0729 | 0.027839 |
| | Strategy II | 6-7-18-5 | 0.0051304 | 0.0020326 | 0.12498 | 0.000026321 |
| | Strategy III | 45-53-18-5 | 0.036859 | 0.0082721 | 0.48613 | 0.0013586 |
| | Strategy IV | 53-45-18-5 | 0.031522 | 0.0066412 | 0.38607 | 0.00099363 |
| 68 | Strategy I | 7-6-56-5 | 0.0094238 | 0.0030165 | 0.19202 | 0.000088808 |
| | | 7-6-18-5 | 0.0090623 | 0.00315 | 0.20263 | 0.000082126 |
| | | 18-6-7-5 | 0.0096346 | 0.003884 | 0.25014 | 0.000092826 |
| | | 7-6-5 | 0.0093996 | 0.0036951 | 0.24025 | 0.000088353 |
| | | 7-6-56-18-5 | 0.0090512 | 0.0035761 | 0.23107 | 0.000081925 |
| | | 5 | 0.012818 | 0.0067333 | 0.45743 | 0.00016429 |
| | Strategy II | 6-7-18-5 | 0.011856 | 0.0040378 | 0.25798 | 0.00014056 |
| | Strategy III | 45-53-18-5 | 0.026877 | 0.0070263 | 0.45001 | 0.00072235 |
| | Strategy IV | 53-45-18-5 | 0.023574 | 0.0057887 | 0.37239 | 0.00055572 |
The numbers in bold represent the best values. In the “Starting Point” column, ‘1’, ‘3’, ‘5’, ‘10’, ‘34’, and ‘68’ indicate that the SOH of Li-ion batteries are predicted to start at the 1st, 3rd, 5th, 10th, 34th, and 68th cycle, respectively. In the “Time Series” column, ‘7-6-56-5’ indicates that the time series is composed of the degradation data of battery B0007, battery B0006, battery B0056, and partial data of battery B0005 connected in sequence, with the constructed time series used to predict the rest of the data for battery B0005. The meaning of other expressions is similar.
Table 4. Prediction results under different strategies for the CALCE battery dataset.

| Starting Point | Strategy | Time Series | RMSE | MAE | MAPE | MSE |
|---|---|---|---|---|---|---|
| 1 | Strategy I | 38-37-36-35 | 0.0094538 | 0.0049796 | 0.58241 | 0.000089374 |
| | | 36-37-38-35 | 0.012632 | 0.0060218 | 0.7339 | 0.00015956 |
| | | 35 | 2.2572 | 2.2286 | 270.879 | 5.095 |
| | Strategy II | 37-36-38-35 | 0.011924 | 0.0053579 | 0.62013 | 0.00014218 |
| 10 | Strategy I | 38-37-36-35 | 0.010479 | 0.0047668 | 0.60421 | 0.0001098 |
| | | 36-37-38-35 | 0.011257 | 0.0053594 | 0.6531 | 0.00012673 |
| | | 35 | 0.26628 | 0.21252 | 35.7794 | 0.070906 |
| | Strategy II | 37-36-38-35 | 0.010733 | 0.0051067 | 0.64068 | 0.00011521 |
| 354 | Strategy I | 38-37-36-35 | 0.0087799 | 0.0041785 | 0.58426 | 0.000077086 |
| | | 36-37-38-35 | 0.0097413 | 0.0046381 | 0.65574 | 0.000094892 |
| | | 35 | 0.044473 | 0.023047 | 5.6827 | 0.0019778 |
| | Strategy II | 37-36-38-35 | 0.010642 | 0.005001 | 0.75821 | 0.00011324 |
The numbers in bold represent the best values. In the “Time Series” column, ‘38-37-36-35’ indicates that the time series is composed of the degradation data of battery CS2_38, battery CS2_37, battery CS2_36, and part of the data for battery CS2_35 connected in sequence. The constructed time series is used to predict the rest of the data for battery CS2_35. The meaning of other expressions is similar.
Table 5. Prediction results with different prediction models.

| Dataset | Strategy | Time Series | RMSE | MAE | MAPE | MSE |
|---|---|---|---|---|---|---|
| NASA | GRU-Strategy I | 18-6-7-5 | 0.0035077 | 0.0018475 | 0.1115 | 0.000012304 |
| | LSTM-Strategy I | | 0.0097019 | 0.0043814 | 0.26432 | 0.000094126 |
| | GRU | 5 | 0.29516 | 0.24498 | 17.2331 | 0.087117 |
| | LSTM | | 0.34455 | 0.29049 | 20.3566 | 0.11871 |
| CALCE | GRU-Strategy I | 38-37-36-35 | 0.010479 | 0.0047668 | 0.60421 | 0.0001098 |
| | LSTM-Strategy I | | 0.02072 | 0.0090021 | 1.3817 | 0.00042932 |
| | GRU | 35 | 0.26628 | 0.21252 | 35.7794 | 0.070906 |
| | LSTM | | 0.32282 | 0.25171 | 42.9944 | 0.10421 |
The numbers in bold represent the best values. In the “Strategy” column, ‘GRU-Strategy I’ indicates that the prediction model is GRU and the time series is the periodic time series constructed using Strategy I (correlation selection + correlation sorting). The meaning of other expressions is similar.
Table 6. Prediction results based on the prediction framework of periodic time series for NASA battery dataset.

| Time Series | RMSE | MAE | MAPE | MSE | β | Remark |
|---|---|---|---|---|---|---|
| 18-5-7-6 | 0.017338 | 0.0052416 | 0.35082 | 0.00030061 | | Strategy I |
| 6 | 0.018751 | 0.0070184 | 0.48695 | 0.00035162 | | |
| Prediction Fusion of 18-5-7-6 and 6 | 0.014827 | 0.0099483 | 0.71224 | 0.00021983 | 0.5 | Balance, 20 points/group |
| | 0.011026 | 0.0061086 | 0.43743 | 0.00012157 | 0.2 | More certain, 20 points/group |
| | 0.020208 | 0.014332 | 1.028 | 0.00040836 | 0.8 | More stochastic, 20 points/group |
| | 0.014228 | 0.0075268 | 0.53357 | 0.00020244 | 0.5 | Balance, 10 points/group |
The numbers in bold represent the best values.
