Article

SOH Estimation of Lithium Battery Under Improved CNN-BIGRU-Attention Model Based on Hiking Optimization Algorithm

1 Hubei Key Laboratory for High-Efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan 430068, China
2 Powerchina Equipment Research Institute Co., Ltd., Wuhan 430077, China
3 Key Laboratory of Motor Soft Start, Wuhan 441000, China
* Author to whom correspondence should be addressed.
World Electr. Veh. J. 2025, 16(9), 487; https://doi.org/10.3390/wevj16090487
Submission received: 21 July 2025 / Revised: 13 August 2025 / Accepted: 19 August 2025 / Published: 25 August 2025

Abstract

Accurate State of Health (SOH) estimation is critical for ensuring the safe operation of lithium-ion batteries. However, current data-driven approaches face significant challenges: insufficient feature extraction and ambiguous physical meaning compromise prediction accuracy, while initialization sensitivity to noise undermines stability; the inherent nonlinearity and temporal complexity of battery degradation data further lead to slow convergence or susceptibility to local optima. To address these limitations, this study proposes an enhanced CNN-BIGRU model. The model replaces conventional random initialization with a Hiking Optimization Algorithm (HOA) to identify superior initial weights, significantly improving early training stability. Furthermore, it integrates an Attention mechanism to dynamically weight features, strengthening the capture of key degradation characteristics. Rigorous experimental validation, utilizing multi-dimensional features extracted from the NASA dataset, demonstrates the model's superior convergence speed and prediction accuracy compared to the CNN-BIGRU-Attention benchmark. Compared with other methods, the HOA-CNN-BIGRU-Attention model proposed in this study has a higher prediction accuracy and better robustness under different conditions, and the RMSEs on the NASA dataset are all controlled within 0.01, with R2 kept above 0.91. The RMSEs on the University of Maryland dataset are all below 0.006, with R2 kept above 0.98. Compared with the CNN-BIGRU-ATTENTION baseline model without HOA optimization, the RMSE is reduced by at least 0.15% across different battery groups in the NASA dataset.

1. Introduction

Lithium-ion batteries (LIBs) have become the mainstream solution in modern energy storage [1] due to their advantages of a fast charge/discharge rate, high power density, no memory effect, and long cycle life. Therefore, this battery system is widely used in consumer electronics and new energy vehicles (NEVs). To ensure the safe operation of these batteries and extend their service life, Li-ion batteries must be equipped with a battery management system (BMS) for performance optimization and safety control [2]. The core function of the BMS is to strictly limit the battery operating state to a safe threshold range by continuously monitoring key parameters such as state of health (SOH) and state of charge (SOC). Among them, the accurate estimation of SOH, as the core technical index of BMSs, is of decisive significance for ensuring the driving experience and safety of EVs [3]. SOH is a quantitative index that characterizes the aging degree and operational reliability of batteries, which is usually defined as the ratio of the current maximum capacity to the rated initial capacity in academic research.
Accurate methods for estimating the state of health (SOH) of batteries can be categorized into the following three main groups: direct measurement methods, model-based methods, and data-driven methods [4]. Direct measurement methods evaluate SOH by detecting parameters directly related to battery degradation, such as open-circuit voltage (OCV), internal resistance, and impedance [5]. Although these methods are simple to compute and adaptable, they have limitations in practical online applications due to stringent hardware requirements [6]. Among model-based approaches, electrochemical models (e.g., the Pseudo-Two-Dimensional (P2D) model) can accurately capture internal physicochemical processes. However, their computational complexity and parameterization challenges restrict real-world deployment [7,8]. Equivalent circuit models (ECMs) offer simpler structures [9], yet parameter identification remains problematic under dynamic operating conditions [10]. Furthermore, the strong nonlinearity of battery degradation [11] necessitates scenario-specific model reconfiguration, demanding frequent retuning for different battery chemistries [12].
In contrast, data-driven methods circumvent complex mechanistic modeling by leveraging machine learning to extract degradation features directly from operational data. Recent advances have significantly enhanced their adaptability for engineering applications: Bidirectional Long Short-Term Memory (BiLSTM) networks with self-attention mechanisms (BiLSTM-SA) enable joint SOC–SOH estimation through temporal dependency capture [13], while incremental energy analysis coupled with BiLSTM improves robustness against capacity regeneration effects [14]. Signal Temporal Logic (STL) further offers a novel framework for SOH recognition by quantitatively parsing discharge curve patterns [15]. These innovations address critical limitations of traditional approaches, particularly in handling nonlinear aging dynamics under real-world operating variability [16].
In recent years, data-driven methods [17] have garnered significant academic and industrial attention due to their flexibility and versatility. These models rely primarily on operational data (current, voltage, and temperature) to establish correlations between inputs and battery health status, circumventing the explicit mechanistic modeling of aging processes. Common implementations include Gaussian process regression [18], support vector machines [19], extreme learning machines [20], and gray correlation analysis [21]. However, while classical time series algorithms like ARIMA/SARIMA and VAR offer interpretability for stationary data [22], they struggle with the nonstationary dynamics of battery degradation—particularly under varying operational loads and temperature cycles where internal parameters exhibit strong time-varying dependencies [22].
Machine learning techniques demonstrate a superior capability in this domain: their inherent adaptability to complex nonlinear patterns enables robust feature extraction from multidimensional operational data, significantly outperforming conventional statistical models in handling capacity regeneration phenomena and noise-corrupted measurements. Recent innovations further highlight this advantage—XGBoost–ARIMA joint optimization frameworks effectively mitigate ARIMA’s noise sensitivity through machine learning-based residual correction [23], while auto-regressive integrated moving average with exogenous variables (ARIMAX) requires explicit parameter variation modeling to maintain accuracy [22]. Such limitations underscore machine learning’s critical role in achieving generalizable SOH prediction without manual system reconfiguration.
In the study of battery health state prediction based on CNN-GRU-Attention modeling, the optimal selection of health indicators (HIs) and model architecture refinement are critical for accuracy enhancement. While traditional signal processing techniques (e.g., wavelet transform [24] and Fourier transform [25]) offer lower computational costs, they require the manual construction of basis functions, which may fail to capture complex nonlinear degradation patterns—particularly under variable operating conditions where sudden capacity drop and internal state shifts occur [26]. However, recurrent architectures (e.g., GRU/LSTM) exhibit constrained capacity in modeling long-range temporal dependencies within battery aging data [27]. This motivates the adoption of transformer-based techniques: their self-attention mechanisms inherently capture global context across entire charge–discharge cycles [28], while hybrid frameworks (e.g., Transformer–GRU parallel architectures [29]) synergistically integrate local feature extraction [30] with inter-cycle dependency modeling.
Previous studies on CNN-BIGRU-Attention optimization algorithms remain limited, particularly in addressing issues such as poor initialization stability and inefficient hyperparameter tuning. The initialization of CNN and GRU weights significantly impacts model convergence and performance, yet traditional random initialization often leads to unstable training. Existing data-driven models for lithium battery health prediction [31,32] also suffer from gradient explosion risks, noise-induced prediction instability, and insufficient multi-source feature fusion. To overcome these limitations, this study introduces the Hiking Optimization Algorithm (HOA), which enhances hyperparameter optimization efficiency through dynamically adjusted convergence factors and variance probabilities. The proposed framework employs a CNN with dilated convolutions to capture multi-granularity degradation features, a BIGRU layer to model temporal dependencies, and a channel-time dual-domain attention mechanism to dynamically weight feature contributions—overcoming the shortcomings of traditional single-domain attention.
This work aims to improve the stability of model predictions and reduce the computational burden of the model. Based on a previous study that used the CNN-BIGRU model [33] to estimate the SOH aging of lithium batteries, this paper introduces the HOA to optimize the hyperparameters and attention mechanism of the baseline model, thereby improving the model's structure and enhancing its prediction accuracy and fitting ability for SOH aging. The main contributions are summarized as follows.
(1) First, the HOA is introduced to optimize the CNN-BIGRU model, which significantly improves the convergence speed and training stability of the model, as well as enhancing the prediction performance of the network.
(2) Second, in terms of feature engineering, eight basic feature factors are extracted from the NASA dataset for characterizing SOH, and a multidimensional feature space is constructed to provide input data for the model with more characterization capabilities.
(3) Finally, Attention is integrated into the model, which effectively solves the limitations of the original model in terms of feature redundancy, long time series dependency, and noise sensitivity through dynamic weight assignment and explicit dependency modeling, and, at the same time, improves the interpretability of the model.
For clarity, the key abbreviations and mathematical variables employed throughout this study are systematically defined, as follows, in Table 1:
The workflow of this study is shown in Figure 1. First, four time-domain feature factors—CVRT, CVFT, CCDT, and CCCT—and the second derivative feature of the IC curve are extracted from the NASA battery dataset as model input features for predicting battery health status. In terms of data partitioning, a stratified sampling method is used to divide the dataset into a training set and a test set in a 7:3 ratio. After constructing the SOH prediction model using the training set, the model's prediction performance on the test set is systematically evaluated using accuracy metrics such as the root mean square error, mean absolute error, and coefficient of determination. The organization of the remainder of this paper is also summarized in Figure 1 below.

2. Theoretical Research

2.1. Hiking Optimization Algorithm (HOA)

The mathematical basis for the Hiking Optimization Algorithm (HOA) [34] is the Tobler hiking function proposed by the geographer and cartographer Waldo Tobler. The Tobler hiking function is an exponential function that determines a hiker's optimal climbing speed based on the steepness and gradient of the terrain or trail. The meanings of the mathematical symbols used in the algorithm are shown in Table 2 [35] below. The bracketed numbers indicate the inclusive bounds of a continuous uniform distribution: [0, 1] means any real value between 0 and 1 is equally likely.
The Tobler hiking function (THF) is given by [35], as follows:
$w_{i,t} = 6 e^{-3.5 \left| S_{i,t} + 0.05 \right|}$
where $w_{i,t}$ denotes the speed (in km/h) of hiker $i$ at iteration or time $t$ and $S_{i,t}$ denotes the slope of the hiker's travel route or terrain. The slope $S_{i,t}$ is calculated as follows:
$S_{i,t} = \frac{dh}{dx} = \tan \theta_{i,t}$
where $dh$ and $dx$ indicate the difference in the elevation of the hiker and the distance traveled, respectively. In addition, $\theta_{i,t}$ is the angle of inclination of the trail or terrain.
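As an illustration of Equations (1) and (2), the minimal Python sketch below evaluates Tobler's hiking function for a given slope; the example slope values are arbitrary and only meant to show the qualitative behavior (speed peaks on nearly flat or slightly downhill terrain and drops off for steep climbs).

```python
# A minimal sketch of Eqs. (1)-(2): Tobler's hiking function maps the local
# slope (or inclination angle) to a hiker's walking speed in km/h.
import numpy as np

def tobler_speed(slope: np.ndarray) -> np.ndarray:
    """w = 6 * exp(-3.5 * |S + 0.05|), with S = dh/dx = tan(theta)."""
    return 6.0 * np.exp(-3.5 * np.abs(slope + 0.05))

# Example (illustrative slopes): flat ground (~5 km/h), a 10% climb, a steep descent.
print(tobler_speed(np.array([0.0, 0.10, -0.30])))
```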
The Hiking Optimization Algorithm takes advantage of the social thinking advantage that hikers have as a group, as well as the cognitive abilities of each hiker. The updated and actual speed of a hiker depends on the initial speed determined by the THF, the actual position of the leader, the actual position of that hiker, and the sweep factor. Thus, the current speed given by hiker i is as follows:
$w_{i,t} = w_{i,t-1} + \gamma_{i,t} \left( \beta_{best} - \alpha_{i,t} \beta_{i,t} \right)$
where $\gamma_{i,t}$ is a number uniformly distributed in the range [0, 1], and $w_{i,t}$ and $w_{i,t-1}$ denote the current speed and initial speed of hiker $i$, respectively. $\beta_{best}$ is the position of the lead hiker and $\alpha_{i,t}$ is the sweep factor (SF) of hiker $i$, whose value lies in the range [1, 3]. The SF ensures that hikers do not stray too far from the lead hiker, so that they can see the direction they are going and also receive signals from the leader. By considering the speed of the hiker (Equation (3)), the updated position $\beta_{i,t+1}$ of hiker $i$ can be expressed as follows:
$\beta_{i,t+1} = \beta_{i,t} + w_{i,t}$
In various metaheuristic algorithms, including the Hiking Optimization Algorithm (HOA), the initialization of the agents is a key element, and a reasonable setup can significantly affect the availability of feasible solutions and the convergence rate. In this study, the Hiking Optimization Algorithm uses a random initialization technique to initialize the positions of its agent population [36].
The initialization of the hiker's position $\beta_{i,t}$ is determined by the upper bound $\phi_{j2}$ and the lower bound $\phi_{j1}$ of the solution space, according to the following mathematical expression:
$\beta_{i,t} = \phi_{j1} + \delta_j \left( \phi_{j2} - \phi_{j1} \right)$
where $\delta_j$ is a random number uniformly distributed in the interval [0, 1], and $\phi_{j1}$ and $\phi_{j2}$ characterize the lower and upper bounds of the $j$-th dimension of the decision variables in the solution space of the optimization problem, respectively. The exploration–exploitation equilibrium mechanism of the HOA is regulated by the sweep factor (SF), which controls the spatial distance between the path leader and the other hikers through the mathematical relationship shown in Equation (3). Meanwhile, the path slope, as a core parameter affecting hikers' movement speed (as shown in Equations (1) and (2)), plays a decisive role in the dynamic balance between the algorithm's global search and local exploitation behaviors.
Regarding the logical sequence of the HOA and the logical sequence of Formulas (1)–(5) presented in the text, we list the pseudo-code table of the HOA, as shown in Table 3 below.
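To complement the pseudo-code in Table 3, the sketch below gives one possible Python realization of the HOA loop implied by Equations (1)–(5). The population size, iteration count, greedy selection step, and the ranges used to sample the inclination angle and sweep factor are illustrative assumptions, not the exact settings used in this study.

```python
# A hedged sketch of the HOA iteration (Eqs. (1)-(5)); fitness and bounds are placeholders.
import numpy as np

def hoa(fitness, lb, ub, n_hikers=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    pos = lb + rng.random((n_hikers, dim)) * (ub - lb)         # Eq. (5): random initialization
    fit = np.apply_along_axis(fitness, 1, pos)
    for _ in range(n_iter):
        best = pos[fit.argmin()]                               # position of the lead hiker
        for i in range(n_hikers):
            theta = rng.uniform(0, 50)                         # trail inclination angle in degrees (assumed range)
            slope = np.tan(np.radians(theta))                  # Eq. (2)
            w_init = 6 * np.exp(-3.5 * np.abs(slope + 0.05))   # Eq. (1): Tobler hiking function (initial speed)
            gamma = rng.random(dim)                            # uniform in [0, 1]
            alpha = rng.integers(1, 4)                         # sweep factor in [1, 3] (assumed integer-valued)
            w = w_init + gamma * (best - alpha * pos[i])       # Eq. (3): speed update
            new_pos = np.clip(pos[i] + w, lb, ub)              # Eq. (4): position update
            new_fit = fitness(new_pos)
            if new_fit < fit[i]:                               # greedy selection (assumption)
                pos[i], fit[i] = new_pos, new_fit
    return pos[fit.argmin()], fit.min()

# Example: minimize the sphere function over [-5, 5]^3.
best_x, best_f = hoa(lambda x: float(np.sum(x**2)), lb=[-5] * 3, ub=[5] * 3)
```

In the SOH task, the fitness function would wrap one training run of the CNN-BIGRU-Attention model and return its validation error, so that the hikers search over the hyperparameter space.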
When the range of the sweep factor (SF) parameter expands, the HOA will show a tendency toward the local exploitation stage; conversely, when the range of the SF parameter narrows, the algorithm tends to enhance the global exploration capability. In addition, the compression of the numerical range of the path inclination parameter will lead the algorithm into a deep exploitation state. Together, these control parameters determine the search behavior of the HOA in the optimization problem through a synergistic regulation mechanism, and evolve the path and optimization-seeking performance, as shown in Figure 2 below.
The Hiking Optimization Algorithm was chosen as the research tool in this study mainly because of its significant performance advantages over traditional optimization algorithms. According to the analysis in the following sections, as well as the results of the comparative experiments, the algorithm shows superiority in terms of improvement ability, optimization efficiency, parameter search capability, and prediction accuracy. Specifically, the HOA achieves efficient solutions for complex optimization problems by building a multi-agent system that mimics mountain path-finding behavior and by maintaining a dynamic balance between exploration and exploitation in the solution space. This gives it a clear advantage in solving the type of optimization problem considered here, and, thus, this algorithm was chosen for this study.

2.2. CNN-BIGRU-ATTENTION Modeling

The CNN-BIGRU-Attention model significantly improves feature extraction and the modeling of sequential data by combining a convolutional neural network (CNN), a bidirectional gated recurrent unit (BIGRU), and the Attention mechanism. The convolutional layers of the CNN efficiently capture local features, the BIGRU resolves the long-range dependency problem, and the attention mechanism dynamically assigns weights so that the model focuses on key information and suppresses irrelevant noise. This multi-level feature fusion and dynamic focusing mechanism gives CNN-BIGRU-Attention higher accuracy and robustness in tasks such as text classification, sentiment analysis, and machine translation. It is especially suitable for complex scenarios that need to model local features and global semantics at the same time, making it a highly efficient and interpretable hybrid architecture.
The SOH dynamic estimation method based on the hybrid CNN-BIGRU-Attention architecture adopted in this study significantly improves estimation accuracy and robustness through a multi-level feature extraction and optimization mechanism. At the feature extraction level, the CNN utilizes the local sensing field and weight sharing mechanism to effectively capture multi-scale non-smooth features in the battery degradation process, such as local fluctuations in the capacity decay curve and nonlinear patterns in high-noise charge/discharge data. The time series modeling layer adopts a BIGRU, which accurately portrays the long-term dependence relationship between capacity regeneration and mutation decay by fusing forward and backward information, overcoming the problem of gradient disappearance in traditional recurrent models. The modeling structure is shown in Figure 3 below. The feature optimization layer introduces the attention mechanism to dynamically allocate the weights of key decay nodes (e.g., the rapid aging stage), which reduces the redundancy of the model parameters and enhances interpretability at the same time. The experimental results show that the SOH estimation error of the model under dynamic working conditions is significantly lower than that of the traditional method, and visualization of the attention weights provides the basis for quantitative analysis of the contribution of degradation features, which provides a high-precision and interpretable solution for battery health management.
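To make the hybrid architecture concrete, the following PyTorch sketch assembles a CNN layer, a BiGRU layer, and an attention-weighted regression head in the order described above. All layer sizes, kernel widths, and the single-output head are illustrative assumptions; the study itself was implemented in Matlab, so this is a structural analogue rather than the authors' code.

```python
# A minimal PyTorch sketch of the CNN-BiGRU-Attention architecture described above.
# Layer sizes and kernel widths are illustrative assumptions, not the paper's exact settings.
import torch
import torch.nn as nn

class CNNBiGRUAttention(nn.Module):
    def __init__(self, n_features: int = 7, cnn_channels: int = 32, gru_hidden: int = 64):
        super().__init__()
        # CNN layer: 1-D convolution over the cycle (time) axis extracts local degradation features.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, cnn_channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(cnn_channels),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        # BiGRU layer: models forward and backward temporal dependencies.
        self.bigru = nn.GRU(cnn_channels, gru_hidden, batch_first=True, bidirectional=True)
        # Attention layer: scores each time step (cf. Eqs. (10)-(12)) and forms a weighted sum.
        self.att_score = nn.Linear(2 * gru_hidden, 1)
        self.head = nn.Linear(2 * gru_hidden, 1)   # regression head producing the SOH estimate

    def forward(self, x):                      # x: (batch, time, features)
        z = self.cnn(x.transpose(1, 2))        # -> (batch, channels, time/2)
        z = z.transpose(1, 2)                  # -> (batch, time/2, channels)
        h, _ = self.bigru(z)                   # -> (batch, time/2, 2*hidden)
        g = torch.tanh(self.att_score(h))      # attention scores g_t
        beta = torch.softmax(g, dim=1)         # normalized weights beta_t
        d = (beta * h).sum(dim=1)              # attention-weighted context vector
        return self.head(d).squeeze(-1)        # SOH estimate

# Example: a batch of 8 windows, 20 cycles long, with 7 health-factor features.
model = CNNBiGRUAttention()
soh = model(torch.randn(8, 20, 7))
```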

2.2.1. CNN Layer

The CNN layer has been proven to effectively resolve the non-smooth feature coupling relationships in the degradation process of lithium batteries. The CNN architecture consists of a convolutional layer, a batch normalization layer, and a maximum pooling layer, and its feature fusion efficiency comes from the optimization of local responses during gradient back propagation, which is especially good at suppressing random noise and operating-condition perturbations in battery cycle data. Through the multi-channel convolutional fusion of time-varying features (e.g., the constant-voltage charging time increment and the discharge plateau voltage drop) with sliding windows, the CNN can establish a nonlinear mapping between electrochemical degradation modes and macroscopic SOH metrics, providing highly discriminative feature representations for the time-dependent modeling of the BIGRU layer.

2.2.2. BIGRU Layer

To effectively extract the feature vectors of the input layer and model the cyclic and fluctuating characteristics of lithium battery charge/discharge cycling data, this study uses a bidirectional gated recurrent unit (BIGRU) for time-dependent modeling. Compared with the LSTM, the GRU significantly improves the ability to capture long-term temporal patterns by merging the input and forget gate mechanisms and unifying the cell state and hidden state under the update gate. The core structure of the GRU consists of the following two control gates: the reset gate and the update gate, where the subscript $t$ denotes the current time step. These two gating mechanisms realize the adaptive learning of nonlinear temporal features by dynamically adjusting the interaction between the current input and the hidden state of the previous time step ($h_{t-1}$), combined with the weight matrices ($W_r$ and $W_u$). The specific mathematical expressions are shown in Equations (6)–(9).
$r_t = \sigma\left(W_r \cdot \left[h_{t-1}, x_t\right] + b_r\right)$
$u_t = \sigma\left(W_u \cdot \left[h_{t-1}, x_t\right] + b_u\right)$
$\tilde{h}_t = \tanh\left(W \cdot \left[r_t \odot h_{t-1}, x_t\right]\right)$
$h_t = \left(1 - u_t\right) \odot h_{t-1} + u_t \odot \tilde{h}_t$
where $\tilde{h}_t$ represents the newly generated candidate memory state and $h_t$ represents the final output state of the system. In addition, $W$ and $b$ correspond to the learnable weight matrices and bias vectors, respectively; $\sigma$ and $\tanh$ are nonlinear activation functions, and $\odot$ denotes the Hadamard product (element-by-element multiplication).
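For reference, a single GRU time step following Equations (6)–(9) can be written in NumPy as below. The weight shapes assume the concatenation $[h_{t-1}, x_t]$ used in the equations, and the random weights in the example are purely illustrative.

```python
# A minimal NumPy sketch of one GRU step, Eqs. (6)-(9).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W_r, b_r, W_u, b_u, W_h):
    concat = np.concatenate([h_prev, x_t])
    r_t = sigmoid(W_r @ concat + b_r)                             # Eq. (6): reset gate
    u_t = sigmoid(W_u @ concat + b_u)                             # Eq. (7): update gate
    h_tilde = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))  # Eq. (8): candidate state
    return (1.0 - u_t) * h_prev + u_t * h_tilde                   # Eq. (9): new hidden state

# Example with hidden size 4 and input size 3 (random weights for illustration only).
rng = np.random.default_rng(0)
h = gru_step(rng.standard_normal(3), np.zeros(4),
             rng.standard_normal((4, 7)), np.zeros(4),
             rng.standard_normal((4, 7)), np.zeros(4),
             rng.standard_normal((4, 7)))
```

In the bidirectional case, the same step is run over the sequence in both directions and the two hidden states are concatenated, which is what the BIGRU layer feeds to the attention mechanism.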

2.2.3. Attention Mechanisms

The modeling architecture based on the attention mechanism employs a probability distribution strategy to achieve the adaptive capture of key features across hierarchical levels [35]. Specifically, the mechanism accomplishes information selection by calculating the inter-correlation matrix between the input feature space and the association vector space, where the dependencies between the dimensions are regulated by the learned weight parameters. The model parameters are defined as follows: $x_i\ (i \in [1, n])$ characterizes the input features of the bidirectional gated recurrent network, $h_i\ (i \in [1, n])$ corresponds to the higher-order representation after the BIGRU hidden layer transformation, $\beta_i\ (i \in [1, n])$ quantifies the normalized attention probability weights, and the final output $y$ is the fusion of the attention-weighted BIGRU feature representation. The mathematical derivation of the relevant weighting parameters is shown in the following Equations (10)–(12).
$g_t = s \cdot \tanh\left(w \cdot h_t + b\right)$
$\beta_t = \dfrac{\exp\left(g_t\right)}{\sum_{j=1}^{n} \exp\left(g_j\right)}$
$d = \sum_{t=1}^{n} \beta_t \cdot h_t$
In the time series modeling, $g_t$ is the attention score of the hidden state $h_t$ of the bidirectional gated recurrent unit at time $t$, $\beta_t$ is the corresponding normalized weight produced by the attention mechanism, and $d$ is the feature mapping output of the attention layer. The model parameters include the optimizable weight matrices $s$ and $w$ and the bias vector $b$. In order to clarify the relationship between the attention mechanism and the CNN-BIGRU model, and to explain when Formulas (10)–(12) come into effect, we present the pseudo-code of the attention mechanism in Table 4 below.
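As a companion to the pseudo-code in Table 4, the attention computation of Equations (10)–(12) reduces to a few lines of NumPy, sketched below; the matrix shapes and the numerically stabilized softmax are implementation assumptions.

```python
# A minimal NumPy sketch of the attention layer in Eqs. (10)-(12): score each BiGRU
# hidden state, normalize the scores with a softmax, and return the weighted feature vector.
import numpy as np

def attention(H, w, b, s):
    """H: (T, d) BiGRU hidden states; w: (d, d); b: (d,); s: (d,). Shapes are illustrative."""
    g = np.tanh(H @ w + b) @ s                 # Eq. (10): attention scores g_t
    beta = np.exp(g - g.max())                 # Eq. (11): softmax (shifted for numerical stability)
    beta = beta / beta.sum()
    return beta @ H, beta                      # Eq. (12): weighted output d and the weights

# Example: 5 time steps with 8-dimensional hidden states (random values for illustration).
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
d, beta = attention(H, rng.standard_normal((8, 8)), np.zeros(8), rng.standard_normal(8))
```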

2.2.4. Model Performance Evaluation

To quantitatively assess the accuracy of the CNN-BIGRU-Attention model under the optimization of the Hiking Optimization Algorithm, four core evaluation metrics are used in this study, including the root mean square error (RMSE), mean absolute error (MAE), RPD, and coefficient of determination ($R^2$). The formulas for RMSE and $R^2$ are shown as follows:
$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{t=1}^{N}\left(y_t - \hat{y}_t\right)^2}$
$R^2 = 1 - \dfrac{\sum_{t=1}^{N}\left(y_t - \hat{y}_t\right)^2}{\sum_{t=1}^{N}\left(y_t - \bar{y}\right)^2}$
The mean absolute error measures the average error between the predicted and true values. The RMSE is a typical indicator for regression models, indicating how much error the model produces in prediction; larger errors receive a higher weight. Here, $N$ denotes the sample size, $y_t$ is the $t$-th true observation, and $\hat{y}_t$ is the corresponding predicted value. The coefficient of determination evaluates the goodness of fit of the regression, where $\sum_{t=1}^{N}\left(y_t - \hat{y}_t\right)^2$ is the sum of squares of the residuals and $\sum_{t=1}^{N}\left(y_t - \bar{y}\right)^2$ is the total sum of squares. The closer the value of $R^2$ is to 1, the better the fit; the smaller the value, the worse the fit. Together, these metrics can be used to accurately assess the performance of the model, as well as its strengths and weaknesses.
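The metrics can be computed as in the following sketch. MAE and RPD, which appear in the later result tables, are included for completeness; RPD is computed here as the standard deviation of the reference SOH divided by the RMSE, which is a common definition and an assumption on our part, since the paper does not state its formula.

```python
# A minimal sketch of the evaluation metrics in Eqs. (13)-(14), plus MAE and RPD.
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    residuals = y_true - y_pred
    rmse = np.sqrt(np.mean(residuals**2))                 # Eq. (13)
    mae = np.mean(np.abs(residuals))
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    r2 = 1.0 - ss_res / ss_tot                            # Eq. (14)
    rpd = y_true.std(ddof=1) / rmse                       # assumed RPD definition (std / RMSE)
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "RPD": rpd}
```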

3. Experimental Data

3.1. Introduction to the Dataset

The NASA Lithium-Ion Battery Cycling Dataset is a benchmark resource in the field of electrochemical energy storage research and contains systematic aging tests performed under controlled laboratory conditions. The dataset details the accelerated degradation of 34 commercial 18650-type LiCoO2 batteries (nominal capacity 2.0 Ah) under different operating parameters (temperature: 24 °C, 43 °C; discharge rate: 1 C, 2 C; depth of discharge: 0%, 20%, 80%). The experimental program covers both calendar aging and cyclic aging modes, and the voltage, current, temperature, and capacity data are recorded in real time at a 1 Hz sampling frequency, which provides a key basis for the study of battery degradation mechanisms. This dataset has been rigorously validated by peer-reviewed studies and has become a standard reference dataset for battery health modeling, remaining useful life prediction, and algorithm comparison studies in the field of energy storage informatics.
The University of Maryland State of Health (SOH) Dataset is a peer-reviewed benchmark resource for battery degradation research, comprising systematic aging tests of commercial 21700-type NMC batteries (4.8 Ah) under controlled conditions. The dataset provides high-precision measurements (voltage ± 0.1 mV, current ± 0.05%, temperature ± 0.1 °C) sampled at 10 Hz during accelerated cycling tests across varied temperatures (−10 °C to 45 °C) and discharge rates (0.5 C–3 C). Each battery’s capacity decay profile is accompanied by validated health indicators (SOH with 1% resolution) and underwent triple verification through equipment cross-validation, EIS analysis, and statistical consistency checks. This rigorously curated dataset has become a standard reference for data-driven battery health assessment and prognostic algorithm development.

3.2. Data Processing

3.2.1. NASA Dataset Feature Factor Extraction and Selection

This study extracts the following eight key characterization factors from the NASA Li-ion battery dataset: Constant-Voltage Rising Time (CVRT), Constant-Voltage Falling Time (CVFT), Constant-Current Discharge Time (CCDT), Constant-Current Charge Time (CCCT), alongside the incremental capacity (IC) curve peak value, its corresponding voltage, maximum discharge temperature, and other pertinent health indicators. Spearman and Pearson correlation coefficients between these factors and battery state of health (SOH) are computed to identify the most salient features. This rigorous feature engineering provides a robust foundation for the subsequent application of advanced artificial intelligence (AI) and machine learning (ML) methodologies. The integration of AI/ML is paramount in advancing electric vehicle research, particularly for enhancing battery management systems (BMSs), enabling precise component health prognostics, and optimizing overall performance—areas central to contemporary innovation in the field.
Spearman's correlation coefficient and Pearson's correlation coefficient are two important statistical indicators used to measure the degree of correlation between variables, and they differ significantly in their application scenarios and calculation logic. Pearson's correlation coefficient quantifies the linear correlation between two continuous variables through the ratio of the covariance to the product of the standard deviations, with a value in the range of [−1, 1]; the closer the absolute value is to 1, the stronger the linear relationship. This method assumes that the data are normally distributed, homoscedastic, and linearly related, and it is sensitive to outliers. The Pearson correlation coefficient is suitable for quantifying the degree of linear correlation between a characteristic factor and SOH, for example, when the battery capacity decreases approximately linearly with the number of cycles. Because it rests on the assumptions of normality and linearity, it may be more effective for analyzing continuous features such as changes in the constant-current charging voltage plateau.
In contrast, the Spearman rank correlation coefficient is calculated based on the rank order of the variable rather than the raw value, and overcomes the distributional assumption limitation of the Pearson coefficient by evaluating monotonic relationships, including linear and nonlinear monotonic associations. Its calculation process converts data to rank before applying the Pearson formula, making it suitable for ordered data or continuous variables that do not conform to a normal distribution and are more robust to outliers. The Spearman correlation coefficient circumvents the distributional assumptions of data through the rank transformation, making it more suitable for evaluating monotonous, but nonlinear, patterns of association. Its robustness to outliers gives it an advantage when analyzing real battery data, which often contains noise or outliers.
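A minimal sketch of this correlation screening is given below, assuming a hypothetical DataFrame `df` with one row per cycle, the extracted health factors as columns, and a `SOH` column; the |r| > 0.7 threshold follows the selection criterion reported in the next paragraph.

```python
# A hedged sketch of the Pearson/Spearman screening of health factors against SOH.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

def correlation_table(df: pd.DataFrame, features, target: str = "SOH") -> pd.DataFrame:
    rows = []
    for f in features:
        r_p, _ = pearsonr(df[f], df[target])    # linear association
        r_s, _ = spearmanr(df[f], df[target])   # monotonic (rank-based) association
        rows.append({"feature": f, "pearson": r_p, "spearman": r_s})
    return pd.DataFrame(rows).set_index("feature")

# Features whose |r| exceeds 0.7 on both coefficients are retained as model inputs, e.g.:
# table = correlation_table(df, ["CVRT", "CVFT", "CCDT", "CCCT"])
# selected = table[(table["pearson"].abs() > 0.7) & (table["spearman"].abs() > 0.7)]
```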
According to the results of the analysis of Spearman and Pearson correlation coefficients in Table 5, the selected health factors showed significant correlations (|r| > 0.7) with lithium battery SOH. Among them, the absolute values of Spearman's coefficients for HF1 (CVRT), HF2 (CCDT), HF4 (CVFT), HF5 (CCCT), HF6 (ICM), and HF8 (T-max(D)) ranged from 0.713 to 0.995, and the absolute values of Pearson's coefficients ranged from 0.695 to 0.995, which indicates that these factors are strongly monotonically and linearly correlated with SOH. In particular, both the Spearman and Pearson coefficients of HF4 (CVFT) reached 0.995, presenting a near-perfect correlation. Based on this statistical significance, there is a clear scientific basis for selecting the above health factors as modeling inputs, which can effectively characterize battery aging and improve the accuracy of SOH prediction models.
The correlation coefficients of the above characterization factors with the SOH of lithium batteries are shown in Table 5 below.
The core of this study is that, in both the NASA dataset and the University of Maryland dataset, seven feature factors related to the degree of SOH aging, as listed above, were extracted as the input features for the model developed in this study. Based on NASA's publicly available aging dataset of Li-ion batteries, the following four key health factors with the strongest correlation with SOH were extracted by systematically analyzing the charging and discharging characteristic curves of four groups of batteries: the constant-voltage rise time (CVRT), as shown in Figure 4a; the constant-voltage fall time (CVFT), as shown in Figure 4b; the constant-current discharge time (CCDT), as shown in Figure 4c; and the constant-current charging time (CCCT). Here, CVRT specifically refers to the time interval required for the battery voltage to rise from 3.7 V to 4.2 V under the standard CC-CV charging protocol. Experimental observations show that CVRT decreases significantly as the charge/discharge cycle number (N) increases (shown in Figure 4), which directly reflects the capacity decay of Li-ion batteries in the constant-current (CC) charging stage: a decrease in CVRT corresponds to a decrease in the chargeable capacity of the CC stage, revealing the loss of active lithium ions and the deterioration of the electrode material structure inside the battery. The CVRT decay trajectories of the four groups of batteries show highly consistent degradation characteristics, which provides a reliable physical basis for the quantitative assessment of SOH.
In this study, two characteristic degradation parameters of lithium-ion batteries during the constant-current discharge phase are systematically analyzed. First, the constant-voltage fall time (CVFT) is defined as the time interval in which the terminal voltage decays from 4.1 V to 3.2 V. The experimental data show (e.g., Figure 4a) that the CVFT exhibits a significant decreasing trend as the number of charge/discharge cycles (N) increases, and the shortening is most prominent in the high-voltage interval of 4.1–4.2 V. This phenomenon reveals the continuous decay of usable capacity caused by the elevated polarization impedance of the battery, and the statistical consistency of the CVFT degradation trajectories of the four battery groups verifies its robustness as a quantitative indicator of SOH. Second, by monitoring the duration over which the discharge current satisfies |I| > 1.5 A (e.g., Figure 4b), the onset of the constant-current discharge duration was found to shift systematically forward with the number of cycles.
For the constant-current charging stage, this study proposes the charging cut-off time as a novel aging characterization parameter. By accurately recording the time node at which the current drops to the 1.49 A threshold (e.g., Figure 4b), we find that the CCCT of the four groups of batteries shows a progressive shortening with the number of cycles. This temporal variation shows a strong correlation with the capacity decay (Pearson coefficient ρ = 0.761), and its physical nature can be attributed to a decrease in the diffusion coefficient of lithium ions in the cathode material and an increase in charge transfer impedance due to an increase in the thickness of the solid electrolyte interface (SEI).
In Figure 4, B0005, B0006, B0007, and B0018 are the names of the four sets of battery aging data in the NASA dataset. It can be observed that as the number of cycles increases, the four characteristic factors of CVRT, CVFT, CCDT, and CCCT for these four sets of batteries all show a nonlinear decline. Through the above analysis, it is proved that these four characteristic factors have a strong correlation with the SOH aging law of lithium batteries.

3.2.2. Maryland University Dataset Feature Factor Extraction and Selection

Similarly, this study also extracted seven feature factors with a strong correlation with SOH, namely CVRT, CCDT, CVFT, CCCT, ICM, ICPV, and T-max(D), by running Matlab R2022b code on the University of Maryland dataset. These seven feature factors were used as input features for the HOA-CNN-BIGRU-ATTENTION model to predict the output SOH. The final experimental results show that, on the University of Maryland and NASA datasets, the seven feature factors extracted and combined in this study, together with the model, deliver an excellent performance, demonstrating the generalization ability and predictive capability of this method.

3.2.3. Data Cleansing

Data cleaning is an important step in improving data quality, and this study uses a basic but rigorous approach to clean the NASA battery dataset and Maryland University dataset. First, a reasonable range of values is set based on the physical significance of the battery parameters (e.g., voltage, current, temperature, etc.), and data points that are obviously out of the normal range are eliminated. Second, by observing the data distribution (e.g., scatter plots, time series curves, etc.), outliers or unreasonable records are manually checked and removed. In addition, duplicated or missing data are processed by deletion or linear interpolation. The cleaned data are initially statistically analyzed to ensure that they meet the basic logic and consistency requirements and provide reliable inputs for subsequent modeling training.
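The cleaning procedure can be expressed compactly as below; the column names and the physical plausibility limits are illustrative assumptions, not the exact thresholds used in this study.

```python
# A hedged pandas sketch of the cleaning steps described above: range checks,
# duplicate removal, and linear interpolation of short gaps.
import pandas as pd

def clean_cycles(df: pd.DataFrame) -> pd.DataFrame:
    # 1. Keep only physically plausible readings (assumed limits for an 18650 LiCoO2 cell).
    mask = (
        df["voltage"].between(2.0, 4.3)
        & df["current"].between(-5.0, 5.0)
        & df["temperature"].between(0.0, 60.0)
    )
    df = df[mask]
    # 2. Drop exact duplicate records.
    df = df.drop_duplicates()
    # 3. Fill isolated missing samples by linear interpolation along the time axis.
    df = df.sort_values("time").interpolate(method="linear", limit=3)
    return df.reset_index(drop=True)
```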

3.2.4. Data Set Partitioning and Preprocessing Methods

This study employs a rigorous data partitioning strategy to ensure the generalization ability of the model. First, the dataset is divided into training and test sets in a ratio of 7:3, where the training set samples are disrupted by a randomized permutation index (randperm) to avoid potential order bias, while the test set maintains the original temporal order to simulate a continuous data distribution in a real scenario. Input features (P_train, P_test) and output variables (T_train, T_test) are extracted and independently normalized to ensure that features with different scales participate in model training at the same scale.
To eliminate the magnitude differences between features and accelerate model convergence, all input and output data are mapped to the interval [0, 1] using Min–Max Normalization. The normalization parameters (ps_input, ps_output) are computed based on the training set only and applied independently to the test set to strictly avoid Data Leakage and ensure the objectivity of model evaluation. The method follows the best practices in the field of machine learning and significantly improves the training stability and prediction accuracy of modeling. The formula for data normalization is as follows:
$x' = \dfrac{x_i - x_{\min}}{x_{\max} - x_{\min}}$
where $x'$ is the normalized value, $x_i$ is the raw data, $x_{\min}$ is the minimum value of the raw data, and $x_{\max}$ is the maximum value of the raw data.
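The partitioning and normalization steps can be sketched as follows; the variable names mirror those in the text (P_train, T_train, ps_input, ps_output), but the NumPy implementation stands in for Matlab's randperm and mapminmax and is an assumption.

```python
# A hedged sketch of the 7:3 split and train-only Min-Max scaling (Eq. (15)).
import numpy as np

def minmax_fit(x: np.ndarray):
    """Return (min, max) computed on the training data only, to avoid data leakage."""
    return x.min(axis=0), x.max(axis=0)

def minmax_apply(x: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    return (x - lo) / (hi - lo + 1e-12)        # Eq. (15), guarded against a zero range

def split_and_scale(features: np.ndarray, soh: np.ndarray, train_ratio: float = 0.7):
    n_train = int(len(features) * train_ratio)
    idx = np.random.permutation(n_train)        # shuffle the training samples only
    P_train, T_train = features[:n_train][idx], soh[:n_train][idx]
    P_test, T_test = features[n_train:], soh[n_train:]   # test set keeps temporal order
    ps_input = minmax_fit(P_train)              # scaling parameters from the training set only
    ps_output = minmax_fit(T_train.reshape(-1, 1))
    return (minmax_apply(P_train, *ps_input), minmax_apply(P_test, *ps_input),
            minmax_apply(T_train.reshape(-1, 1), *ps_output),
            minmax_apply(T_test.reshape(-1, 1), *ps_output))
```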

4. Experimental Validation and Model Comparison

4.1. Experimental Setup

During the experiment, 70% of the cycle data is used for model training, and the remaining 30% is used for model testing. The lithium battery cycling data used for the experiments are obtained from the NASA lithium battery cycling dataset and the University of Maryland dataset. The hardware and software configuration used for the experiments is a 13th Gen Intel(R) Core(TM) i5-13400F CPU, 24 GB of RAM, an NVIDIA GeForce RTX 4060Ti GPU, and Matlab R2022b.
The detailed structure of the HOA-CNN-BIGRU-ATTENTION model is presented in Table 6.
The parameter settings of the Hiking Optimization Algorithm have a decisive impact on the performance of the CNN-BIGRU-ATTENTION model, and reasonable parameter configurations can not only optimize the convergence speed and training stability of the model, but also significantly improve its generalization ability on the test set. The choice of hyperparameters directly affects the synergistic efficiency of the modeling components (convolutional layer, bidirectional gated loop unit, and attention mechanism), including the local sensitivity of feature extraction, the modeling ability of temporal dependence, and the weight allocation of key information. The value range of each parameter, its theoretical basis, and its mechanism of action on model performance are listed in detail in Table 7 to provide reproducible configuration benchmarks for subsequent experiments.

4.2. Comparison of Precision

Figure 5 compares the prediction performance of the CNN-BIGRU-ATTENTION model optimized using the Hiking Optimization Algorithm, the sparrow optimization algorithm, and the CNN-BIGRU-ATTENTION model without the optimization algorithm for the four charge/discharge cycles of the NASA battery dataset, B0005, B0006, B0007, and B0018, with the root mean square error as an evaluation metric. The results show that the RMSEs of the HOA and SSA are significantly lower than those of the NONE benchmark method in all cycles, indicating that both optimization algorithms can effectively improve prediction accuracy, among which the HOA performs optimally in the B0005 to B0018 cycles, with RMSE reductions of 0.416%, 0.658%, 0.081%, and 0.137% compared to SSA, respectively. Notably, the RMSE of the B0006 cycle is generally higher than that of the other cycles, which is speculated to be related to the nonlinear decline due to the accelerated electrolyte decomposition in this batch of batteries. In addition, the RMSE variance of CNN-BIGRU-ATTENTION (NONE) without using the optimization algorithm method is significantly larger than that of this model optimized by HOA and SSA, which further validates the advantage of the optimization algorithm in enhancing prediction stability. Taken together, these results show that choosing an appropriate optimization algorithm based on specific battery cycle characteristics can effectively improve prediction accuracy.
Figure 6, Figure 7, Figure 8 and Figure 9 compare the degree of fit of the CNN-BIGRU-ATTENTION model optimized using the Hiking Optimization Algorithm, the sparrow optimization algorithm, and without the optimization algorithm to NASA’s four battery charge/discharge cycling datasets, B0005, B0006, B0007, and B0018. By intercepting the fitted segments of the test set, it was found that the Hiking Optimization Algorithm (HOA) had the highest degree of characterization of the SOH aging phenomenon, the best fit, and the lowest root-mean-square error.
In addition, this study also introduced the CNN-BILSTM-ATTENTION model, which had undergone hyperparameter optimization by HOA for comparative experiments. Through multiple repeated experiments, it was proven that under the same algorithm parameter settings, the HOA-CNN-BIGRU-ATTENTION model proposed in this study had the best prediction and fitting effect on the NASA aging dataset.
The experimental results show that the HOA exhibits significant performance advantages in SOH prediction for the four battery samples B0005–B0018. Compared with the unoptimized (NONE) and SSA optimization methods, the HOA-optimized modeling prediction curves (red) are closer to the real SOH values (black) throughout the cycling cycle, and the fitting accuracy remains high, especially at the later stage (Cycle > 100).
From the above four sets of fitting comparison line graphs, it is found that the CNN-BIGRU-ATTENTION model with the HOA introduced is better in predicting the SOH of lithium batteries, and the rest of the specific parameters of the judgment indexes are shown in Table 8 below.
The experimental data show that the RMSE values of the HOA-optimized CNN-BIGRU-ATTENTION model on the four cell samples (B0005, B0006, B0007, and B0018) are 0.00689, 0.01847, 0.00857, and 0.00805, respectively, which are significantly lower than those of the comparison models SSA-CNN-BIGRU-ATTENTION and CNN-BIGRU-ATTENTION, e.g., in sample B0005, the HOA-optimized model’s RMSE is reduced by 38.6% (0.01105→0.00689) compared to the SSA-optimized model and 42.7% (0.01202→0.00689) compared to the unoptimized model. This result verifies that the HOA effectively suppresses the prediction bias and improves the generalization ability of the model through parameter optimization.
The analysis of the R2 metrics reveals that the goodness-of-fit of the HOA optimization model maintains a leading position in all four data sets (0.969, 0.922, 0.948, and 0.916), with an average improvement of 4.3% compared to the SSA optimization model and an average improvement of 6.8% compared to the base model. Especially in the B0005 sample, R2 reaches 0.969, which is close to the theoretical optimal value of 1, indicating that the model can explain 96.9% of the variation in the dependent variable. This systematic advantage stems from the HOA’s global optimization of the modeling weights, which more accurately captures the nonlinear characteristics of the battery degradation process.
The comparison of RPD metrics further supports the robustness of the HOA-optimized model. The RPD values of the model for the four samples range from 4.02 to 7.55, values that are much higher than those of the other two methods (up to 4.74 for the SSA optimization model and up to 6.38 for the base model). According to the criterion that RPD > 2.5 indicates a good model reliability, the HOA-optimized model reaches the “excellent” level (RPD > 4) in all test cases, especially in sample B0005, where the RPD is as high as 7.55, which indicates that its prediction results have a high stability and practical value. This feature is crucial for engineering applications in battery health management.
Figure 10, Figure 11, Figure 12 and Figure 13 compare the fitting degrees of the HOA-CNN-BIGRU-ATTENTION, CNN-KAN, and BIGRU models with the four battery charge–discharge cycle datasets in the University of Maryland dataset. By extracting the fitting segments of the test set, it is found that the HOA-CNN-BIGRU-ATTENTION model has the highest degree of representation for the SOH aging phenomenon, the best fitting effect, the lowest root mean square error, and the strongest generalization ability.
In Table 9 below, by presenting a comparison of the prediction errors and curve fitting degrees of the HOA-CNN-BIGRU-ATTENTION model with the more recent and advanced CNN-KAN and BIGRU models for the SOH prediction of four sets of charge–discharge data from the University of Maryland, a more precise and intuitive demonstration of the generalization ability and prediction accuracy of this model is provided.
Through quantitative analysis of the charging and discharging data of four groups of batteries (CS2-35 to CS2-38) from the University of Maryland, the HOA-CNN-BIGRU-ATTENTION model demonstrated significant predictive performance advantages. This model achieved the lowest RMSE values (0.00127 to 0.00589) across all tested batteries, which were, on average, 34.7% and 76.2% lower than those of the CNN-KAN and BIGRU models, respectively. Particularly, its performance on the CS2-35 battery was particularly outstanding, with the RMSE reduced by 87.4% compared to BIGRU. Meanwhile, its R2 values were generally close to or exceeded 0.99 (0.98113 to 0.99922), significantly higher than those of the comparison models, especially reaching 0.99922 and 0.99826 on the CS2-35 and CS2-37 batteries, respectively, with improvements of 0.5% to 2.7% and 1.3% to 4.8% compared to CNN-KAN and BIGRU, respectively. Additionally, the RMSE fluctuation range of this model across the four groups of batteries was the smallest (0.00462), much lower than that of CNN-KAN (0.00549) and BIGRU (0.01123), demonstrating a stronger stability and generalization ability. These results fully prove that the HOA-CNN-BIGRU-ATTENTION model, by integrating the attention mechanism with a hybrid architecture, has statistically significant advantages in predictive accuracy and consistency and is suitable for high-precision battery health status prediction tasks.
Through the simulation pre-tests of eight battery groups in the above two data sets, the universality, generalization ability, and innovation of the model proposed in this study are maximally demonstrated. Rigorous simulation pre-tests conducted across eight battery groups within the aforementioned datasets robustly demonstrate the universality, generalization capability, and innovative nature of the proposed model. This foundational validation underscores the model’s potential for integration with advanced artificial intelligence (AI) and machine learning (ML) paradigms. The application of AI/ML techniques is pivotal in contemporary electric vehicle research and central to advancing critical areas, including battery management system (BMS) enhancement, precise prognostics of component health (encompassing battery aging), and overall vehicle performance optimization.

4.3. Mathematical Statistical Analysis

In order to provide evidence that the HOA-CNN-BIGRU-ATTENTION model outperforms the other three groups of models in terms of mathematical and statistical performance, this paper verifies the statistical significance of the performance differences among the four model groups in the aforementioned comparative experiments. Using the non-parametric test framework recommended by Derrac, J, et al. [37,38], the RMSE and R2 of the four groups of battery data from NASA were independently ranked (with the best being 1 and the worst being 4). Then, the average ranking of the four models was calculated and the Friedman test was performed.
From Table 8 above, it can be seen that for the SOH aging data of the three groups of batteries (B0005–B0007), the RMSE and R2 indicators of the four models were ranked and averaged. The average ranking of HOA-CNN-BIGRU-ATTENTION was first, that of HOA-CNN-BILSTM-ATTENTION was second, that of SSA-CNN-BIGRU-ATTENTION was third, and that of CNN-BIGRU-ATTENTION was fourth.
We calculated the Friedman statistic for B0005 using Formula (16); the same procedure was applied to B0006 and B0007. In the formula, $k$ represents the number of compared algorithms, $N$ is the product of the number of battery groups and the number of indicators, and $R_j$ is the average ranking of the $j$-th comparison algorithm, whose squares are summed over the four algorithms.
$\chi_F^2 = \dfrac{12N}{k(k+1)} \left[ \sum_{j=1}^{k} R_j^2 - \dfrac{k(k+1)^2}{4} \right]$
It can be determined that for B0005–B0007 the value of $\chi_F^2$ is 24 in all cases. Then, based on the Python 3.13.5 code shown in the pseudo-code in Table 10 below, the p-value is calculated to be <0.0001, and the statistic of 24 far exceeds the critical value of 11.4 [37,38], indicating that the proposed HOA-CNN-BIGRU-ATTENTION model ranks first in all dataset combinations (the same applies to the B0018 battery group and the University of Maryland dataset). The Friedman test reaches the theoretical maximum $\chi_F^2 = 24$ (df = k − 1 = 3), with an exact p < 0.0001. Once again, this verifies the significant superiority of the proposed algorithm in terms of mathematical statistics. This result is among the strongest possible outcomes of the statistical test, far exceeding the conventional significance level requirements.
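As a companion to the Python pseudo-code in Table 10, the following sketch reproduces the Friedman statistic of Equation (16) under the consistent rankings described above (the proposed model ranked first in every case); the rank matrix is therefore illustrative rather than raw experimental data.

```python
# A hedged sketch of the Friedman test of Eq. (16), with k = 4 models and
# N = 8 cases (4 battery groups x 2 metrics). scipy's chi2 supplies the p-value.
import numpy as np
from scipy.stats import chi2

# Hypothetical rank matrix: each row is one (battery group, metric) case; columns are
# the 4 compared models ordered HOA-, HOA-BiLSTM-, SSA-, plain CNN-BIGRU-ATTENTION.
ranks = np.tile([1, 2, 3, 4], (8, 1))    # consistent rankings reported in the text

N, k = ranks.shape
R = ranks.mean(axis=0)                    # average rank of each model
chi2_F = 12 * N / (k * (k + 1)) * (np.sum(R**2) - k * (k + 1)**2 / 4)   # Eq. (16)
p_value = chi2.sf(chi2_F, df=k - 1)

print(f"chi2_F = {chi2_F:.1f}, p = {p_value:.2e}")   # chi2_F = 24.0, p < 0.0001
```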

4.4. Transfer Learning Prediction

In addition, this study takes into account that in actual vehicle operation conditions, in addition to the precise prediction of the aging of individual battery groups, the aging prediction between battery groups is also particularly important for the stable operation of the vehicle. Therefore, this study also selects four sets of battery group data from the University of Maryland dataset, namely CS2-35, CS2-36, CS2-37, and CS2-38, and uses the complete aging cycle data of the other three groups of batteries as the training set to predict the aging cycle of the remaining single battery. This not only provides a new innovation for the aging prediction of individual battery groups in vehicle operation, but also proposes a new idea for mutual aging prediction between different battery groups in vehicle operation. This truly proves the innovation and scientific nature of this study, as well as the generalization and universality of the model. Using the battery aging data of groups CS2-36, 37, and 38 from the public dataset of the University of Maryland as the training set input for this model, a line comparison and fitting graph for predicting the aging law of the SOH of the CS2-35 group battery from the University of Maryland dataset is shown in Figure 14 below.
When using the CS2-36, CS2-37, and CS2-38 data as the training set input for the proposed model to predict the SOH aging pattern of CS2-35, the RMSE of the predicted results was 0.06235. This transfer learning experiment demonstrates the generalization ability of the model across different battery groups. Due to time constraints and the lack of real vehicle data, this paper temporarily selected the University of Maryland dataset as the object for transfer learning, in order to verify the generalization ability of this model across different battery groups ahead of future real vehicle data. It is believed that in the future, as our battery aging experiments are carried out in the laboratory and test data become more abundant, alongside the continuous optimization and improvement of the proposed model, this model will be widely applied.
Overall, this paper demonstrates good predictive capabilities for the SOH aging of single battery packs in both the NASA dataset and the dataset from the University of Maryland. This proves the generalization and scientific nature of the model proposed in this paper. To address the lack of real vehicle data, this paper also adds experiments on the migration prediction of battery packs. However, due to time constraints and the large amount, complexity, and high difficulty of transfer learning, the accuracy and fitting degree of this model in transfer learning still need to be further improved to cope with SOH prediction tasks for battery packs in future real vehicle operations. Nevertheless, the series of existing experiments also confirm that the logic of the model proposed in this paper is scientifically reasonable and in line with the actual situation. Continuously improving and optimizing the model results and conducting a series of real vehicle battery SOH aging experiments for training and improving the model proposed in this paper are also the research focus of our next work.

5. Conclusions

In this paper, an improved CNN-BIGRU-ATTENTION model based on the Hiking Optimization Algorithm (HOA) is proposed and validated on the NASA dataset and the Maryland University SOH (state of health) dataset. The main findings of this method are as follows:
In this study, a CNN-BIGRU modeling optimization method based on the Hiker Optimization Algorithm (HOA) is proposed, which significantly improves the convergence speed, training stability, and prediction performance of the model through the adaptive parameter tuning mechanism of HOA, thus providing a new solution for the parameter optimization of complex models.
In this study, eight basic feature factors are extracted from NASA public datasets and the Maryland University dataset, including four key feature factors for characterizing the state of health (SOH) of lithium-ion batteries, which enhances SOH characterization in a multi-dimensional and highly discriminative manner. These optimized feature sets are used as input data for the HOA-CNN-BIGRU-Attention model, aiming to improve the model’s prediction accuracy of true battery capacity. This feature engineering approach not only enriches the physical meaning of the input data by combining the base features with the higher-order information of the IC curves, but also significantly enhances the model’s ability to characterize battery degradation patterns, thus providing a more reliable basis for the accurate estimation of SOH.
In this study, the Attention mechanism is introduced into the CNN-BIGRU modeling architecture, which effectively solves the following three key problems of the original model by establishing a dynamic weight allocation strategy and explicit dependency modeling: (1) redundant information interference in the high-dimensional feature space; (2) insufficient modeling of long time series dependencies; and (3) excessive sensitivity to input noise. The experimental results show that this improvement not only significantly improves model performance, but also enhances the transparency and interpretability of the model decision-making process through visual analysis of the attention weights, providing a more robust deep learning solution for time series signal processing tasks.
Looking ahead, future research should focus on enhancing the interpretability and robustness of hybrid deep learning models like HOA-CNN-BiGRU-Attention through explainable AI (XAI) techniques, while exploring advanced metaheuristic optimizers (e.g., quantum-inspired or transfer learning-based algorithms) for high-dimensional hyperparameter spaces. Investigations into multi-modal data fusion incorporating thermal, electrochemical impedance spectroscopy (EIS), and mechanistic degradation models could further improve SOH prediction generalizability across diverse battery chemistries and operating conditions. Additionally, developing lightweight variants optimized for edge-computing deployment in battery management systems represents a critical direction for real-world industrial adoption.
Overall, the present method achieves the accurate estimation of the SOH of Li-ion batteries in different cases. In the next work, the model will be simplified to reduce the consumption of computational resources while guaranteeing the prediction accuracy.

Author Contributions

Conceptualization, Q.D.; methodology, Q.D.; software, Z.L.; validation, H.W.; formal analysis, Z.L. and R.D.; investigation, Q.D.; data curation, Z.L. and R.D.; writing—original draft preparation, Q.D.; writing—review and editing, L.W. and L.L.; visualization, H.W.; supervision, R.D.; project administration, L.W.; funding acquisition, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (grant number 2023YFB2406000), the Scientific and Technological Research Project of the Hubei Provincial Department of Education (grant number D20231405), and the Knowledge Innovation Program of Wuhan-Shuguang Project (grant number 2023010201020372).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are grateful to the reviewers for their review and guidance. The authors would also like to thank the Hubei Key Laboratory for High-efficiency Utilization of Solar Energy and Operation Control of Energy Storage System for its support.

Conflicts of Interest

Authors Chunxia Fu and Lujun Wang were employed by the Powerchina Equipment Research Institute Co., Ltd. Author Yunbin Zhang was employed by the company Hubei New Energy Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. Article workflow diagram.
Figure 2. Illustration of the Hiking Optimization Algorithm.
Figure 3. CNN-BIGRU-Attention model.
Figure 4. Four key characterization factors.
Figure 5. Comparison of RMSEs of the two algorithms on four datasets. (a) Model B0005 accuracy comparison; (b) Model B0006 accuracy comparison; (c) Model B0007 accuracy comparison; (d) Model B0018 accuracy comparison.
Figure 6. B0005 SOH fit comparison line plot.
Figure 7. B0006 SOH fit comparison line plot.
Figure 8. B0007 SOH fit comparison line plot.
Figure 9. B0018 SOH fit comparison line plot.
Figure 10. CS2-35 SOH fit comparison line plot.
Figure 11. CS2-36 SOH fit comparison line plot.
Figure 12. CS2-37 SOH fit comparison line plot.
Figure 13. CS2-38 SOH fit comparison line plot.
Figure 14. SOH prediction of CS2-35 using the aging data of CS2-36, CS2-37, and CS2-38.
Table 1. Introduction to the English abbreviations cited in the text.
English Abbreviation | Definition
CVRT | The time interval required for the battery voltage to rise from 3.7 V to 4.2 V during the standard CC-CV charging protocol.
CVFT | The time interval during which the terminal voltage decays from 4.1 V to 3.2 V under constant-current discharge conditions.
CCDT | The duration of the discharge phase where the magnitude of the discharge current exceeds 1.5 A (|I| > 1.5 A) under constant-current conditions.
CCCT | The time interval from the start of charging until the charging current drops to a threshold of 1.49 A at the transition to the CV phase.
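For clarity, the sketch below shows how these four time-based features could be extracted from sampled cycle data. It is a minimal illustration assuming each cycle is available as time/voltage/current arrays; the function names and thresholds handling are placeholders rather than the paper's actual code.

import numpy as np

def charge_features(t, v, curr):
    """CVRT and CCCT from one CC-CV charge cycle (t in s, v in V, curr in A)."""
    # CCCT: time from the start of charging until the current drops to 1.49 A
    ccct = t[np.argmax(curr <= 1.49)] - t[0]
    # CVRT: time for the terminal voltage to rise from 3.7 V to 4.2 V
    cvrt = t[np.argmax(v >= 4.2)] - t[np.argmax(v >= 3.7)]
    return cvrt, ccct

def discharge_features(t, v, curr):
    """CVFT and CCDT from one constant-current discharge cycle."""
    # CVFT: time for the terminal voltage to decay from 4.1 V to 3.2 V
    cvft = t[np.argmax(v <= 3.2)] - t[np.argmax(v <= 4.1)]
    # CCDT: total time during which |I| exceeds 1.5 A
    dt = np.diff(t, prepend=t[0])
    ccdt = dt[np.abs(curr) > 1.5].sum()
    return cvft, ccdt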
Table 2. The meanings of the mathematical letters in the HOA algorithm equations.
Symbol | Description
i, I | An individual hiker, and the total number of hikers
T | Maximum number of function evaluations
t | An iteration or time
w_{i,t} | Velocity of hiker i at time t (km/h)
S_{i,t} | Slope of the trail
θ_{i,t} | Angle of inclination of the trail experienced at time t by hiker i
γ_{i,t} | Uniformly distributed random variable in [0, 1]
α_{i,t} | Sweep factor (SF) of hiker i, lying in [1, 3]
β_{i,t} | Current position of hiker i at time t
β_best | Position of the lead hiker
β_{i,t+1} | Updated position of hiker i
j | Uniformly distributed random variable in [0, 1]
A1_j | Lower bound of the decision variables
A2_j | Upper bound of the decision variables
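To connect these symbols with the pseudocode in Table 3, the HOA update rules can be summarized as follows; this is a compact LaTeX restatement inferred from Table 3, not a verbatim copy of Equations (1)-(5).

\begin{align*}
w_{i,t} &= 6\,e^{-3.5\left|S_{i,t}+0.05\right|} && \text{(Equation (1), Tobler's hiking function)} \\
S_{i,t} &= \tan\theta_{i,t} && \text{(Equation (2))} \\
\beta_{i,t} &= \beta_{i,t-1} + w_{i,t} + \gamma_{i,t}\left(\beta_{\mathrm{best}} - \alpha_{i,t}\,\beta_{i,t-1}\right) && \text{(Equations (3) and (4))} \\
\beta_{i,0,j} &= A_{1j} + \delta_j\left(A_{2j} - A_{1j}\right),\quad \delta_j \sim U(0,1) && \text{(Equation (5))}
\end{align*}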
Table 3. The pseudocode for the operation of the Hiking Optimization Algorithm.
Algorithm: Hiking Optimization Algorithm
// Initialization (Equation (5))
for i = 1 to pop do
  for j = 1 to dim do
    δ_j ← U(0, 1)
    β_{i,0,j} = lb_j + δ_j × (ub_j − lb_j)
  end for
  fit_i = fobj(β_{i,0})
end for
// Main loop
for t = 1 to maxIter do
  best_index ← argmin(fit)
  Xbest ← β_{best_index}                              // β_{best,t} in Equation (3)
  for i = 1 to pop do
    θ_{i,t} ← rand(0, 50)
    s_{i,t} = tan(θ_{i,t} × π/180)                    // Equation (2)
    w_{i,t} = 6 × exp(−3.5 × |s_{i,t} + 0.05|)        // Equation (1)
    α_{i,t} ← randint(1, 2)
    γ_{i,t} ← U(0, 1)
    newVelocity = w_{i,t} + γ_{i,t} × (Xbest − α_{i,t} × β_{i,t−1})   // Equation (3)
    newPosition = β_{i,t−1} + newVelocity             // Equation (4)
    newPosition = max(lb, min(ub, newPosition))       // keep within bounds
    newFit = fobj(newPosition)
    if newFit < fit_i then
      β_{i,t} = newPosition
      fit_i = newFit
    end if
  end for
  Iter_curve[t] = min(fit)
end for
Xbest = β_{argmin(fit)}
Best_fitness = min(fit)
End
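For reference, a minimal NumPy sketch of the loop in Table 3 is given below. The objective function fobj, the bounds lb/ub, and the population settings are placeholders supplied by the caller; this is an illustrative re-implementation, not the authors' code.

import numpy as np

def hoa(fobj, lb, ub, pop=40, max_iter=100, rng=None):
    """Minimal Hiking Optimization Algorithm loop following Table 3."""
    rng = np.random.default_rng(rng)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    # Equation (5): random initialization inside the bounds
    pos = lb + rng.random((pop, dim)) * (ub - lb)
    fit = np.array([fobj(p) for p in pos])
    curve = np.empty(max_iter)
    for t in range(max_iter):
        x_best = pos[np.argmin(fit)]                      # lead hiker
        for i in range(pop):
            theta = rng.uniform(0, 50)                    # trail inclination in degrees
            s = np.tan(np.radians(theta))                 # Equation (2)
            w = 6.0 * np.exp(-3.5 * abs(s + 0.05))        # Equation (1), Tobler's hiking function
            alpha = rng.integers(1, 3)                    # sweep factor in {1, 2}
            gamma = rng.random()
            vel = w + gamma * (x_best - alpha * pos[i])   # Equation (3)
            cand = np.clip(pos[i] + vel, lb, ub)          # Equation (4) plus bound handling
            f_cand = fobj(cand)
            if f_cand < fit[i]:                           # greedy replacement
                pos[i], fit[i] = cand, f_cand
        curve[t] = fit.min()
    return pos[np.argmin(fit)], fit.min(), curve

# Example usage on a toy objective (sphere function):
# best, best_fit, curve = hoa(lambda x: np.sum(x**2), lb=[-5, -5, -5], ub=[5, 5, 5])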
Table 4. Pseudocode for the relationship between CNN and the attention mechanism.
Algorithm: CNN-BIGRU-ATTENTION
Input: X ∈ R^{N × T × F}
H_folded = Fold(X)                              // CNN feature extraction
H_conv1 = Conv2D(H_folded)
H_conv2 = ReLU(Conv2D(H_conv1))
z = GlobalAveragePooling(H_conv2)               // Attention, Equations (10)–(12)
g = FC_2(ReLU(FC_1(z)))
β = Sigmoid(g)
H_att = β × H_conv2
H_unfolded = Unfold(H_att)                      // BIGRU
H_forward = GRU(H_unfolded)
H_backward = GRU(Reverse(H_unfolded))
H_bi = Concatenate(H_forward, H_backward)
Y_pred = FC(H_bi)                               // Output
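As an illustration of how the layer sequence in Table 4 could be realized, a PyTorch sketch is given below. The kernel sizes and channel counts follow Table 6, while the hidden width and output dimension are placeholders (in the paper they are selected by HOA); the folding/unfolding steps are approximated by simple reshapes, so this is a possible realization rather than the authors' implementation.

import torch
import torch.nn as nn

class CNNBiGRUAttention(nn.Module):
    """Sketch of the Fold -> CNN -> SE-style attention -> Unfold -> BiGRU -> FC pipeline."""
    def __init__(self, n_features, hidden=16, out_dim=1):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=(3, 1))    # kernel [3 x 1], 32 channels
        self.conv2 = nn.Conv2d(32, 64, kernel_size=(3, 1))   # kernel [3 x 1], 64 channels
        self.relu = nn.ReLU()
        # SE-style channel attention: GAP -> FC(16) -> ReLU -> FC(64) -> Sigmoid
        self.fc1 = nn.Linear(64, 16)
        self.fc2 = nn.Linear(16, 64)
        self.bigru = nn.GRU((n_features - 4) * 64, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, out_dim)

    def forward(self, x):                          # x: (N, T, F) feature sequences
        n, t, f = x.shape
        h = x.reshape(n * t, 1, f, 1)              # "fold": one (F x 1) map per time step
        h = self.relu(self.conv1(h))
        h = self.relu(self.conv2(h))               # (N*T, 64, F-4, 1)
        z = h.mean(dim=(2, 3))                     # global average pooling -> (N*T, 64)
        beta = torch.sigmoid(self.fc2(self.relu(self.fc1(z))))
        h = h * beta[:, :, None, None]             # channel-wise attention reweighting
        h = h.reshape(n, t, -1)                    # "unfold" and flatten -> (N, T, (F-4)*64)
        out, _ = self.bigru(h)                     # forward and backward GRU, concatenated
        return self.head(out[:, -1, :])            # regression output (N, out_dim)

# Example: y_hat = CNNBiGRUAttention(n_features=8)(torch.randn(4, 20, 8))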
Table 5. Correlation coefficients of selected characterization factors.
HF | HF1 (CVRT) | HF2 (CCDT) | HF4 (CVFT) | HF5 (CCCT)
Spearman | −0.918 | 0.913 | 0.995 | −0.923
Pearson | −0.753 | 0.749 | 0.995 | −0.761
HF | HF6 (ICM) | HF7 (ICPV) | HF8 (T-max(D))
Spearman | 0.823 | −0.712 | −0.721
Pearson | 0.744 | −0.609 | −0.695
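The coefficients above can be reproduced with standard SciPy routines; the sketch below is a minimal example in which the per-cycle feature series and SOH series are placeholder inputs.

import numpy as np
from scipy import stats

def feature_correlations(features, soh):
    """Spearman and Pearson correlation of each health feature with SOH (cf. Table 5)."""
    soh = np.asarray(soh, float)
    out = {}
    for name, values in features.items():
        rho, _ = stats.spearmanr(np.asarray(values, float), soh)
        r, _ = stats.pearsonr(np.asarray(values, float), soh)
        out[name] = (rho, r)
    return out

# Example with placeholder data (replace with real per-cycle series):
# corr = feature_correlations({"CVFT": np.random.rand(100)}, np.random.rand(100))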
Table 6. Detailed structure of HOA-CNN-BIGRU-ATTENTION.
Layer Type | Name | Activation | Output Shape | Notes
Sequence Input | sequence | - | [f_1, 1] | Dimension of the input sequence
Sequence Folding | seqfold | - | [f_, 1, 1, 1] | Convert the sequence data into a 4D format suitable for 2D convolution
2D Convolution | conv_1 | - | [f_ − 2, 1, 32] | Kernel [3 × 1], 32 channels
ReLU | relu_1 | ReLU | [f_ − 2, 1, 32] | -
2D Convolution | conv_2 | - | [f_ − 4, 1, 64] | Kernel [3 × 1], 64 channels
ReLU | relu_2 | ReLU | [f_ − 4, 1, 64] | -
Global Average Pooling | gapool | - | [1, 1, 64] | Dimensional compression in space
Fully Connected | fc_2 | - | [16] | First layer of the SE attention mechanism
ReLU | relu_3 | ReLU | [16] | -
Fully Connected | fc_3 | - | [64] | Second layer of the SE attention mechanism
Sigmoid | sigmoid | Sigmoid | [64] | Generate attention weights
Multiplication | multiplication | - | [f_ − 4, 1, 64] | -
Sequence Unfolding | sequnfold | - | [f_, 64] | Restore sequence format
Flatten | flatten | - | [f_ × 64] | -
GRU | gru | Tanh/Sigmoid | [best_hd] | Forward GRU
Flip | flip | - | [f_ × 64] | Sequence reversal
GRU | gru2 | Tanh/Sigmoid | [best_hd] | Backward GRU (forms the bidirectional GRU together with gru)
Concatenation | cat | - | [2 × best_hd] | -
Fully Connected | fc | - | [outdim] | Output dimension
Regression Output | regressionoutput | - | [outdim] | -
Table 7. Optimization algorithm hyperparameter settings.
Parameter | Search_Agents | Max_iteration | Dim | Learning Rate 1 | Learning Rate 2 | Hide Nodes 1 | Hide Nodes 2
Value | 40 | 1 | 3 | 1 × 10−6 | 1 × 10−2 | 4 | 16
Optimized variable | best-hd | best-lr | best-l2
HOA output | Best_pos(1, 2) | Best_pos(1, 1) | Best_pos(1, 3)
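Read together with Table 3, these settings define the HOA search space. The sketch below illustrates one possible wrapper around the hoa() sketch given earlier; the decision-variable layout and the train_and_validate helper are assumptions for illustration only, not the paper's actual configuration.

import numpy as np

# Assumed search space implied by Table 7: learning rate bounded by Learning Rate 1 and 2,
# hidden-node counts bounded by Hide Nodes 1 and 2.
lb = np.array([1e-6, 4.0, 4.0])     # assumed layout: [learning rate, hidden nodes 1, hidden nodes 2]
ub = np.array([1e-2, 16.0, 16.0])

def fitness(position):
    """HOA objective: validation RMSE of the network trained with these hyperparameters."""
    lr = float(position[0])
    hd1 = int(round(position[1]))
    hd2 = int(round(position[2]))
    return train_and_validate(lr, hd1, hd2)   # hypothetical helper returning validation RMSE

# best_pos, best_rmse, _ = hoa(fitness, lb, ub, pop=40, max_iter=...)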
Table 8. Evaluation metrics for three sets of comparison experiments with different batteries from the NASA dataset.
Methods | Battery ID | RMSE | R2 | RPD
HOA-CNN-BIGRU-ATTENTION | B0005 | 0.00689 | 0.969 | 7.55
HOA-CNN-BIGRU-ATTENTION | B0006 | 0.01847 | 0.922 | 5.33
HOA-CNN-BIGRU-ATTENTION | B0007 | 0.00857 | 0.948 | 5.72
HOA-CNN-BIGRU-ATTENTION | B0018 | 0.00805 | 0.916 | 4.02
SSA-CNN-BIGRU-ATTENTION | B0005 | 0.01105 | 0.919 | 4.58
SSA-CNN-BIGRU-ATTENTION | B0006 | 0.02505 | 0.857 | 3.69
SSA-CNN-BIGRU-ATTENTION | B0007 | 0.00938 | 0.938 | 4.74
SSA-CNN-BIGRU-ATTENTION | B0018 | 0.00942 | 0.885 | 2.96
HOA-CNN-BILSTM-ATTENTION | B0005 | 0.00944 | 0.941 | 5.51
HOA-CNN-BILSTM-ATTENTION | B0006 | 0.01983 | 0.910 | 6.45
HOA-CNN-BILSTM-ATTENTION | B0007 | 0.00930 | 0.939 | 4.16
HOA-CNN-BILSTM-ATTENTION | B0018 | 0.01287 | 0.786 | 2.18
CNN-BIGRU-ATTENTION | B0005 | 0.01202 | 0.905 | 6.38
CNN-BIGRU-ATTENTION | B0006 | 0.30227 | 0.792 | 2.79
CNN-BIGRU-ATTENTION | B0007 | 0.01025 | 0.926 | 5.28
CNN-BIGRU-ATTENTION | B0018 | 0.01087 | 0.847 | 2.71
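For completeness, the sketch below computes the three metrics reported in Tables 8 and 9. RPD is taken here as the standard deviation of the measured SOH divided by the RMSE, a common convention that is assumed rather than quoted from the paper.

import numpy as np

def rmse_r2_rpd(y_true, y_pred):
    """Evaluation metrics used in Tables 8 and 9."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    rpd = np.std(y_true, ddof=1) / rmse        # ratio of performance to deviation (assumed definition)
    return rmse, r2, rpd

# rmse, r2, rpd = rmse_r2_rpd(soh_measured, soh_predicted)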
Table 9. Evaluation metrics for three sets of comparison experiments with the University of Maryland dataset.
Methods | Battery ID | RMSE | R2
HOA-CNN-BIGRU-ATTENTION | CS2-35 | 0.00201 | 0.99922
HOA-CNN-BIGRU-ATTENTION | CS2-36 | 0.00589 | 0.98113
HOA-CNN-BIGRU-ATTENTION | CS2-37 | 0.00191 | 0.99826
HOA-CNN-BIGRU-ATTENTION | CS2-38 | 0.00127 | 0.99856
CNN-KAN | CS2-35 | 0.00377 | 0.99729
CNN-KAN | CS2-36 | 0.00717 | 0.97202
CNN-KAN | CS2-37 | 0.00287 | 0.9961
CNN-KAN | CS2-38 | 0.00168 | 0.99749
BIGRU | CS2-35 | 0.016 | 0.95093
BIGRU | CS2-36 | 0.00684 | 0.97455
BIGRU | CS2-37 | 0.00476 | 0.9892
BIGRU | CS2-38 | 0.00704 | 0.95606
Table 10. Pseudo-code for calculating the p-value in non-parametric tests.
Python pseudo-code:
from scipy.stats import chi2              # chi-squared distribution (SciPy)
chi2_stat = 24                            # test statistic, χ_F² = 24
df = 3                                    # degrees of freedom, k − 1 = 3
p_value = chi2.sf(chi2_stat, df)          # upper-tail (survival function) p-value
