Article

Multi-State Online Estimation of Lithium-Ion Batteries Based on Multi-Task Learning

School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
*
Author to whom correspondence should be addressed.
Energies 2023, 16(7), 3002; https://doi.org/10.3390/en16073002
Submission received: 9 February 2023 / Revised: 19 March 2023 / Accepted: 21 March 2023 / Published: 25 March 2023

Abstract
Deep learning-based state estimation of lithium batteries is widely used in battery management system (BMS) design. However, due to the limitation of on-board computing resources, multiple single-state estimation models are more difficult to deploy in practice. Therefore, this paper proposes a multi-task learning network (MTL) combining a multi-layer feature extraction structure with separated expert layers for the joint estimation of the state of charge (SOC) and state of energy (SOE) of Li-ion batteries. MTL uses a multi-layer network to extract features, separating task sharing from task-specific parameters. The underlying LSTM initially extracts time-series features. The separated expert layer, consisting of task-specific and shared experts, extracts features specific to different tasks and shared features for multiple tasks. The information extracted by different experts is fused through a gate structure. Tasks are processed based on specific and shared information. Multiple tasks are trained simultaneously to improve performance by sharing the learned knowledge with each other. SOC and SOE are estimated on the Panasonic dataset, and the model is tested for generalization performance on the LG dataset. The Mean Absolute Error (MAE) values for the two tasks are 1.01% and 0.59%, and the Root Mean Square Error (RMSE) values are 1.29% and 0.77%, respectively. For SOE estimation tasks, the MAE and RMSE values are reduced by 0.096% and 0.087%, respectively, when compared with single-task learning models. The MTL model also achieves reductions of up to 0.818% and 0.938% in MAE and RMSE values, respectively, compared to other multi-task learning models. For SOC estimation tasks, the MAE and RMSE values are reduced by 0.051% and 0.078%, respectively, compared to single-task learning models. The MTL model also outperforms other multi-task learning models, achieving reductions of up to 0.398% and 0.578% in MAE and RMSE values, respectively. 
In the process of simulating online prediction, the MTL model consumes 4.93 ms, which is less than the combined time of multiple single-task learning models and almost the same as that of other multi-task learning models. The results show the effectiveness and superiority of this method.

1. Introduction

In recent years, environmental pollution has become a growing problem, and the cost of oil has risen steadily. Electric vehicles (EVs) are gradually being explored as an alternative to conventional fuel vehicles [1]. Compared with conventional fuel vehicles, electric vehicles use resources more efficiently and are less costly to maintain. As a cleaner form of personal transport, they support governments’ comprehensive agendas to address greenhouse gas emissions. However, for many years, mileage anxiety, charging anxiety, and battery anxiety have been the three major obstacles to the development of electric vehicles. Internal combustion engines and electric motors will continue to coexist for many years because these three concerns are difficult to allay. In this process, lithium-ion batteries have become the standard power source for electric vehicles [2]. They have a high energy density, a low self-discharge rate, almost no memory effect, a high open-circuit voltage, and a long life. Graphite is widely used as the carbon component in the electrodes of lithium-ion batteries; however, it is a difficult material to obtain. Researchers have found that plant-derived carbon has potential for use in supercapacitors: banana peels, grass, and water spinach (Ipomoea aquatica) can all be used as graphite substitutes. They are more accessible, environmentally friendly, and cheaper [3]. Lithium-ion batteries will be used even more widely as the potential applications of biomass in batteries continue to be explored. However, lithium-ion batteries cannot monitor their own health status or remaining power, and on their own they provide no system control or real-time human–computer interaction.
In order to monitor battery status, control battery charging and discharging, ensure that the battery is not short-circuited, check battery health, and interact with the entire vehicle’s information [4], the BMS is created. Battery SOC describes the remaining battery capacity. It is one of the key indicators for assessing the stability and safety of lithium batteries under current operating conditions. Accurate SOC estimation can help to avoid overcharging or discharging [5], avoid explosions caused by thermal reactions, and provide safety assurance to the user, and it is the foundation for the majority of BMS decisions. The SOE [6] measures the ratio between the battery’s maximum and current available energy. It is important reference information for the energy management, distribution, and control of the whole vehicle. It reflects more effectively the influence of the current moment and previous charging and discharging conditions of the battery. It is more suitable for predicting range parameters, such as the range of the EV [7], to relieve the driver’s mileage anxiety. In addition, accurate knowledge of the SOE value can facilitate the BMS’s ability to develop more reasonable energy control strategies and optimize the performance of EV energy control. Thus, the EV range will be extended and battery energy usage will increase, both of which are crucial for EV economy improvement.
SOC and SOE are not physical quantities and cannot be measured directly by devices such as sensors. Common approaches include the ampere-hour integral method [8], the open-circuit voltage method [9], and model-based methods. These methods, however, are heavily affected by ambient temperature, aging, and sensor errors. The electrochemical model method [10,11] and the equivalent circuit model (ECM) [12,13,14] place high requirements on the parameters of the model. They require painstaking experiments and deep battery research by experts in the field. These techniques are inadequate for the complicated, dynamically shifting real-world operating conditions of electric vehicles. Recently, with the arrival of the big data era, deep learning (DL) has made significant accomplishments in areas such as image and speech processing. It has powerful automatic feature extraction capabilities and advantages in handling high-dimensional and non-linear data. With the rapid development of the intelligent auto industry, many researchers are also concentrating on applying deep learning algorithms to battery state estimation. Data-driven methods based on deep learning do not require a deep study of the internal chemical reaction characteristics; with a large amount of data, training can achieve fast and efficient estimation. Accurate estimates can guarantee user experience and safety. Long Short-Term Memory (LSTM) networks have been used to estimate the battery SOC. Yang et al. [15] proposed using LSTM networks to simulate the complex dynamics of lithium batteries to estimate SOC. They compared it with a model-based approach in terms of computation time and robustness to an unknown initial SOC. The model obtains more accurate estimates than the Unscented Kalman Filter (UKF) [16], with RMSE and MAE within 2% and 1%, respectively, even for incorrect initial SOC values. Bian et al. [17] combined a bi-directional LSTM network with a codec structure to estimate the battery SOC.
The method has an MAE of only 1.07% at varying temperature conditions and improved the MAE by 12% and 16% at 25 °C compared to GRU-ED and LSTM-ED. In addition to this, estimation using convolutional modules and attentional mechanisms has also been explored. Wang et al. [18] used a convolution module to achieve a more accurate estimation of SOC. They superimposed multiple measurable variables over a period of time to serve as model inputs, combining process information and interrelationships generated by the voltage or current. The MAE and RMSE values are 1.260% and 0.998%, respectively. Yang et al. [19] proposed a deep learning method based on a two-stage attention mechanism. The method effectively reduces the effect of noise on SOC estimation. They used lithium battery domain knowledge such as current, voltage, and temperature as features to input into a coder-based network of gated recurrent units. The attention mechanism was used for preprocessing in the encoder. In the decoder stage, another attention mechanism was used to consider the correlation of time series with reference to the time-scale state of the previous encoder. The MAE value is less than 0.5% in the experimental results. Meanwhile, for battery SOE estimation, researchers have explored different approaches when using neural network methods. Liu et al. [20] proposed a direct SOE estimation method based on an improved BPNN under dynamic current and temperature conditions. However, this method is an open-loop estimation, and the estimation accuracy is poor due to incorrect measured values of battery parameters. Wang et al. [21] developed a sliding window neural network model to describe the voltage response of lithium-ion batteries (LIBs) under current and temperature excitation. They used a Monte Carlo sampling method based on a Bayesian probabilistic learning framework to estimate the SOE of the LIBs. 
There are also researchers who start from an analysis of the battery energy state problem and first predict future operating conditions to achieve an accurate SOE estimate. Liu et al. [22] proposed a driving condition identification algorithm based on information entropy theory. They applied Markov chain theory to construct a driving condition prediction algorithm and established an electric vehicle system model. Then, they simulated obtaining the predicted battery operating conditions corresponding to the predicted driving conditions. The final SOE estimation based on EV working condition identification and prediction was thus achieved. Ren et al. [23] proposed an SOE estimation method based on future average power predictions. A moving-average method applied to the historical load was used to obtain predictions of the future SOC, voltage, and temperature sequences. The battery’s SOE was then determined by accumulating the voltage and capacity sequences. The prediction-based approach can precisely estimate the battery SOE and takes into account the impact of potential LIB loads. The key to this approach, however, is how to accurately predict the complex future operating conditions of the battery.
Existing research on individual state estimation has achieved milestones. Studies have combined attention mechanisms, filters [24], and other techniques to achieve relatively high-precision estimation under multiple operating conditions. However, we also find that most studies used model fusion, obtaining higher accuracy with more complex estimation models. The estimation of multiple states then requires multiple complex single-state models. This undoubtedly increases the difficulty of deploying the models in EV controllers for practical applications. Multi-task learning models that perform well in other fields cannot be directly applied to state estimation in the battery field. Few studies have discussed model deployment costs in detail. There is a lack of research on how to balance on-board computational constraints with high-accuracy estimates and how to achieve joint multi-state battery estimation.
In order to better meet the needs of practical applications and achieve accurate battery state estimation with limited on-board computing resources, in this paper, a multi-task learning network combining multi-layer extraction structure and a separated expert layer is proposed for the first time for the multi-state joint online estimation of SOE and SOC of lithium batteries. MTL in this paper adopts a multi-layer extraction structure. It separates task sharing from specific task parameters. The underlying LSTM initially extracts time-series features. The separated expert layer extracts specific features and shared features from multiple tasks. The final result is the joint multi-state online estimation of SOE and SOC for lithium batteries. A Panasonic dataset is used to simulate the processes of off-line training and online prediction. MTL improves estimation accuracy and reduces the computing resources required for multi-state estimation. We also compared our model with single-task estimation models and other multi-task estimation models and conducted generalization performance tests on other datasets. The effectiveness and superiority of this method are proven by experiments.
The paper is organized as follows: Section 2 describes the relevant problem and the current state of the art in multi-state estimation and multi-task learning for batteries; Section 3 describes the model structure and optimization objectives in detail; Section 4 gives a description of the specific experimental design and a comparison and discussion of the experimental results; Section 5 draws conclusions. Section 6 provides a discussion of future trends and remaining issues.

2. Related Work

This section gives a description of the problem and the current state of research in joint multi-state estimation and multi-task learning for batteries.

2.1. Combined Multi-State Estimation of Batteries

The physical parameters of a vehicle’s lithium battery can be measured by sensors, while the battery’s state cannot be measured directly. The ability of neural networks to capture non-linear relationships is used to relate the measurable values obtained during the discharge of lithium-ion batteries to the corresponding battery state values. On this basis, the model directly provides accurate estimates of SOC and SOE. We define the data used in multi-state estimation as $D = \{(V_i, I_i, T_i), (SOC_i, SOE_i)\}_{i=1}^{N}$, where $V_i$, $I_i$, and $T_i$ denote voltage, current, and temperature, respectively. Research has also been conducted on the joint estimation [25] of SOC and SOE. Zheng et al. [26] proposed a joint SOC and SOE estimation framework with high accuracy and robustness. However, similar joint estimation methods require an accurate SOC as input; an error in the SOC estimate propagates into the SOE estimate. Ma et al. [27] estimated SOC and SOE simultaneously with an LSTM-based deep neural network. The proposed algorithm was verified on two dynamic driving cycles under different operating conditions. Zhang et al. [28] proposed a non-experimental reconstruction of the relationship between open-circuit voltage (OCV) and SOC/SOE that requires no additional cell tests. They evaluated it on an open-source dataset and on their own data; on the LG dataset, the MAE and RMSE of the SOC/SOE estimates can be roughly limited to less than 1%. Prashant Shrivastava et al. [29] proposed a simple combined SOC and SOE estimation method that estimates both states with low computational complexity; the RMSE for SOC and SOE is consistently below 1.6%. Xia et al. [30] proposed a new fusion equivalent circuit model for lithium-ion batteries that takes into account the effects of temperature.
They used an adaptive noise correction-dual extended Kalman filtering algorithm to implement SOC and SOE estimation. The estimation errors are within 1.83% and 1.92% for different operating temperatures and conditions. Prashant Shrivastava et al. [31] combined the relationship between SOC and SOE for joint estimation using the dual forgetting factor adaptive extended Kalman filter (DFFAEKF) algorithm. Verification was carried out using two chemical cells under dynamic load profiles at different operating temperatures. Chen et al. [32] applied the adaptive dual fractional-order extended Kalman filter algorithm to achieve the joint estimation of SOC and SOE to solve the parameter constraint. However, these methods do not perform well in terms of estimation accuracy. At the same time, they need a great deal of expertise in the field of batteries. Li et al. [33] proposed a multi-state estimation method for lithium-ion hybrid capacitors (LIHCs). They used the LSTM and the ampere-time integration method to correct the online predicted OCV. The SOC and SOE are predicted using the correct OCV and other model parameters, with RMSE values of 2.1% and 2.3%, respectively. Zhang et al. [34] used ECM to represent battery performance and applied multi-timescale filters for battery model parameter identification and battery state prediction. Their approach has been validated in accordance with the new European driving cycle operating conditions. Liu et al. [35] proposed a mean-variance model using a multi-timescale H-infinity filter (Mts-HIF) to achieve SOC and capacity estimation for battery packs. For capacity estimation, the RMSE is within 2%. For SOC estimation, the RMSE is within 1.2%.
There are already some results on joint battery SOC and SOE estimation, but they mostly require deep battery domain expertise. Most existing deep learning-based joint estimation methods use hard parameter sharing. However, such networks are unable to distinguish task-specific features from shared features, and interference between tasks during training makes it difficult to provide accurate estimates for all tasks. What is lacking is a joint estimation method that provides highly accurate estimation of multiple battery states without requiring much knowledge of the battery’s internal chemical mechanisms.

2.2. Multi-Task Learning

The human brain is capable of autonomously learning multiple tasks. It can aid understanding of the current task by referring to useful knowledge from other tasks. Inspired by this function of the human brain, a multi-task model processes multiple tasks at the same time and uses the knowledge shared across tasks to assist the learning of each task. Multi-task learning is defined as follows: given $n$ tasks to be learned, $\{M_i\}_{i=1}^{n}$, with correlations between different tasks, MTL aims to obtain accurate results for each task by jointly learning the $n$ tasks and the knowledge shared between them. Ideally, it handles all tasks well, achieves accuracy close to that of single-task learning models, and reduces the consumption of computing resources. At present, there are several basic multi-task learning models, as shown in Figure 1.
Figure 1a shows the shared-bottom multi-task learning structure based on hard parameter sharing. At the bottom is the shared-bottom structure (expressed as a function $f$); multiple tasks share this layer for feature extraction. Given $K$ tasks and $K$ tower networks $h^k$, where $k = 1, 2, \ldots, K$, the model for task $k$ can be simplified as:
$$y^k = h^k(f(x))$$
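As a concrete illustration, the hard-sharing structure can be sketched in a few lines of PyTorch. This is a minimal sketch, not the paper's network: the input size of 3 (for V, I, T), the hidden size, and the linear layers are illustrative placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch of hard parameter sharing (Figure 1a). Sizes are
# illustrative placeholders: 3 input features (V, I, T) and 2 tasks.
class HardShare(nn.Module):
    def __init__(self, in_dim=3, hidden=32, n_tasks=2):
        super().__init__()
        # Shared bottom f: every task uses the same feature extractor.
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One tower h^k per task.
        self.towers = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x):
        z = self.shared(x)                  # f(x)
        return [h(z) for h in self.towers]  # y^k = h^k(f(x))

model = HardShare()
ys = model(torch.randn(4, 3))               # batch of 4 samples, one output per task
```

Because every task reads the same features $f(x)$, the structure cannot separate task-specific from shared information, which is the limitation the later models address.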
As shown in Figure 1b, the one-gate model divides the shared bottom layer into several independent experts and controls their input into the tower network through the gate structure. Its model can be described as follows:
$$y = \sum_{i=1}^{n} g(x)_i f_i(x)$$
Here, $\sum_{i=1}^{n} g(x)_i = 1$, $g(x)_i$ represents the weight of expert $f_i$, $i = 1, 2, \ldots, n$, and $n$ denotes the number of experts.
In Figure 1c, Google’s Multi-gate Mixture of Experts (MMOE) [36], proposed in 2018, extends the one-gate model by providing a separate gate structure for each task. The output of the model for task $k$ can be described as:
$$y^k = h^k(f^k(x))$$
$$f^k(x) = \sum_{i=1}^{n} g^k(x)_i f_i(x)$$
$g^k(x)$ is implemented by a simple linear transformation followed by a $\mathrm{softmax}$ layer:
$$g^k(x) = \mathrm{softmax}(W_{gk} x)$$
Here, $W_{gk} \in \mathbb{R}^{n \times d}$ is a trainable parameter of the model, $n$ is the number of experts, and $d$ is the feature dimension.
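The MMOE equations above can be sketched directly in PyTorch. This is an illustrative implementation, not MMOE's original code or the paper's model; all dimensions (input size, hidden size, expert and task counts) are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative MMOE (Figure 1c): one softmax gate per task mixes n shared
# experts. All dimensions here are placeholders, not the paper's settings.
class MMOE(nn.Module):
    def __init__(self, d=3, hidden=16, n_experts=4, n_tasks=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
            for _ in range(n_experts))
        # W_gk in R^{n x d}: one linear gate per task, softmax over experts.
        self.gates = nn.ModuleList(
            nn.Linear(d, n_experts, bias=False) for _ in range(n_tasks))
        self.towers = nn.ModuleList(
            nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x):
        E = torch.stack([e(x) for e in self.experts], dim=1)  # (B, n, hidden)
        outs = []
        for gate, tower in zip(self.gates, self.towers):
            g = F.softmax(gate(x), dim=-1).unsqueeze(-1)      # g^k(x): (B, n, 1)
            f_k = (g * E).sum(dim=1)                          # f^k(x), weighted mix
            outs.append(tower(f_k))                           # y^k = h^k(f^k(x))
        return outs

model = MMOE()
outs = model(torch.randn(5, 3))   # one (5, 1) output per task
```

The per-task gate is what lets each task weight the shared experts differently, the key difference from the one-gate model.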
At present, some scholars have applied multi-task learning to practical tasks. Huang et al. [37] developed a multi-task learning model for smart contract vulnerability detection based on hard parameter sharing. The shared layer’s primary purpose is to learn the semantic information of the input contract; the contract feature vector is learned and extracted using an attention-based neural network. A task-specific layer then implements the functionality of each task. By adding auxiliary tasks to learn more directional vulnerability traits, the model’s detection capabilities were improved, making it easier to locate and identify flaws. Xie et al. [38] proposed a multi-task attention-guided network (MTAGN) for small-sample multi-objective fault diagnosis. MTAGN is made up of M task-specific attention networks and a task-sharing network that learns a global feature pool. Through the attention module, every task-specific network is able to extract useful features from the task-sharing network. With multi-task learning, multiple tasks can be trained at the same time, and the useful knowledge learned from each task can be used by the others to improve performance. Wu et al. [39] created a system utilizing the LSTM model as the sharing layer and a multi-task learning approach. Based on SHapley Additive exPlanations (SHAP), the explainable framework combines global and local interpretation to improve the explainability of RIES load predictions. In addition, an input variable selection strategy based on the global SHAP value was proposed to select the input characteristic variables of the model. Pasquale Foggia et al. [40] proposed a multi-task convolutional neural network (CNN)-based solution for real-time user analysis. It was used to identify gender, age, race, and emotion from facial images. Processing speed is increased by a factor of 2.5 to 4, and memory consumption is reduced by a factor of 2 to 4, while maintaining accuracy.
Peng et al. [41] proposed a multi-task learning approach (MTGCN) for identifying cancer driver genes based on graph convolutional neural networks. The multi-task learning framework passes node and graph features from the input layer to the next layer to learn nodal embedding features. It has better performance than other methods in Receiver Operating Characteristic (ROC) curves and recall curves. Aung et al. [42] proposed multi-task learning to simultaneously solve the problems of assigning developers and assigning problem types. The performance of both tasks is improved by jointly interpreting problem reports. They conducted experiments on 11 open-source projects to demonstrate the effectiveness of the approach. Zhou et al. [43] proposed a multi-task learning framework for the segmentation and classification of tumors. Their framework consists of two sub-networks: an encoder–decoder for segmentation and a lightweight multi-scale network for classification. Keishi Ishihara et al. [44] proposed a novel multi-task attention-aware network that allows autonomous driving to handle more complex scenarios. This improves the success rate of their benchmark tests and also improves the ability to respond to traffic lights.
The above multi-task learning methods cannot be directly applied to battery state prediction. When current hard parameter sharing-based methods are used to estimate multiple tasks simultaneously, the network is unable to distinguish between task-specific features and features shared between different tasks, and other models fail to effectively extract the time-series features of the battery, reducing the accuracy of time-series prediction. They cannot be directly migrated to battery state estimation, and a multi-task learning model for estimating multiple states of the battery is currently lacking. This paper therefore employs a multi-task learning model (MTL) that combines a multi-layer extraction structure with separated expert layers.

3. MTL

In this section, the MTL model used is described in detail. Firstly, the input data and output of MTL are introduced. Then, the shared LSTM layer and separated expert layer in the multi-layer extraction structure of MTL are described. Next, the gate structure for combining task features and the Tower network for making regression predictions are described. Finally, it describes how the final output and the optimization objectives of the model are obtained.

3.1. Input and Output

Time-series data consisting of voltage, current, and temperature obtained during the discharge process are the samples used for the MTL model. Each sample is composed of three measured values and the two state values SOC and SOE. The data domain is $D = \{X_i, Y_i\}_{i=1}^{N}$, where $X_i = \{V_i^t, I_i^t, T_i^t\}$ represents the voltage, current, and temperature of sample $i$; these serve as the input features of the network model. $t$ is the time corresponding to sample $i$. Before each epoch, all time windows and their corresponding label values are shuffled in the same order and input into the network. The input is min-max normalized as follows:
$$x_i' = \frac{x_i - x_{min}}{x_{max} - x_{min}}, \quad x \in \{V, I, T\}$$
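The normalization is straightforward to sketch; here it is applied per channel, with arbitrary sample values for illustration.

```python
import numpy as np

# Per-channel min-max normalization of (V, I, T), as in the equation above;
# the sample values here are arbitrary.
def min_max(x):
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

x = np.array([[3.2, -1.0, 10.0],
              [3.6,  0.0, 20.0],
              [4.0,  2.0, 25.0]])   # columns: V, I, T
x_norm = min_max(x)                 # every column now spans [0, 1]
```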
$Y_i = \{SOC_i^t, SOE_i^t\}$ represents the output of the model:
$$SOC_i = \frac{Q_{now}}{Q_{max}} \times 100\%$$
$$SOE_i = \frac{E_{now}}{E_{max}} \times 100\%$$
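One common way to produce such label values from logged data is charge/energy counting over the discharge record. The following is a hedged sketch of that idea, not the paper's exact label-generation code: the capacity `q_max_ah`, energy `e_max_wh`, sign convention (discharge current positive), and 0.1 s step are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: derive SOC/SOE labels from logged current and voltage by
# charge/energy counting. q_max_ah (Ah), e_max_wh (Wh), and the 0.1 s time
# step are illustrative assumptions; discharge current is taken as positive.
def soc_soe_labels(current_a, voltage_v, q_max_ah, e_max_wh, dt_s=0.1):
    dq_ah = np.cumsum(current_a) * dt_s / 3600.0                # Ah removed
    de_wh = np.cumsum(current_a * voltage_v) * dt_s / 3600.0    # Wh removed
    soc = (q_max_ah - dq_ah) / q_max_ah * 100.0
    soe = (e_max_wh - de_wh) / e_max_wh * 100.0
    return soc, soe

# Sanity check: one hour at a constant 2.9 A drains a 2.9 Ah cell to 0% SOC.
n = 36000  # 1 h of 0.1 s samples
soc, soe = soc_soe_labels(np.full(n, 2.9), np.full(n, 3.6), 2.9, 2.9 * 3.6)
```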

3.2. Multi-Layer Extraction Structure

In MTL, the shared multi-layer extraction structure is designed to learn rich task characteristics from the input time-series data. The underlying LSTM layer first processes the input data to obtain time-series features. The separated expert layer is divided into task-specific experts and shared experts; different experts learn different task characteristics from the time sequence. Task-specific experts learn characteristics specific to each estimation task, while shared experts learn characteristics common to all tasks. This paper defines the feature output by the $i$-th expert as $f_e^i$:
$$f_e^i = Expert(f_l(x, \theta_l), \theta_e^i), \quad i = 1, 2, \ldots, m$$
$f_l(x, \theta_l)$ is the output of the LSTM layer, $\theta_l$ represents the parameters of the LSTM layer, and $\theta_e^i$ represents the parameters of the $i$-th expert in the separated expert layer. As shown in Figure 2, this paper constructs a lightweight multi-task learning network, which consists of a multi-layer extraction structure, a separated expert layer, a gate structure, and a Tower network.

3.3. Gate and Tower

The separated expert layer divides the task features $f_e$ into task-specific features and shared features. Through the gate structure, we output a set of weights that combine the specific and shared features corresponding to each task:
$$g_i = Gate_i(f_l(x, \theta_l), \theta_g^i), \quad i = 1, 2, \ldots, p$$
The output $g_i$ is the weight corresponding to the $i$-th expert; the input is the time-series feature $f_l(x, \theta_l)$ extracted by the shared LSTM layer. The weights satisfy:
$$\sum_{i=1}^{p} g_i = 1$$
$p$ is the sum of the number of task-specific experts for each task and the number of shared experts. The final task feature, namely the input of the Tower network, is:
$$T_x = \sum_{i=1}^{p} g_i f_e^i$$
The Tower network produces the corresponding state prediction from the input task feature $T_x^i$. Meanwhile, the ReLU activation function is used in the tower to speed up training:
$$f_{relu}(x) = \max(0, x)$$
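The gate-weighted fusion can be checked numerically. The sketch below uses random placeholder tensors (batch, expert, and feature sizes are arbitrary) to show that the softmax weights sum to 1 per sample and that the tower input is the weighted sum of expert features.

```python
import numpy as np

# Toy numeric check of the equations above: p expert features are fused by
# softmax gate weights that sum to 1, giving the tower input T_x.
B, p, d = 2, 3, 4                         # batch, experts, feature dim (arbitrary)
rng = np.random.default_rng(0)
expert_feats = rng.standard_normal((B, p, d))   # f_e^i for each expert
gate_logits = rng.standard_normal((B, p))

e = np.exp(gate_logits - gate_logits.max(axis=-1, keepdims=True))
g = e / e.sum(axis=-1, keepdims=True)     # softmax: sum_i g_i = 1 per sample
T_x = (g[..., None] * expert_feats).sum(axis=1)  # (B, d) weighted fusion
tower_in = np.maximum(0.0, T_x)           # f_relu applied inside the tower
```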

3.4. Model Prediction and Optimization

During model training, multiple task loss functions need to be optimized. For $n$ tasks, the input is $X$ and the labels are $Y_i$, $i = 1, 2, \ldots, n$. In this paper, the loss function of the model is defined as a linear combination of the individual task losses with task weights $\lambda_i$; here, $n = 2$:
$$L(X, Y_{1:n}) = \sum_{i=1}^{n} \lambda_i L_i(X, Y_i)$$
In this equation, $L_1(X, Y_1)$ represents the loss of the battery SOE task, and $L_2(X, Y_2)$ represents the loss of the battery SOC prediction task:
$$L_1(X, Y_1) = MSE(f_y(T_x^1, \theta_t), Y_1)$$
$$L_2(X, Y_2) = MSE(f_y(T_x^2, \theta_t), Y_2)$$
Here, $f_y(T_x^i, \theta_t)$ represents the predicted state value of the Tower network, and $\theta_t$ represents the parameters of the task-specific Tower network. MSE is used to calculate the loss value:
$$MSE = \frac{1}{N} \sum_{i=1}^{N} (\widehat{SOC_i} - SOC_i)^2$$
So, the final loss function is defined as:
$$L = \lambda_1 L_1 + \lambda_2 L_2$$
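The joint objective is simple to express in code. This is a sketch of the weighted-sum formulation above, with $\lambda_1 = \lambda_2 = 1$ matching the paper's final setting; the sample values are arbitrary.

```python
import numpy as np

# Sketch of the joint objective L = λ1·L1 + λ2·L2, with MSE per task as in
# the equations above; λ1 = λ2 = 1 matches the paper's final setting.
def mse(y_hat, y):
    return np.mean((y_hat - y) ** 2)

def joint_loss(soe_hat, soe, soc_hat, soc, lam1=1.0, lam2=1.0):
    return lam1 * mse(soe_hat, soe) + lam2 * mse(soc_hat, soc)

soc = np.array([0.90, 0.80])
soe = np.array([0.88, 0.76])
loss = joint_loss(soe, soe, soc + 0.1, soc)   # perfect SOE, SOC biased by 0.1
```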
The task weights $\lambda_i$ have a significant impact on the model’s results, since different tasks affect the model differently. In this research, we combined observations of the model’s training behavior with the weight trajectories produced by dynamic weight averaging (DWA). After several experiments, the loss weights of the two tasks were set to $\lambda_1 = 1$, $\lambda_2 = 1$.

4. Experiments

In this section, the datasets used are first described in terms of data recording methods, data partitioning, and data types. Next, experiments are carried out in four aspects: comparison with different multi-task learning models, comparison with single-task learning models, different loss combinations, and generalization performance, to demonstrate the effectiveness and superiority of the proposed method.

4.1. Dataset and Experimental Design

The Panasonic 18650PF [45] lithium-ion battery dataset was collected by Dr. Phillip Kollmeyer at the University of Wisconsin-Madison. It includes the test data of the automobile industry standard drives US06, LA92, UDDS, NN, and HWFET at three temperatures of 0 °C, 10 °C, and 25 °C. Each temperature contains five standard cycles and four mixed cycles. Each data point contains a measured current, voltage, temperature, power, counters, etc. They were saved with a 0.1-second time step.
The LG 18650HG2 [46] lithium-ion battery dataset was collected by Dr. Phillip Kollmeyer at the University of Wisconsin-Madison. In the dataset, a brand new LG HG2 battery was tested in an 8-cubic-foot hot chamber with a 75-amp, 5-volt Digatron Firing Circuits universal battery tester channel with voltage and current accuracy of 0.1% of full scale. The dataset includes eight hybrid cycles randomly composed of US06, LA92, UDDS, and HWFET driven by automobile industry standards at three temperatures: 0 °C, 10 °C, and 25 °C. Each dataset contains the measured battery voltage, battery current, and battery temperature. They were saved with a 0.1-second time step.
Dataset forms and battery parameters are shown in Table 1, Table 2 and Table 3, respectively.
NaN values in the Panasonic dataset are removed first. The SOC and SOE label values corresponding to each timestamp are calculated according to the rated capacity, rated voltage, and discharge process. The standard drive cycle data (UDDS, etc.) at the three temperatures are taken as the training set of the model; mixed cycles (1, 2) at the three temperatures (six random mixed drive cycles) form the test set; mixed cycles (3, 4) at the three temperatures (six random mixed drive cycles) form the validation set. The current, voltage, and temperature in the original data are normalized and segmented with a time window of size 128 as the input of the model. The predicted SOC and SOE values at the corresponding time are the model’s output. The hyperparameters of the MTL model are shown in Table 4.
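The windowing step can be sketched as follows. This is an illustrative implementation, assuming (as the text states) a window size of 128 and labels taken at the window's last step; the series length is arbitrary.

```python
import numpy as np

# Sketch of the windowing step: the normalized (V, I, T) series is cut into
# windows of 128 steps, and each window is labeled with the (SOC, SOE) value
# at its last step. The series length here is arbitrary.
def make_windows(features, labels, win=128):
    X, Y = [], []
    for t in range(win, len(features) + 1):
        X.append(features[t - win:t])   # (win, 3) input window
        Y.append(labels[t - 1])         # state at the window's end
    return np.stack(X), np.stack(Y)

feat = np.random.rand(1000, 3)          # V, I, T
lab = np.random.rand(1000, 2)           # SOC, SOE
X, Y = make_windows(feat, lab)          # X: (873, 128, 3), Y: (873, 2)
```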
In this paper, the learning rate is set to 0.001 and the batch_size is set to 64. At the same time, the paper comprehensively evaluates the error degree of the two estimation tasks after several experiments and sets dropout = 0.2.
After training, the test data are used to simulate the online test: the data are fed into the trained network, and RMSE and MAE are used to evaluate the models' prediction performance and to visualize the error results. The two evaluation indicators are defined as follows:
$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\widehat{SOC_i}-SOC_i\right)^{2}}$$

$$\mathrm{MAE}=\frac{1}{N}\sum_{i=1}^{N}\left|SOC_i-\widehat{SOC_i}\right|$$
RMSE is one of the most commonly used evaluation indices for regression models; it measures the magnitude of the difference between the model's predicted values and the actual values. MAE measures the mean absolute error. For both, the smaller the value, the better the model. The paper also records the time required for the model to predict, averaged over 10 repeated experiments on the test set. Our model is trained in a Linux environment using a 12-vCPU Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50 GHz and an RTX 3080 GPU, and the code is written in PyTorch using the PyCharm IDE. The experimental process is shown in Figure 3.
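In code, the two indicators could be computed as below (a simple NumPy sketch; the function names are our own):

```python
import numpy as np

def rmse(pred, true):
    """Root Mean Square Error between predicted and label state values."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def mae(pred, true):
    """Mean Absolute Error between predicted and label state values."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float(np.mean(np.abs(pred - true)))
```

With SOC/SOE expressed as percentages, both metrics come out in percentage points, matching the units used in the result tables.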

4.2. Experiment 1: Different Multi-Task Learning Models

Experiment 1 is designed according to the current research background and state of the art. We compare the proposed MTL model with a variety of multi-task learning models: a CNN with attention mechanisms (CNN_atten), the hard parameter sharing model (Hard_Share), Customized Gate Control (CGC), and the MMOE multi-task learning model proposed by Google.
The CNN_atten model compared in this paper is a simplified model based on SegNet [47]. The encoder is divided into three groups of convolution blocks, with max pooling used for downsampling between groups. The decoder, with a symmetric structure, is likewise divided into three groups of convolution blocks, with bilinear interpolation and the max pooling position indices used for upsampling between groups. Task-specific networks connect to different depths of the network, selecting features of varying importance for each task. Hard_share uses an LSTM with the same structure as in MTL as the sharing layer, with similar tower networks as task-specific networks that learn the features shared between tasks from the extracted temporal features. MMOE [36] uses an expert layer composed of several experts to extract features, and learns shared features among tasks through the weighted combination given by gate structures. Based on MMOE, CGC divides the experts into task-specific and shared experts, which preserves the ability to learn task-specific features.
The MTL model in this paper weighs the benefits and drawbacks of the above models: specific features and shared features are extracted through a multi-layer extraction structure, and multiple tasks are trained at the same time, sharing the learned knowledge with each other to improve performance. Table 5 shows the average error of the tests conducted; smaller values of these indicators indicate better performance on the current estimation task.
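The separated expert layer and gate fusion can be illustrated with a toy NumPy sketch. All dimensions, the random initialization, and the class name are our own assumptions for illustration only; the paper's actual model uses trained PyTorch layers on top of the LSTM features.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SeparatedExpertLayer:
    """Toy CGC-style layer: each task fuses its own experts plus the
    shared experts through a per-task softmax gate."""
    def __init__(self, d_in, d_expert, n_shared=2, n_specific=2, n_tasks=2):
        self.shared = [rng.normal(size=(d_in, d_expert)) for _ in range(n_shared)]
        self.specific = [[rng.normal(size=(d_in, d_expert)) for _ in range(n_specific)]
                         for _ in range(n_tasks)]
        # one gate per task over (n_specific + n_shared) experts
        self.gates = [rng.normal(size=(d_in, n_specific + n_shared))
                      for _ in range(n_tasks)]

    def forward(self, x):  # x: (batch, d_in) features from the LSTM
        outs = []
        for t in range(len(self.specific)):
            experts = [x @ w for w in self.specific[t] + self.shared]
            stacked = np.stack(experts, axis=1)       # (batch, n_experts, d_expert)
            w = softmax(x @ self.gates[t])            # (batch, n_experts)
            outs.append((w[:, :, None] * stacked).sum(axis=1))  # weighted fusion
        return outs  # one fused feature per task, fed to that task's tower
```

Because each task's gate only weights its own experts plus the shared ones, task-specific knowledge is kept apart from shared knowledge, which is the property the comparison below examines.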
The comparison results in Table 5 show that retaining the ability to learn task-specific features improves the estimation accuracy of the model in the battery multi-state estimation task. CNN_atten cannot meet the accuracy requirements for predicting a temporal state: for the SOE estimation task its MAE and RMSE values are 1.8312% and 2.2276%, and for the SOC estimation task 0.9925% and 1.3486%, respectively, much higher than those of the MTL model proposed in this paper. Because the model is still complex, it also takes considerably more time to simulate the online test process. The hard parameter-sharing model using only LSTM obtains the best estimation results apart from MTL, but a gap remains. For MMOE and its improved model CGC, the best MAE and RMSE values for SOE estimation are 1.1113% and 1.4281%, and for SOC estimation 0.7038% and 0.8992%, respectively; their accuracy is considerably lower than that of MTL, and only their test time is better. The MTL model is therefore more suitable for online prediction of multiple battery states when on-board computing resources are limited.
The MTL model in this paper has MAE and RMSE values of 0.5943% and 0.7709%, respectively, for the SOC estimation task, and 1.0128% and 1.2898%, respectively, for the SOE estimation task. We compare these results with those of other articles using the Panasonic dataset. In the literature [28], SOC and SOE were estimated at 25 °C under UDDS operating conditions. The results are shown in Table 6.
Table 5 presents the average results of our tests at multiple temperatures and operating conditions, while Table 6 shows their results at a single temperature and single operating condition. For the SOC estimation task, there is a large improvement in accuracy: both evaluation indicators improve by more than 0.4%. For the SOE estimation task, our model differs from theirs by around 0.2%. MTL thus shows better results in comprehensive tests across multiple temperatures and operating conditions.

4.3. Experiment 2: Compare with the Single Task Model

To analyze the performance difference with respect to single-task learning and confirm the efficiency of the MTL model, experiment 2 compares the MTL model with single-state estimation models, i.e., models that estimate only the battery SOC or only the SOE. The model structures and experimental conditions are otherwise kept consistent, and the same dataset is used to train the two single-state estimation models. Table 7 shows the average error of the results; smaller values of these indicators indicate better performance on the current estimation task.
The comparison results in Table 7 show that the MTL model achieves a better estimation effect: the errors of both tasks are smaller than those of the corresponding single-task models. For the SOE estimation task, the MAE and RMSE values decrease by 0.0961% and 0.0868%, respectively, compared with the single-task learning model; for the SOC estimation task, they decrease by 0.0513% and 0.0775%. The difference in simulated online prediction time from a single-task model is less than 0.5 ms, so its computing cost is far lower than that of directly stacking the two single-task models. Evidently, the features extracted for the two tasks share some commonality: MTL simultaneously extracts task-specific features and shares information between the tasks, and the experimental results show that this knowledge sharing under the multi-task learning model leads to more accurate estimation.

4.4. Experiment 3: Different Combinations of Losses

It is particularly important to balance the convergence rates of the loss functions of different tasks. Most existing methods calculate the loss weights dynamically; Dynamic Weight Average (DWA) [48] is widely used. DWA is defined as follows:
$$\lambda_m(t)=\frac{M\,\exp\!\left(\omega_m(t-1)/T\right)}{\sum_{i}\exp\!\left(\omega_i(t-1)/T\right)}$$

$$\omega_m(t-1)=\frac{L_m(t-1)}{L_m(t-2)}$$
Here, ω m represents the rate of loss descent of task m in each training iteration, L m represents the average loss over a training iteration, t is the iteration index, M is the number of tasks, and T is a temperature that controls the softness of the task weighting. This paper tracks the changes in the loss weights of the two tasks during model training under DWA dynamic weight calculation, as shown in Figure 4.
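The two formulas above can be transcribed directly (our own sketch; the temperature T, reported as 2 in the original DWA paper [48], controls how evenly the weights are spread):

```python
import numpy as np

def dwa_weights(loss_hist, T=2.0):
    """Dynamic Weight Average over M tasks.

    loss_hist[m] = [L_m(t-2), L_m(t-1)], the task's average losses at the
    two previous iterations; returns the M loss weights for iteration t.
    """
    M = len(loss_hist)
    omega = np.array([h[-1] / h[-2] for h in loss_hist])  # loss descent rate
    e = np.exp(omega / T)
    return M * e / e.sum()
```

Tasks whose loss falls more slowly (larger ω) receive larger weights, and the weights always sum to M, so a perfectly balanced pair of tasks sits at weight 1 each.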
In Figure 4, the horizontal axis is the number of iterations, the vertical axis is the weight value, and alpha1 and alpha2 represent the weights of the two losses. The broken lines show the weight changes of the two tasks over the 30 training iterations. Only at one point is there a difference of about 0.06 from the mean value of 1; most of the other weight values lie within the interval [1 − 0.01, 1 + 0.01], and the weights of the two tasks always rise and fall alternately around 1. It can be seen that the battery state-of-charge and state-of-energy estimation tasks are not only correlated to a certain extent, but also have almost the same importance and influence on the model; there is no distinguishable difference between the two tasks in learning capacity or rate. Therefore, based on this analysis of the training process and the weight trend, this paper directly sets the loss weights of the two tasks to 1. Table 8 shows the comparison results for different loss combinations; smaller values of these indicators represent better performance on the current estimation task.
In Table 8, MTL denotes the model trained with both task loss weights set to 1, and MTL_dwa denotes the model trained with weights computed dynamically by DWA. The weights obtained by dynamic calculation lag somewhat, and may not match the model's current learning-speed requirements. With both weights set to 1, better estimation results are obtained in the simulated online test, although the improvement is not large: the choice between DWA and fixed weights does not significantly affect the results studied in this paper, which again demonstrates that the two tasks are of equal importance and influence in MTL. Figure 5 and Figure 6 display the error between predicted and label values.
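Under the fixed setting adopted here, the joint training objective reduces to a simple sum (a sketch; in the MTL_dwa variant the two weights would instead be recomputed by DWA at each training iteration):

```python
def combined_loss(loss_soc, loss_soe, w_soc=1.0, w_soe=1.0):
    """Weighted sum of the two task losses; MTL fixes both weights to 1,
    while MTL_dwa updates them dynamically each training iteration."""
    return w_soc * loss_soc + w_soe * loss_soe
```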
The red curves in the two figures are the SOC and SOE label values at the corresponding times; the green and blue curves are the predicted SOC and SOE values. The horizontal axis is the number of samples; the vertical axes of Figure 5 and Figure 6 are the SOC and SOE percentages at the current time, respectively. As can be seen from the figures, the state prediction curves and label curves almost completely coincide, with only a slight error at the final moments of some discharge cycles.

4.5. Experiment 4: Model Generalization Performance Test

To verify the validity of the MTL model, a generalization performance test is performed on the LG battery dataset. The MTL model is compared both with the multi-task learning models widely used in current research and with the corresponding single-task learning models. Table 9 shows the specific comparison results; smaller values of these indicators represent better performance on the current estimation.
The relevant models in the table have been described in the previous section. As can be seen from the table, MTL also achieves a good estimation effect on the LG battery dataset. For the SOE estimation task, the MAE and RMSE values are 0.5855% and 0.8671%, respectively, and for the SOC estimation task, 0.5267% and 0.7806%, respectively. These results are better than those of the single-task learning models and the other multi-task learning models, which indicates that the MTL model has broad applicability. The error comparison between the predicted values and the label values of the MTL model during simulated online prediction is shown in Figure 7 and Figure 8.
The red curves in the two figures are the SOC and SOE label values at the corresponding times; the green and blue curves are the predicted SOC and SOE values. The horizontal axis is the number of samples; the vertical axes of Figure 7 and Figure 8 are the SOC and SOE percentages at the current time, respectively. As can be seen from the figures, the state prediction curves overlap almost exactly with the label value curves when predictions are made on the LG dataset, with only a small error near the 40% point of some discharge cycles. This also shows that MTL transfers well to the LG dataset.
We also compare the results with those of other articles using the LG dataset. In the literature [27], tests were performed under UDDS operating conditions at three temperatures: 0 °C, 10 °C, and 25 °C. The relevant results are shown in Table 10.
In the literature [27], SOC and SOE were estimated for UDDS conditions at three temperatures, with the best results at 25 °C: for the SOC estimation task, MAE and RMSE values of 0.63% and 0.82%, and for the SOE estimation task, 0.64% and 0.85%, respectively. The MTL model in this paper has MAE and RMSE values of 0.5267% and 0.7806% for the SOC estimation task and 0.5855% and 0.8671% for the SOE estimation task. The average test results of our model across multiple temperatures and drive conditions thus outperform even their best single-temperature results.

5. Conclusions

In this paper, we propose a model for online multi-state estimation based on multi-task learning. The multi-layer extraction structure of MTL extracts time-series features from the input data, divides the separated expert layer into specific and shared experts to extract task-specific features and features shared between tasks, and obtains the final task features by weighting the two through a gate structure. We simulate offline training and online prediction scenarios to demonstrate the effectiveness of the proposed MTL, and compare it with several other multi-task learning models, single-task models, and different weight combination approaches. The MAE values for the two tasks are 1.01% and 0.59% and the RMSE values are 1.29% and 0.77%, respectively, both outperforming the other multi-task learning models. Through the sharing of learned knowledge between tasks, higher accuracy than the single-state estimation models is achieved. The comparison of weight combination approaches reveals that both tasks are of equal importance and influence on the model in MTL, so the weight values for both tasks are set by analysis of the training process. When deployed in practice, the trained model reduces the consumption of computational resources and obtains higher estimation accuracy in almost the same processing time as a single-state estimation model. The generalizability of MTL is also demonstrated by generalization performance tests on another dataset.

6. Discussion

This research effectively addresses the problem of balancing on-board computing constraints with the joint estimation of multiple battery states, expanding the idea of applying deep learning to achieve high-accuracy battery estimation. With the continued development of the electric vehicle industry and the discovery of potential applications of biomass in batteries, lithium-ion batteries will be used even more widely in the future. Deep learning will also be further applied in electric vehicle control systems, as it can achieve fast and accurate battery state estimation when trained on large amounts of data. In a real BMS, battery operating conditions are more complex, and the states to be estimated and predicted are not limited to the state of charge and state of energy. MTL is still a deep learning model that requires a large amount of training data; in practical scenarios with insufficient data or undersampling, where a full discharge cycle is not available, the model may not be trained well. To address these limitations, our future work hopes to introduce transfer learning and domain adaptation techniques to tackle the small-sample problem in real-world scenarios. By investigating techniques such as fine-tuning and adversarial domain adaptation, we will explore multi-task learning methods for estimating states, including battery SOC, SOE, and faults, across different materials and brands under multiple operating conditions.

Author Contributions

Investigation, X.B. and Y.L.; writing—original draft preparation, X.B.; writing—review and editing, Y.L., B.L., H.L. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Inner Mongolia Autonomous Region, China grant number 2022MS06008.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. The Panasonic dataset can be found here: https://data.mendeley.com/datasets/wykht8y7tg (accessed on 21 June 2018). The LG dataset can be found here: https://data.mendeley.com/datasets/b5mj79w5w9 (accessed on 30 January 2020).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bouabidi, Z.; Almomani, F.; Al-musleh, E.I.; Katebah, M.A.; Hussein, M.M.; Shazed, A.R.; Karimi, I.A.; Alfadala, H. Study on boil-off gas (BOG) minimization and recovery strategies from actual baseload LNG export terminal: Towards sustainable LNG chains. Energies 2021, 14, 3478. [Google Scholar] [CrossRef]
  2. Lipu, M.S.H.; Hannan, M.A.; Hussain, A.; Ayob, A.; Saad, M.H.; Karim, T.F.; How, D.N. Data-driven state of charge estimation of lithium-ion batteries: Algorithms, implementation factors, limitations and future trends. J. Clean. Prod. 2020, 277, 124110. [Google Scholar] [CrossRef]
  3. Santoso, B.; Ammarullah, M.I.; Haryati, S.; Sofijan, A.; Bustan, M.D. Power and Energy Optimization of Carbon Based Lithium-Ion Battery from Water Spinach (Ipomoea aquatica). J. Ecol. Eng. 2023, 24, 213–223. [Google Scholar] [CrossRef]
  4. Hannan, M.A.; Lipu, M.S.H.; Hussain, A.; Mohamed, A. A review of lithium-ion battery state of charge estimation and management system in electric vehicle applications: Challenges and recommendations. Renew. Sustain. Energy Rev. 2017, 78, 834–854. [Google Scholar] [CrossRef]
  5. Wang, Y.; Tian, J.; Sun, Z.; Wang, L.; Xu, R.; Li, M.; Chen, Z. A comprehensive review of battery modeling and state estimation approaches for advanced battery management systems. Renew. Sustain. Energy Rev. 2020, 131, 110015. [Google Scholar] [CrossRef]
  6. Mamadou, K.; Lemaire, E.; Delaille, A.; Riu, D.; Hing, S.; Bultel, Y. A Definition of a State-of-Energy Indicator (SOE) for Electrochemical Storage Devices: Application for Energetic Availability Forecasting. J. Electrochem. Soc. 2012, 159, A1298–A1307. [Google Scholar] [CrossRef]
  7. Jung, H.; Silva, R.; Han, M. Scaling trends of electric vehicle performance: Driving range, fuel economy, peak power output, and temperature effect. World Electr. Veh. J. 2018, 9, 46. [Google Scholar] [CrossRef] [Green Version]
  8. Feng, F.; Lu, R.; Zhu, C. A combined state of charge estimation method for lithium-ion batteries used in a wide ambient temperature range. Energies 2014, 7, 3004–3032. [Google Scholar] [CrossRef] [Green Version]
  9. Zhang, R.; Xia, B.; Li, B.; Cao, L.; Lai, Y.; Zheng, W.; Wang, H.; Wang, W.; Wang, M. A study on the open circuit voltage and state of charge characterization of high capacity lithium-ion battery under different temperature. Energies 2018, 11, 2408. [Google Scholar] [CrossRef] [Green Version]
  10. Chen, P.; Lu, C.; Mao, Z.; Li, B.; Wang, C.; Tian, W.; Li, M.; Xu, Y. Evaluation of Various Offline and Online ECM Parameter Identification Methods of Lithium-Ion Batteries in Underwater Vehicles. ACS Omega 2022, 7, 30504–30518. [Google Scholar] [CrossRef]
  11. Xu, Z.; Wang, J.; Lund, P.D.; Zhang, Y. Co-estimating the state of charge and health of lithium batteries through combining a minimalist electrochemical model and an equivalent circuit model. Energy 2022, 240, 122815. [Google Scholar] [CrossRef]
  12. Zhang, Z.W.; Guo, T.Z.; Gao, M.Y.; He, Z.W.; Dong, Z.K. Review of estimation methods of state of charge of lithium-ion batteries for electric vehicles. J. Electron. Inf. 2021, 43, 1803–1815. [Google Scholar]
  13. Wei, X.; Jun, C.; Yu, G.; Jiachen, M.; Jiaqing, C. Unscented Particle Filter Based State of Energy Estimation for LiFePO4 Batteries Using an Online Updated Model. Int. J. Automot. Technol. 2022, 23, 503–510. [Google Scholar] [CrossRef]
  14. Karimi, D.; Behi, H.; Van Mierlo, J.; Berecibar, M. Equivalent Circuit Model for High-Power Lithium-Ion Batteries under High Current Rates, Wide Temperature Range, and Various State of Charges. Batteries 2023, 9, 101. [Google Scholar] [CrossRef]
  15. Yang, F.; Song, X.; Xu, F.; Tsui, K.L. State-of-charge estimation of lithium-ion batteries via long short-term memory network. IEEE Access 2019, 7, 53792–53799. [Google Scholar] [CrossRef]
  16. Liu, S.; Cui, N.; Zhang, C. An adaptive square root unscented Kalman filter approach for state of charge estimation of lithium-ion batteries. Energies 2017, 10, 1345. [Google Scholar] [CrossRef] [Green Version]
  17. Bian, C.; He, H.; Yang, S.; Huang, T. State-of-charge sequence estimation of lithium-ion battery based on bidirectional long short-term memory encoder-decoder architecture. J. Power Sources 2020, 449, 227558. [Google Scholar] [CrossRef]
  18. Wang, Y.C.; Shao, N.C.; Chen, G.W.; Hsu, W.S.; Wu, S.C. State-of-charge estimation for lithium-ion batteries using residual convolutional neural networks. Sensors 2022, 22, 6303. [Google Scholar] [CrossRef] [PubMed]
  19. Yang, K.; Tang, Y.; Zhang, S.; Zhang, Z. A deep learning approach to state of charge estimation of lithium-ion batteries based on dual-stage attention mechanism. Energy 2022, 244, 123233. [Google Scholar] [CrossRef]
  20. Liu, X.; Wu, J.; Zhang, C.; Chen, Z. A method for state of energy estimation of lithium-ion batteries at dynamic currents and temperatures. J. Power Sources 2014, 270, 151–157. [Google Scholar] [CrossRef]
  21. Wang, Y.; Yang, D.; Zhang, X.; Chen, Z. Probability based remaining capacity estimation using data-driven and neural network model. J. Power Sources 2016, 315, 199–208. [Google Scholar] [CrossRef]
  22. Liu, W.L.; Wang, L.F.; Wang, L.Y. SOE estimation for lithium-ion batteries based on condition recognition and prediction for electric vehicles. Trans. Electron. Soc. 2018, 33, 17–25. [Google Scholar]
  23. Ren, D.; Lu, L.; Shen, P.; Feng, X.; Han, X.; Ouyang, M. Battery remaining discharge energy estimation based on prediction of future operating conditions. J. Energy Storage 2019, 25, 100836. [Google Scholar] [CrossRef]
  24. He, H.; Qin, H.; Sun, X.; Shui, Y. Comparison study on the battery SoC estimation with EKF and UKF algorithms. Energies 2013, 6, 5088–5100. [Google Scholar] [CrossRef] [Green Version]
  25. Yang, X.; Wang, S.; Xu, W.; Qiao, J.; Yu, C.; Takyi-Aninakwa, P.; Jin, S. A novel fuzzy adaptive cubature Kalman filtering method for the state of charge and state of energy co-estimation of lithium-ion batteries. Electrochim. Acta 2022, 415, 140241. [Google Scholar] [CrossRef]
  26. Shrivastava, P.; Soon, T.K.; Idris, M.Y.I.B.; Mekhilef, S.; Adnan, S.B. Combined state of charge and state of energy estimation of lithium-ion battery using dual forgetting factor-based adaptive extended Kalman filter for electric vehicle applications. IEEE Trans. Veh. Technol. 2021, 70, 1200–1215. [Google Scholar] [CrossRef]
  27. Ma, L.; Hu, C.; Cheng, F. State of Charge and State of Energy Estimation for Lithium-Ion Batteries Based on a Long Short-Term Memory Neural Network. J. Energy Storage 2021, 37, 102440. [Google Scholar] [CrossRef]
  28. Zhang, S.; Zhang, X. A novel non-experiment-based reconstruction method for the relationship between open-circuit-voltage and state-of-charge/state-of-energy of lithium-ion battery. Electrochim. Acta 2022, 403, 139637. [Google Scholar] [CrossRef]
  29. Shrivastava, P.; Soon, T.K.; Idris, M.Y.I.B.; Mekhlief, S. Combined SOC and SOE Estimation of Lithium-ion battery for Electric Vehicle Applications. In Proceedings of the 2020 IEEE Energy Conversion Congress and Exposition (ECCE), Detroit, MI, USA, 11–15 October 2020; pp. 5614–5619. [Google Scholar]
  30. Xia, L.; Wang, S.; Yu, C.; Fan, Y.; Li, B.; Xie, Y. Joint estimation of the state-of-energy and state-of-charge of lithium-ion batteries under a wide temperature range based on the fusion modeling and online parameter prediction. J. Energy Storage 2022, 52, 105010. [Google Scholar] [CrossRef]
  31. Shrivastava, P.; Soon, T.K.; Idris, M.Y.I.B.; Mekhilef, S.; Adnan, S.B. Comprehensive co-estimation of lithium-ion battery state of charge, state of energy, state of power, maximum available capacity, and maximum available energy. J. Energy Storage 2022, 56, 106049. [Google Scholar] [CrossRef]
  32. Chen, L.; Wang, S.; Jiang, H.; Fernandez, C. A novel combined estimation method for state of energy and predicted maximum available energy based on fractional-order modeling. J. Energy Storage 2023, 62, 106930. [Google Scholar] [CrossRef]
  33. Li, X.; Long, T.; Tian, J.; Tian, Y. Multi-state joint estimation for a lithium-ion hybrid capacitor over a wide temperature range. J. Power Sources 2020, 479, 228677. [Google Scholar] [CrossRef]
  34. Zhang, X.; Wang, Y.; Wu, J.; Chen, Z. A novel method for lithium-ion battery state of energy and state of power estimation based on multi-time-scale filter. Appl. Energy 2018, 216, 442–451. [Google Scholar] [CrossRef]
  35. Liu, F.; Yu, D.; Su, W.; Bu, F. Multi-state joint estimation of series battery pack based on multi-model fusion. Electrochim. Acta 2023, 443, 141964. [Google Scholar] [CrossRef]
  36. Ma, J.; Zhao, Z.; Yi, X.; Chen, J.; Hong, L.; Chi, E.H. Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 1930–1939. [Google Scholar]
  37. Huang, J.; Zhou, K.; Xiong, A.; Li, D. Smart contract vulnerability detection model based on multi-task learning. Sensors 2022, 22, 1829. [Google Scholar] [CrossRef]
  38. Xie, Z.; Chen, J.; Feng, Y.; Zhang, K.; Zhou, Z. End to end multi-task learning with attention for multi-objective fault diagnosis under small sample. J. Manuf. Syst. 2022, 62, 301–316. [Google Scholar] [CrossRef]
  39. Wu, K.; Gu, J.; Meng, L.; Wen, H.; Ma, J. An explainable framework for load forecasting of a regional integrated energy system based on coupled features and multi-task learning. Prot. Control. Mod. Power Syst. 2022, 7, 24. [Google Scholar] [CrossRef]
  40. Foggia, P.; Greco, A.; Saggese, A.; Vento, M. Multi-task learning on the edge for effective gender, age, ethnicity and emotion recognition. Eng. Appl. Artif. Intell. 2023, 118, 105651. [Google Scholar] [CrossRef]
  41. Peng, W.; Tang, Q.; Dai, W.; Chen, T. Improving cancer driver gene identification using multi-task learning on graph convolutional network. Briefings Bioinform. 2022, 23, bbab432. [Google Scholar] [CrossRef]
  42. Aung, T.W.W.; Wan, Y.; Huo, H.; Siu, Y. Multi-triage: A multi-task learning framework for bug triage. J. Syst. Softw. 2022, 184, 111133. [Google Scholar] [CrossRef]
  43. Zhou, Y.; Chen, H.; Li, Y.; Liu, Q.; Xu, X.; Wang, S.; Yap, P.T.; Shen, D. Multi-task learning for segmentation and classification of tumors in 3D automated breast ultrasound images. Med. Image Anal. 2021, 70, 101918. [Google Scholar] [CrossRef] [PubMed]
  44. Ishihara, K.; Kanervisto, A.; Miura, J.; Hautamaki, V. Multi-task learning with attention for end-to-end autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2902–2911. [Google Scholar]
  45. Kollmeyer, P. Panasonic 18650PF Li-Ion Battery Data. Mendeley Data 2018. Available online: https://data.mendeley.com/datasets/wykht8y7tg (accessed on 21 June 2018).
  46. Kollmeyer, P. LG 18650HG2 Li-ion Battery Data. Mendeley Data 2020. Available online: https://data.mendeley.com/datasets/b5mj79w5w9 (accessed on 30 January 2020).
  47. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  48. Liu, S.; Johns, E.; Davison, A.J. End-to-End multi-task learning with attention. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1871–1880. [Google Scholar]
Figure 1. (a) Shared-bottom model. (b) One-gate MOE model. (c) Multi-gate MOE model [36].
Figure 2. Multi-task learning network.
Figure 3. Experimental procedure.
Figure 4. The change of loss weight of two tasks dynamically calculated by DWA during training.
Figure 5. Error comparison of SOC predicted value and label value.
Figure 6. Error comparison of SOE predicted value and label value.
Figure 7. Error comparison of SOC predicted value and label value.
Figure 8. Error comparison of SOE predicted value and label value.
Table 1. Dataset.
| Battery | Data Type | Temperature |
|---|---|---|
| Panasonic 18650PF | cycle(1–4), UDDS, LA92, US06, HWFET, NN | 0 °C, 10 °C, 25 °C |
| LG 18650HG2 | Mixed(1–8) | 0 °C, 10 °C, 25 °C |
Table 2. Cell Parameters (Panasonic).
| Parameter | Value |
|---|---|
| Nominal Open Circuit Voltage | 3.6 V |
| Capacity | Min. 2.75 Ah / Typ. 2.9 Ah |
| Min/Max Voltage | 2.5 V / 4.2 V |
| Mass/Energy Storage | 48 g / 9.9 Wh |
| Minimum Charging Temperature | 10 °C |
| Cycles to 80% Capacity | 500 (100% DOD, 25 °C) |
Table 3. Cell Parameters (LG).
| Parameter | Value |
|---|---|
| Chemistry | Li[NiMnCo]O2 (H-NMC)/Graphite + SiO |
| Nominal Voltage | 3.6 V |
| Nominal Capacity | 3.0 Ah |
| Energy Density | 240 Wh/kg |
Table 4. MTL model hyperparameter.
| Structure | Parameter | Value |
|---|---|---|
| LSTM | Layers | 2 |
| LSTM | Hidden size | 64 |
| Experts | Layers | 1 |
| Experts | Hidden size | 50 |
| Gates | FC layers | 1 |
| Gates | Hidden size | 50 |
| Tower | FC layers | 1 |
| Tower | Hidden size | 32 |
Table 5. Compare the results with other multi-task learning models.
| Model | SOE MAE (%) | SOE RMSE (%) | SOC MAE (%) | SOC RMSE (%) | Time (ms) |
|---|---|---|---|---|---|
| MTL | 1.0128 | 1.2898 | 0.5943 | 0.7709 | 4.93 |
| CNN_atten | 1.8312 | 2.2276 | 0.9925 | 1.3486 | 8.43 |
| Hard_share | 1.0827 | 1.3751 | 0.6834 | 0.8836 | 4.34 |
| MMOE | 1.1619 | 1.4760 | 0.7461 | 0.9942 | 1.51 |
| CGC | 1.1113 | 1.4281 | 0.7038 | 0.8992 | 1.49 |
Table 6. Comparison of results with the literature [28] on the Panasonic dataset.
| Model | SOE MAE (%) | SOE RMSE (%) | SOC MAE (%) | SOC RMSE (%) |
|---|---|---|---|---|
| MTL | 1.0128 | 1.2898 | 0.5943 | 0.7709 |
| Literature [28] | 0.83 | 1.01 | 1.06 | 1.35 |
Table 7. Compare the results with the single task model.
| Model | SOE MAE (%) | SOE RMSE (%) | SOC MAE (%) | SOC RMSE (%) | Time (ms) |
|---|---|---|---|---|---|
| MTL | 1.0128 | 1.2898 | 0.5943 | 0.7709 | 4.93 |
| Single SOC | — | — | 0.6456 | 0.8484 | 4.64 |
| Single SOE | 1.1089 | 1.3766 | — | — | 4.62 |
Table 8. Comparison results of different loss combinations.
| Model | SOE MAE (%) | SOE RMSE (%) | SOC MAE (%) | SOC RMSE (%) |
|---|---|---|---|---|
| MTL | 1.0128 | 1.2898 | 0.5943 | 0.7709 |
| MTL_dwa | 1.0171 | 1.6771 | 0.6134 | 0.7957 |
Table 9. Generalize performance test results.
| Model | SOE MAE (%) | SOE RMSE (%) | SOC MAE (%) | SOC RMSE (%) |
|---|---|---|---|---|
| MTL | 0.5855 | 0.8671 | 0.5267 | 0.7806 |
| Hard_share | 0.6149 | 0.9226 | 0.6839 | 0.9619 |
| MMOE | 0.8205 | 1.1304 | 0.8303 | 1.0715 |
| CGC | 0.8021 | 1.0708 | 0.7749 | 1.0584 |
| Single SOC | — | — | 0.5769 | 0.8479 |
| Single SOE | 0.6479 | 0.9928 | — | — |
Table 10. Comparison of results with literature [27] on the LG dataset.
| Task | Temperature (°C) | Literature [27] MAE/RMSE (%) | MTL MAE/RMSE (%) |
|---|---|---|---|
| SOC | 0 | 2.00/2.40 | 0.5267/0.7806 (all temperatures) |
| SOC | 10 | 1.71/2.06 | |
| SOC | 25 | 0.63/0.82 | |
| SOE | 0 | 1.85/2.21 | 0.5855/0.8671 (all temperatures) |
| SOE | 10 | 1.56/1.94 | |
| SOE | 25 | 0.64/0.85 | |

Share and Cite

MDPI and ACS Style

Bao, X.; Liu, Y.; Liu, B.; Liu, H.; Wang, Y. Multi-State Online Estimation of Lithium-Ion Batteries Based on Multi-Task Learning. Energies 2023, 16, 3002. https://doi.org/10.3390/en16073002
