


Configuration and Sizing of Machine Learning Algorithms for Battery SOC, SOH, and Temperature Estimation
Diagnostics & battery management

While there is increasing interest in neural network machine learning algorithms for battery state estimation and modeling [1], determining what type of network to use and how to size and configure it remains difficult. Battery applications typically utilize networks that fall into a few broad categories: (1) non-recurrent networks with no memory of past time steps, (2) recurrent networks with inherent memory of past time steps, and (3) networks that take a large matrix of data as input and produce a single output, such as convolutional networks for state of health (SOH) estimation. For these network types there are many considerations around input features, data filtering, data augmentation, amount of training data, number of layers, number of learnable parameters, computational complexity, training time, and other aspects. To give insight into how best to design a neural network for a battery application, the results of three case studies on battery state of charge (SOC), SOH, and temperature estimation are presented.

For the battery SOC estimation case study, a non-recurrent feedforward neural network (FNN) and a recurrent long short-term memory (LSTM) neural network are trained and tested with automotive drive cycle data for two different cells, at temperatures ranging from -10 °C to 25 °C [2]. Adding layers to the FNN is shown to have a minimal effect on accuracy (Figure 1(a) in the supplemental document), but it results in a significant reduction in execution time on a microprocessor, with a 30% reduction when going from 1 to 2 layers (Figure 1(f)). Increasing the number of learnable parameters beyond 600 is found to yield little accuracy improvement for either network type (Figure 1(b)). Additionally, it is observed that the LSTM and FNN perform similarly at low temperature (Figure 1(c)); that the LSTM reaches a correct SOC estimate more quickly than the FNN when starting from an uninitialized state (Figure 1(d)); that the FNN produces a less noisy SOC estimate (Figure 1(e)); that, for an equal number of learnable parameters, the FNN has one third the microprocessor execution time of the LSTM (Figure 1(f)); that training must be repeated multiple times with random initial conditions to reach the best accuracy (Figure 1(g)); and that the FNN requires around 60 to 70% less training time per epoch than the LSTM (Figure 1(h)). The best configuration of each network type achieves an average error across temperatures and drive cycles of around 0.8%, and many of the conclusions from this study can be applied generally to shorten the time needed to develop and train new SOC estimation algorithms.
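As a rough illustration of the roughly 600-learnable-parameter budget discussed above, the parameter counts of the two network types can be computed in closed form. The layer sizes below are hypothetical examples chosen to land near that budget; they are not the exact configurations used in [2]:

```python
def fnn_params(layer_sizes):
    """Learnable parameters of a feedforward network.

    layer_sizes: [n_inputs, hidden_1, ..., n_outputs]; each dense layer
    contributes (fan_in + 1) * fan_out parameters (weights plus biases).
    """
    return sum((a + 1) * b for a, b in zip(layer_sizes, layer_sizes[1:]))


def lstm_params(n_inputs, n_hidden, n_outputs=1):
    """Learnable parameters of a single-layer LSTM plus a dense output layer.

    Each of the 4 LSTM gates has (n_inputs + n_hidden + 1) * n_hidden
    parameters (input weights, recurrent weights, and bias).
    """
    lstm = 4 * n_hidden * (n_inputs + n_hidden + 1)
    dense = (n_hidden + 1) * n_outputs
    return lstm + dense


# Hypothetical configurations near the ~600-parameter budget, with
# 3 inputs (e.g. voltage, current, temperature):
print(fnn_params([3, 50, 8, 1]))  # 617
print(lstm_params(3, 10))         # 571
```

Counting parameters this way before training makes it easy to compare candidate FNN and LSTM configurations at an equal parameter budget, as done in the case study.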

To give insight into another application type, a convolutional neural network, as shown in Figure 2(a), was utilized to estimate battery state of health from the voltage, current, and temperature measured during charging [3]. Error decreases significantly as the number of layers is increased from 1 to 6 (Figure 2(b)). Increasing the number of cells used for training the network is beneficial up to 13 cells, beyond which accuracy does not improve significantly (Figure 2(b)). SOH estimation error can be further reduced by augmenting the data through the addition of white noise, which helps prevent overfitting and increases robustness to sensor error and noise (Figure 2(c)).
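The white-noise augmentation step can be sketched as follows. The array shapes, number of noisy replicas, and noise level here are illustrative assumptions, not the exact values used in [3]:

```python
import numpy as np


def augment_with_noise(X, n_copies=4, sigma=0.01, seed=0):
    """Augment charging snapshots with additive white Gaussian noise.

    X: array of shape (n_samples, n_timesteps, n_channels), e.g. voltage,
    current, and temperature sampled during charging. Returns the original
    samples followed by n_copies noisy replicas of each; sigma is the noise
    standard deviation in the (normalized) units of X.
    """
    rng = np.random.default_rng(seed)
    noisy = [X + rng.normal(0.0, sigma, size=X.shape) for _ in range(n_copies)]
    return np.concatenate([X] + noisy, axis=0)


# Hypothetical example: 5 charging snapshots, 20 time steps, 3 channels.
X = np.zeros((5, 20, 3))
X_aug = augment_with_noise(X)
print(X_aug.shape)  # (25, 20, 3)
```

Training the convolutional network on the augmented set exposes it to perturbed versions of each charge curve, which is what discourages overfitting to any one measurement.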

In the final case study, the temperature of a lithium-ion cell is estimated based on the measured voltage, current, ambient temperature, and state of charge [4]. The study focuses on feature selection, the process of selecting the neural network inputs. Inputs including unfiltered and filtered voltage and current, SOC, and ambient temperature are compared via eight different cases, as shown in Figure 3(a). All inputs are shown to be beneficial: the FNN with all input types has the least error of the FNNs, while the LSTM including ambient temperature has the least error overall (Figure 3(b)). The LSTM is shown in Figure 3(d) to have a considerably less noisy temperature estimate, the opposite of what was observed for SOC estimation, showing that some network characteristics are application specific. A study was also performed on the cutoff frequency of the low-pass filters applied to the measured voltage and current for the FNN, which have the effect of giving the FNN memory of the past. The cutoff frequency was swept from 0.01 mHz to 100 mHz as shown in Figure 3(e), and a frequency of 1 mHz was found to result in the lowest temperature estimation error, indicating this frequency best captures the electrochemical and thermal dynamics of the battery cell.
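The input filtering described above can be sketched as a simple first-order low-pass filter. The filter order, sample period, and implementation below are assumptions for illustration; [4] does not necessarily use this exact realization:

```python
import math


def low_pass(x, f_cut_hz, dt_s=1.0):
    """First-order IIR low-pass filter of a sampled signal.

    Filtering the measured voltage and current gives a memoryless FNN an
    implicit memory of past samples. f_cut_hz is the cutoff frequency
    (e.g. 1e-3 Hz, the best value found in the sweep of [4]); dt_s is the
    sample period. alpha = dt / (tau + dt), with tau = 1 / (2*pi*f_cut).
    """
    tau = 1.0 / (2.0 * math.pi * f_cut_hz)
    alpha = dt_s / (tau + dt_s)
    y, out = x[0], []
    for v in x:
        y = y + alpha * (v - y)  # exponential smoothing toward the input
        out.append(y)
    return out


# Step response at a 1 mHz cutoff with 1 s sampling: the output rises
# slowly toward the input, retaining information about past samples.
step = [0.0] + [1.0] * 2000
filtered = low_pass(step, 1e-3, dt_s=1.0)
```

Sweeping `f_cut_hz` over the 0.01 mHz to 100 mHz range, as in Figure 3(e), then amounts to rerunning this filter on the training inputs and comparing the resulting estimation errors.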

A series of recommendations and best practices can be drawn from these case studies, helping practitioners to design and apply neural networks for their battery applications more quickly and effectively.

[1] C. Vidal, P. Malysz, P.J. Kollmeyer and A. Emadi, “Machine Learning Applied to Electrified Vehicle Battery State of Charge and State of Health Estimation: State-of-the-Art,” IEEE Access, vol. 8, pp. 52796-52814, 2020.

[2] C. Vidal, P. Malysz, M. Naguib, A. Emadi and P.J. Kollmeyer, “Estimating battery state of charge using recurrent and non-recurrent neural networks,” Journal of Energy Storage, 2021, accepted with minor modifications.

[3] E. Chemali, P.J. Kollmeyer, M. Preindl and A. Emadi, “A Deep Learning Approach for State-of-Health Estimation of Li-ion Batteries,” draft prepared for journal submission.

[4] M. Naguib, P.J. Kollmeyer, C. Vidal, A. Emadi, “Accurate Surface Temperature Estimation of Lithium-Ion Batteries Using Feedforward and Recurrent Artificial Neural Network Models,” IEEE Transportation Electrification Conference and Expo (ITEC), Virtual Conference, 2021.





Carlos Vidal, Ephrem Chemali, Mina Naguib, Ali Emadi
