Asset Details
Deep Learning Approach for Equivalent Circuit Model Parameter Identification of Lithium-Ion Batteries
by Liu, Yi-Hua; Khanh, Dat Nguyen; Ho, Kun-Che; Wang, Shun-Chung; Hsueh, Yu-Fang
Journal Article, 2025
Subjects: Accuracy / Analysis / Artificial neural networks / Batteries / Computational linguistics / Datasets / Deep learning / Efficiency / Electric vehicles / Equivalent circuits / Factorial design / Heuristic / Hypercubes / Kalman filters / Language processing / Latin hypercube sampling / Lithium / Lithium-ion batteries / Machine learning / Natural language interfaces / Neural networks / Optimization algorithms / Parameter estimation / Parameter identification / Recurrent neural networks
Overview
This study proposes a deep learning (DL)-based method for identifying the parameters of equivalent circuit models (ECMs) of lithium-ion batteries from time-series voltage response data measured in current pulse charge–discharge experiments, and presents the first application of DL techniques to this task. Among the baseline models (recurrent neural network, long short-term memory, and gated recurrent unit), the best performer achieved a mean absolute percentage error (MAPE) of 0.52073 across the five parameters. More advanced models, a one-dimensional convolutional neural network (1DCNN) and a temporal convolutional network, were then trained on data generated by a full factorial design (FFD), yielding substantial MAPE improvements of 37.8% and 30.4%, respectively. Latin hypercube sampling (LHS) was also investigated for training data generation and achieved comparable or better performance than FFD with only two-thirds of the training samples. In particular, the 1DCNN trained on LHS-generated data achieved the best overall performance, with an average MAPE of 0.237409. These results highlight the potential of DL models combined with efficient sampling strategies for ECM parameter identification.
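As a rough illustration of the data-generation step described above, the Python sketch below draws ECM parameter sets with Latin hypercube sampling and simulates the terminal-voltage response to a single current pulse. The second-order (2RC) Thevenin structure, the parameter ranges, the pulse profile, and all function names are assumptions made for illustration; the abstract specifies only that five parameters are identified from pulse charge-discharge data.

# Sketch of training-data generation for ECM parameter identification.
# Assumed (not stated in the abstract): a 2RC Thevenin model with parameters
# [R0, R1, C1, R2, C2], illustrative parameter ranges, and one constant-current
# discharge pulse followed by a rest period.
import numpy as np
from scipy.stats import qmc

# Illustrative bounds in ohms and farads; the paper's actual ranges are not given here.
LOWER = np.array([0.5e-3, 0.5e-3, 1e3, 0.5e-3, 1e4])   # R0, R1, C1, R2, C2
UPPER = np.array([5e-3,   5e-3,   1e4, 5e-3,   1e5])

def sample_parameters_lhs(n_samples, seed=0):
    """Draw ECM parameter sets with Latin hypercube sampling."""
    sampler = qmc.LatinHypercube(d=5, seed=seed)
    unit = sampler.random(n_samples)               # points in the unit hypercube
    return qmc.scale(unit, LOWER, UPPER)           # rescale to physical ranges

def pulse_response(params, i_pulse=2.0, t_pulse=10.0, t_rest=30.0, dt=0.1, ocv=3.7):
    """Terminal-voltage response of a 2RC Thevenin ECM to one discharge pulse."""
    r0, r1, c1, r2, c2 = params
    t = np.arange(0.0, t_pulse + t_rest, dt)
    current = np.where(t < t_pulse, i_pulse, 0.0)  # discharge pulse, then rest
    a1, a2 = np.exp(-dt / (r1 * c1)), np.exp(-dt / (r2 * c2))
    v1 = v2 = 0.0
    voltage = np.empty_like(t)
    for k, i_k in enumerate(current):
        voltage[k] = ocv - i_k * r0 - v1 - v2
        v1 = a1 * v1 + r1 * (1.0 - a1) * i_k       # exact discretisation of each RC branch
        v2 = a2 * v2 + r2 * (1.0 - a2) * i_k
    return voltage

# Build a (samples, time-steps) voltage dataset and its parameter labels.
params = sample_parameters_lhs(n_samples=2000)
X = np.stack([pulse_response(p) for p in params])
y = params

A minimal 1D convolutional regressor in the same spirit as the paper's 1DCNN could then map each voltage sequence to the five parameters. The layer sizes, target normalisation, and training settings below are illustrative placeholders, not the architecture or hyperparameters reported in the study.

# Minimal 1D-CNN regressor sketch (PyTorch), reusing X, y, LOWER, UPPER from above.
import torch
import torch.nn as nn

class ParamRegressor1DCNN(nn.Module):
    def __init__(self, n_params=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # collapse the time axis
        )
        self.head = nn.Linear(32, n_params)

    def forward(self, x):                            # x: (batch, 1, time)
        return self.head(self.features(x).squeeze(-1))

# Scale targets to [0, 1] so the five parameters contribute comparably to the loss.
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(1)
y_t = torch.tensor((y - LOWER) / (UPPER - LOWER), dtype=torch.float32)

model = ParamRegressor1DCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):                              # full-batch training, for brevity
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()

In this hypothetical setup, MAPE would be evaluated per parameter after mapping the network outputs back to physical units, mirroring the averaged-MAPE comparison quoted in the abstract.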