Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
1 result for "Tripura, Sajib"
Charging stations demand forecasting using LSTM based hybrid transformer model
by Tripura, Sajib; Hussain, Adil; Aslam, Ayesha
in 639/4077/4073/4071; 639/705/1042; 639/705/117
2025
Accurate forecasting of energy demand for electric vehicles (EVs) is crucial for maintaining the stability and reliability of power systems. By predicting demand over various periods, charging station owners can ensure a continuous energy supply. Medium-term and long-term demand predictions, which extend from a few weeks to several months, help analyze charging demand across different periods based on historical trends. This study proposes a Transformer model that utilizes an LSTM-based encoder-decoder for forecasting demand at electric vehicle charging stations (EVCS). The proposed model is compared with traditional deep-learning-based LSTM and Transformer models. The research employs open datasets from ACN, including charging data from Caltech and JPL; both datasets are used to train and test the models. Predictions are made 30, 120, and 240 days ahead, and the results are compared with actual demand. Performance is evaluated using Mean Absolute Error (MAE) and Mean Squared Error (MSE). Compared to the baseline models, the proposed LSTM-Transformer model on the Caltech data shows a significant improvement at the 30-day horizon, lowering MAE by up to 17.27% and MSE by 19.79%. The accuracy improvements are smaller but consistent at the longer horizons (120 and 240 days), with MAE and MSE improvements of up to 5.71% and 4.85%, respectively. The LSTM-Transformer model also achieves better accuracy across all horizons on the JPL data, reducing MAE and MSE by up to 24.91% and 23.17% at 30 days, 5.00% and 5.17% at 120 days, and 3.90% and 4.86% at 240 days. The results indicate that the hybrid Transformer model outperforms the baseline models on both datasets for medium-term and long-term predictions across the 30-, 120-, and 240-day horizons.
Journal Article
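Note: the abstract describes an LSTM-based encoder-decoder combined with a Transformer but gives no implementation details, so the PyTorch sketch below is only one plausible wiring, not the authors' method. The class name, layer sizes, 90-day lookback window, and 30-day horizon are illustrative assumptions; the MAE/MSE computation at the end mirrors the two metrics the study reports.

import torch
import torch.nn as nn

class LSTMTransformerForecaster(nn.Module):
    # Hypothetical hybrid: an LSTM encodes the demand history, a Transformer
    # encoder applies self-attention over the LSTM states to capture
    # long-range structure, and a linear head emits the multi-step forecast.
    # All layer sizes here are assumptions, not the paper's configuration.
    def __init__(self, n_features=1, hidden=64, n_heads=4, n_layers=2,
                 horizon=30):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                   # x: (batch, lookback, n_features)
        states, _ = self.lstm(x)            # (batch, lookback, hidden)
        encoded = self.transformer(states)  # attention over the LSTM states
        return self.head(encoded[:, -1])    # (batch, horizon) demand forecast

# Toy usage: random data stands in for the ACN charging records.
model = LSTMTransformerForecaster(horizon=30)
history = torch.randn(8, 90, 1)             # 8 windows of 90 daily readings
forecast = model(history)                   # 30-day-ahead forecasts

# MAE and MSE, the two metrics reported in the study.
actual = torch.randn(8, 30)
mae = torch.mean(torch.abs(forecast - actual))
mse = torch.mean((forecast - actual) ** 2)
print(f"MAE={mae.item():.3f}  MSE={mse.item():.3f}")

Feeding LSTM hidden states through self-attention is a common way such hybrids combine recurrence for local dynamics with attention for long-range dependencies, which is consistent with the gains the abstract reports at the longer horizons.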