Catalogue Search | MBRL
Explore the vast range of titles available.
4 result(s) for "dual‐stage attention mechanism"
A chaotic time series combined prediction model for improving trend lagging
by Chen, Lizhi; Zheng, Yuanfang; Feng, Yongxin
in chaos, chaotic time series prediction, combined prediction model
2024
Chaotic time series prediction is a prediction method based on chaos theory and has important theoretical and application value. At present, most prediction methods only pursue numerical fitting and do not consider the directional trend; in addition, a single model alone will not achieve better prediction results. Therefore, a chaotic time series combined prediction model for improving trend lagging (ITL) is proposed. An improved dual‐stage attention‐based long short‐term memory model with an improved training objective function is designed to solve the trend lagging problem. Then, an auto regressive moving average model with a sliding window is established to mine characteristics of the time series other than the nonlinear characteristic. Finally, the idea of an optimization algorithm is introduced to construct a high-accuracy time series combined prediction model based on the above two models, so as to perform chaotic time series prediction from multiple perspectives. Multiple datasets are selected as experimental datasets, and the proposed method is compared with common prediction methods. The results show that the proposed method can achieve single‐step prediction with high accuracy and effectively reduce the lagging of chaotic time series prediction. This research can provide theoretical support for complex chaotic time series prediction.
In this paper, a time series combined prediction model for improving trend lagging is proposed. An improved dual‐stage attention‐based long short‐term memory model is designed, and an optimized training objective function is constructed to address the problem that existing prediction methods do not consider the directional trend. The idea of an optimization algorithm is introduced to construct a high-accuracy time series combined prediction model, and the time series prediction is performed from multiple perspectives so as to improve the generalization ability of the model.
Journal Article
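The combined-model idea described in the abstract above can be illustrated with a minimal sketch: two base forecasts (stand-ins for the dual-stage attention LSTM and the sliding-window ARMA model) are blended with a convex weight chosen to minimise validation error. All names and data below are hypothetical, and the trend-lagging objective itself is not reproduced here.

```python
# Minimal sketch of a two-model combined forecast: blend an "LSTM" prediction
# and an "ARMA" prediction with a weight that minimises validation MSE.
# Hypothetical stand-in data; not the paper's models or objective.
import numpy as np
from scipy.optimize import minimize_scalar

def combine_forecasts(pred_lstm, pred_arma, y_val):
    """Return the convex weight w minimising MSE of w*lstm + (1-w)*arma."""
    def val_mse(w):
        blended = w * pred_lstm + (1.0 - w) * pred_arma
        return np.mean((blended - y_val) ** 2)
    return minimize_scalar(val_mse, bounds=(0.0, 1.0), method="bounded").x

rng = np.random.default_rng(0)
y_val = np.sin(np.linspace(0, 10, 100))           # toy validation series
pred_lstm = y_val + rng.normal(0, 0.05, 100)      # stand-in DA-LSTM output
pred_arma = y_val + rng.normal(0, 0.10, 100)      # stand-in ARMA output
w = combine_forecasts(pred_lstm, pred_arma, y_val)
print(f"weight assigned to the LSTM branch: {w:.3f}")
```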
A Short-Term Load Forecasting Model Based on Crisscross Grey Wolf Optimizer and Dual-Stage Attention Mechanism
2023
Accurate short-term load forecasting is of great significance to the safe and stable operation of power systems and the development of the power market. Most existing studies apply deep learning models that consider only one feature or temporal relationship in the load time series. Therefore, to obtain an accurate and reliable prediction result, a hybrid prediction model combining a dual-stage attention mechanism (DA), a crisscross grey wolf optimizer (CS-GWO) and a bidirectional gated recurrent unit (BiGRU) is proposed in this paper. DA is introduced on the input side of the model to improve its sensitivity to key features and to information at key time points simultaneously. CS-GWO is formed by combining horizontal and vertical crossover operators to enhance the global search ability and population diversity of GWO. Meanwhile, BiGRU is optimized by CS-GWO to accelerate the convergence of the model. Finally, a collected load dataset, four evaluation metrics, and parametric and non-parametric statistical tests are used to evaluate the proposed CS-GWO-DA-BiGRU short-term load prediction model. The experimental results show that the RMSE, MAE and SMAPE are reduced by 3.86%, 1.37% and 0.30%, respectively, compared with those of the second-best performing CSO-DA-BiGRU model, which demonstrates that the proposed model can better fit the load data and achieve better prediction results.
Journal Article
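As a rough illustration of the BiGRU backbone that the abstract builds on, the PyTorch sketch below defines a bidirectional GRU forecaster with a linear output head. The dual-stage attention input stage and the CS-GWO hyper-parameter search are omitted, and the layer sizes are assumptions, not the paper's settings.

```python
# Minimal bidirectional-GRU forecaster sketch (PyTorch); illustrative sizes only.
import torch
import torch.nn as nn

class BiGRUForecaster(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.bigru = nn.GRU(n_features, hidden_size,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)   # both directions concatenated

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.bigru(x)
        return self.head(out[:, -1, :])   # one-step-ahead load forecast

model = BiGRUForecaster(n_features=8)
x = torch.randn(32, 24, 8)               # e.g. 24 past intervals of 8 load features
print(model(x).shape)                     # torch.Size([32, 1])
```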
RAdam-DA-NLSTM: A Nested LSTM-Based Time Series Prediction Method for Human–Computer Intelligent Systems
2023
At present, time series prediction methods are widely applied in Human–Computer Intelligent Systems in various fields such as finance, meteorology, and medicine. To enhance the accuracy and stability of the prediction model, this paper proposes a time series prediction method called RAdam-Dual stage Attention mechanism-Nested Long Short-Term Memory (RAdam-DA-NLSTM). First, we design a Nested LSTM (NLSTM), which adopts a new internal LSTM unit structure as the memory cell of the LSTM to guide memory forgetting and memory selection. Then, we design a self-encoder network based on the Dual stage Attention mechanism (DA-NLSTM), which uses an NLSTM encoder based on the input attention mechanism and an NLSTM decoder based on the temporal attention mechanism. Additionally, we adopt the RAdam optimizer to solve the objective function; it dynamically selects between the Adam and SGD optimizers according to the variance dispersion and constructs a rectifier term to fully express the adaptive momentum. Finally, we use multiple datasets, including a PM2.5 dataset, a stock dataset, a traffic dataset, and biological signals, to analyze and test this method, and the experimental results show that RAdam-DA-NLSTM has higher prediction accuracy and stability compared with other traditional methods.
Journal Article
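The sketch below shows a simplified version of the first (input-attention) stage of a dual-stage attention encoder feeding a standard LSTM, trained with PyTorch's built-in RAdam optimizer. The nested-LSTM memory cell and the temporal-attention decoder from the paper are not reproduced, and all dimensions are illustrative assumptions.

```python
# Simplified input-attention encoder sketch; the full DA-NLSTM design is not
# reproduced here. Dimensions are assumptions.
import torch
import torch.nn as nn

class InputAttentionEncoder(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.attn = nn.Linear(n_features, n_features)   # per-feature scores
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)

    def forward(self, x):                        # x: (batch, seq_len, n_features)
        weights = torch.softmax(self.attn(x), dim=-1)   # input attention weights
        out, _ = self.lstm(weights * x)          # encode the re-weighted series
        return out                               # (batch, seq_len, hidden_size)

encoder = InputAttentionEncoder(n_features=6)
optimizer = torch.optim.RAdam(encoder.parameters(), lr=1e-3)  # RAdam optimizer
print(encoder(torch.randn(16, 30, 6)).shape)     # torch.Size([16, 30, 64])
```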
DA-RNN-Based Bus Arrival Time Prediction Model
2024
Accurate prediction of bus arrival time is crucial for constructing smart cities and intelligent transportation systems and for ensuring their efficient operation; it is therefore essential to achieve precise bus arrival time prediction. A recurrent neural network prediction model employing a dual-stage attention mechanism is proposed. The model was constructed based on bidirectional long short-term memory networks, and the arrival time predictions incorporate both dynamic and static factors of bus travel. The model utilized an advanced seagull optimization algorithm to optimize the model parameters, enhanced model iteration and population richness by incorporating the sine-cosine operator and adaptive parameters, and was ultimately validated through simulation experiments. The experimental results showed that the prediction error of the benchmark model was 324 s and that of the normal peak was 87 s, while the prediction error of the model considering dynamic and static factors was 6 s to 8 s. The minimum values of the mean absolute percentage error, root mean square error and mean absolute error of the model were 0.07, 11.28 and 9.22, respectively. The experimental results demonstrated that the model achieves the highest prediction accuracy, substantiating its potential for accurate prediction. Furthermore, the model's performance is effectively shielded from the impact of peak times, and the model is feasible in practical application.
Journal Article
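For reference, the three error metrics quoted in the abstract (mean absolute percentage error, root mean square error and mean absolute error) can be computed as in the short sketch below; the arrival-time values are hypothetical and unrelated to the paper's dataset.

```python
# MAPE, RMSE and MAE for a hypothetical set of predicted vs. actual arrival times.
import numpy as np

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([300.0, 420.0, 610.0, 180.0, 95.0])   # actual arrival times (s)
y_pred = np.array([310.0, 405.0, 630.0, 172.0, 101.0])  # predicted arrival times (s)
print(mape(y_true, y_pred), rmse(y_true, y_pred), mae(y_true, y_pred))
```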