17 result(s) for "Diebold–Mariano test"
Forecaster's Dilemma: Extreme Events and Forecast Evaluation
In public discussions of the quality of forecasts, attention typically focuses on the predictive performance in cases of extreme events. However, the restriction of conventional forecast evaluation methods to subsets of extreme observations has unexpected and undesired effects, and is bound to discredit skillful forecasts when the signal-to-noise ratio in the data generating process is low. Conditioning on outcomes is incompatible with the theoretical assumptions of established forecast evaluation methods, thereby confronting forecasters with what we refer to as the forecaster's dilemma. For probabilistic forecasts, proper weighted scoring rules have been proposed as decision-theoretically justifiable alternatives for forecast evaluation with an emphasis on extreme events. Using theoretical arguments, simulation experiments and a real data study on probabilistic forecasts of U.S. inflation and gross domestic product (GDP) growth, we illustrate and discuss the forecaster's dilemma along with potential remedies.
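The dilemma described above can be reproduced in a small simulation (a hedged sketch, not the paper's experiment): an ideal point forecaster is compared with a hypothetical "alarmist" that always predicts an extreme value, and evaluation is then restricted to the subset of extreme outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
mu = rng.normal(0.0, 1.0, n)           # predictable signal
y = mu + rng.normal(0.0, 2.0, n)       # observations with a low signal-to-noise ratio

ideal = mu                              # the ideal point forecast
alarmist = np.full(n, 3.0)              # hypothetical: always predicts an extreme value

def mae(forecast, outcome):
    return float(np.mean(np.abs(forecast - outcome)))

extreme = y > 2.0                       # conditioning on extreme outcomes
mae_all = (mae(ideal, y), mae(alarmist, y))
mae_ext = (mae(ideal[extreme], y[extreme]), mae(alarmist[extreme], y[extreme]))
# Unconditionally the ideal forecaster wins; restricted to the extreme
# subset, the alarmist looks better -- the forecaster's dilemma.
```

Proper weighted scoring rules, as the abstract notes, let one emphasize extremes without this selection bias.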
Improved forecasting of carbon dioxide emissions using a hybrid SSA ARIMA model based on annual time series data in Bahrain
Forecasting carbon dioxide (CO₂) emissions has become crucial for attaining environmental sustainability, especially in Bahrain, which relies heavily on fossil fuels. There is therefore a need for more accurate modeling tools suited to Bahrain’s emission patterns, particularly in light of increasing environmental pressure and the dearth of previous studies. Accordingly, we propose a hybrid forecasting model that combines Singular Spectrum Analysis (SSA) with the Autoregressive Integrated Moving Average (ARIMA) method. The hybrid model decomposes the annual CO₂ emissions time series into trend, periodic, and noise components using SSA, then applies ARIMA individually to each component. The study uses World Bank annual CO₂ emission data for three time periods: 1990–2018, 2000–2018, and 2003–2018. Model performance was evaluated using standard error metrics: Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE). To assess whether the observed improvements were statistically significant, the Diebold–Mariano (DM) test, a widely used method for comparing the predictive accuracy of competing models, was also applied. Forecast evaluation metrics such as Theil’s U-statistic and out-of-sample forecast plots with confidence intervals further strengthened the assessment. Results show that the SSA-ARIMA hybrid model significantly outperforms the conventional ARIMA model. For instance, for forecasts over the 2014–2018 period, the hybrid model achieved lower MAPE values (1.12%, 0.91%, and 1.40%) than ARIMA (2.14%, 1.69%, and 1.41%) across the respective time frames. These results demonstrate the hybrid SSA-ARIMA model’s potential as a reliable tool for Bahrain’s emissions forecasting.
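The Diebold–Mariano test that recurs throughout these results compares two competing forecasts via the sample mean of their loss differential. A minimal sketch under power-loss (the rectangular-kernel HAC variance and normal reference distribution are standard simplifications; the papers' exact variants may differ):

```python
import math
import numpy as np

def diebold_mariano(e1, e2, h=1, power=2):
    """DM test for equal predictive accuracy from two forecast-error series.

    e1, e2: forecast errors of the competing models; h: forecast horizon;
    power: loss exponent (2 = squared-error loss). Returns the DM statistic
    (positive when model 2 is more accurate) and a two-sided normal p-value.
    """
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    d = np.abs(e1) ** power - np.abs(e2) ** power   # loss differential
    n = d.size
    dbar = d.mean()
    # HAC variance of dbar using h-1 autocovariance lags (rectangular kernel)
    acov = [float((d - dbar) @ (d - dbar)) / n]
    for k in range(1, h):
        acov.append(float((d[k:] - dbar) @ (d[:-k] - dbar)) / n)
    var_dbar = (acov[0] + 2.0 * sum(acov[1:])) / n
    dm = dbar / math.sqrt(var_dbar)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(dm) / math.sqrt(2.0))))
    return dm, p
```

Small-sample corrections (e.g., Harvey, Leybourne and Newbold's adjustment with a Student-t reference distribution) are common refinements for short series.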
Predicting U.S. recessions through a combination of probability forecasts
Recently, De Luca and Carfora (Statistica e Applicazioni 8:123–134, 2010) proposed a novel model for binary time series, the Binomial Heterogeneous Autoregressive (BHAR) model, successfully applied to the quarterly binary time series of U.S. recessions. In this work we measure the efficacy of the out-of-sample forecasts of the BHAR model compared with the probit models of Kauppi and Saikkonen (Rev Econ Stat 90:777–791, 2008). Given the substantially indistinguishable predictive accuracy of the BHAR and probit models, we analyze a combination of forecasts using the method proposed by Bates and Granger (Oper Res Q 20:451–468, 1969) for probability forecasts. We show that the forecasts obtained by combining the BHAR model with each of the probit models are superior to the forecasts obtained by each single model.
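The Bates–Granger (1969) combination referenced above weights each forecast inversely to its past mean squared error. A hedged sketch of that weighting scheme:

```python
import numpy as np

def bates_granger_weights(err1, err2):
    """Inverse-MSE combination weights from past forecast errors."""
    s1 = float(np.mean(np.square(err1)))
    s2 = float(np.mean(np.square(err2)))
    w1 = s2 / (s1 + s2)          # the less accurate model gets the smaller weight
    return w1, 1.0 - w1

def combine(p1, p2, w1):
    """Convex combination of two probability forecasts."""
    return w1 * np.asarray(p1) + (1.0 - w1) * np.asarray(p2)
```

Because the Brier score is convex in the forecast probability, the combined forecast's Brier score never exceeds the weighted average of the individual scores.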
Bayesian model averaging based deep learning forecasts of inpatient bed occupancy in mental health facilities
Mental health disorders affect over 15% of the global working-age population, contributing to an annual economic loss of approximately USD 1 trillion due to diminished productivity and increased healthcare expenditures. In India, the post-pandemic surge in hospitalizations has placed additional strain on mental health infrastructure, exacerbating an already significant treatment gap. Overcrowding and inadequate forecasting mechanisms have resulted in occupancy rates that exceed hospital capacity, underscoring the urgent need for predictive tools to support admission planning and resource allocation. This study introduces a novel forecasting framework that applies Bayesian Model Averaging (BMA) with Zellner’s g-prior, used here for the first time alongside deep learning models, for predicting weekly bed occupancy at India’s second-largest mental health hospital. Time series data from 2008 to 2024 were used to train six models: Time Delay Neural Networks (TDNN), Recurrent Neural Networks (RNN), Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Bidirectional GRU (BiGRU). Model performance was optimized using random search (RS) and grid search (GS) hyperparameter tuning, allowing the framework to account for model uncertainty while improving predictive accuracy and consistency. Among all models, the GS-tuned BiLSTM and the BMA-GS model showed the best forecasting performance for bed occupancy, achieving 98.06% accuracy (MAPE: 1.939%) and effectively capturing weekly fluctuations within ±13 beds. In contrast, RS-tuned models yielded higher errors (MAPE: 2.331%). Moreover, the average credible interval width decreased from 16.34 under BMA-RS to 13.28 with BMA-GS, indicating improved forecast precision and reliability. This study demonstrates that embedding Bayesian statistics, specifically BMA with Zellner’s g-prior, into deep learning architectures offers a robust and scalable solution for forecasting hospital bed occupancy.
The proposed framework enhances predictive accuracy and reliability, supporting data-driven planning for hospital administrators and policymakers. It aligns with the objectives of India’s National Mental Health Programme (NMHP) and Sustainable Development Goal 3, advancing equitable and efficient access to mental healthcare.
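Full BMA with Zellner's g-prior is involved; a common large-sample shortcut approximates posterior model weights from BIC values. The sketch below uses that simplified stand-in, not the paper's implementation:

```python
import numpy as np

def bma_weights_from_bic(bics):
    """Approximate posterior model weights from BIC values.

    Large-sample approximation: w_m is proportional to
    exp(-0.5 * (BIC_m - min BIC)). (The paper uses Zellner's g-prior;
    this BIC shortcut is only a stand-in for illustration.)
    """
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))
    return w / w.sum()

def bma_forecast(forecasts, weights):
    """Weighted average of per-model forecast vectors (models on axis 0)."""
    return np.tensordot(np.asarray(weights), np.asarray(forecasts), axes=1)
```

The combined forecast shades toward the model with the lowest BIC while still borrowing from the others, which is how BMA accounts for model uncertainty.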
Comparative Analysis of Machine Learning Techniques in Predicting Wind Power Generation: A Case Study of 2018–2021 Data from Guatemala
The accurate forecasting of wind power has become a crucial task in renewable energy due to its inherent variability and uncertainty. This study addresses the challenge of predicting wind power generation without meteorological data by utilizing machine learning (ML) techniques on data from 2018 to 2021 from three wind farms in Guatemala. Various machine learning models, including Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), Bagging, and Extreme Gradient Boosting (XGBoost), were evaluated to determine their effectiveness. The performance of these models was assessed using Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) metrics. Time series cross-validation was employed to validate the models, with GRU, LSTM, and BiLSTM showing the lowest RMSE and MAE. Furthermore, the Diebold–Mariano (DM) test and Bayesian model comparison were used for pairwise comparisons, confirming the robustness and accuracy of the top-performing models. The results highlight the superior accuracy and robustness of advanced neural network architectures in capturing the complex temporal dependencies in wind power data, making them the most reliable models for precise forecasting. These findings provide critical insights for enhancing grid management and operational planning in the renewable energy sector.
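Time series cross-validation, as used above, keeps folds in temporal order so training data never includes the future. A minimal expanding-window sketch (the window sizes are illustrative, not the study's):

```python
import numpy as np

def expanding_window_splits(n, initial, horizon=1, step=1):
    """Yield (train_idx, test_idx) pairs for time series cross-validation.

    The training window grows from `initial` observations; each fold tests
    on the next `horizon` points, so the future never leaks into training.
    """
    t = initial
    while t + horizon <= n:
        yield np.arange(t), np.arange(t, t + horizon)
        t += step
```

Per-fold errors from such splits are exactly the forecast-error series that pairwise tests like Diebold–Mariano then compare.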
Stock Market Index Prediction Using CEEMDAN‐LSTM‐BPNN‐Decomposition Ensemble Model
This study investigates the forecasting of the Deutscher Aktienindex (DAX) market index by addressing the nonlinear and nonstationary nature of financial time series data using the CEEMDAN decomposition method. The CEEMDAN technique is used to decompose the time series into intrinsic mode functions (IMFs) and residuals, which are classified into low-frequency (LF), medium-frequency (MF), and high-frequency (HF) components. Long short-term memory (LSTM) networks are applied to the MF and HF components, while the backpropagation neural network (BPNN) is utilized for the LF components, resulting in a robust hybrid model termed CEEMDAN-LSTM-BPNN. To evaluate the performance of the proposed model, we compare it against several benchmark models, including ARIMA, RNN, LSTM, GRU, BiGRU, BiLSTM, BPNN, CEEMDAN-LSTM, CEEMDAN-GRU, CEEMDAN-BPNN, and CEEMDAN-GRU-BPNN, across different training-testing splits (70% training/30% testing, 80% training/20% testing, and 90% training/10% testing). The model's predictive accuracy is measured using six metrics: root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), symmetric mean absolute percentage error (SMAPE), root mean squared logarithmic error (RMSLE), and R-squared. To further assess model performance, we conduct the Diebold–Mariano (DM) test to compare forecast accuracy between the proposed and benchmark models and the model confidence set (MCS) test to evaluate the statistical significance of the improvement. The results demonstrate that the CEEMDAN-LSTM-BPNN model significantly outperforms other methods in terms of accuracy, with the DM and MCS tests confirming the superiority of the proposed model across multiple evaluation metrics. The findings highlight the importance of combining advanced decomposition methods and deep learning models for financial forecasting.
This research contributes to the development of more accurate forecasting techniques, offering valuable implications for financial decision‐making and risk management.
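Grouping IMFs into low-, medium-, and high-frequency components, as described above, can be done by zero-crossing rate. The sketch below assumes the IMFs were already produced by a CEEMDAN implementation (e.g., a package such as PyEMD); the cutoffs are illustrative, not the paper's grouping criteria:

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of adjacent sample pairs whose signs (about the mean) differ."""
    s = np.sign(x - np.mean(x))
    return float(np.mean(s[1:] != s[:-1]))

def classify_imfs(imfs, hf_cut=0.2, lf_cut=0.02):
    """Label each IMF 'HF', 'MF', or 'LF' by its zero-crossing rate.

    hf_cut and lf_cut are illustrative thresholds: faster-oscillating IMFs
    cross their mean more often per sample.
    """
    labels = []
    for imf in imfs:
        z = zero_crossing_rate(np.asarray(imf, float))
        labels.append('HF' if z >= hf_cut else 'LF' if z <= lf_cut else 'MF')
    return labels
```

Once grouped, each band can be handed to the model suited to it (LSTM for MF/HF, BPNN for LF, in the study's design).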
Forecasting container freight rates for major trade routes: a comparison of artificial neural networks and conventional models
Major players in maritime business such as shipping lines, charterers, shippers, and others rely on container freight rate forecasts for operational decision-making. The absence of a formal forward market in container shipping necessitates reliance on forecasts, including for hedging purposes. To identify better-performing forecasting approaches, we compare three model classes, namely autoregressive integrated moving average (ARIMA), vector autoregressive (VAR) or vector error correction (VEC), and artificial neural network (ANN) models. We examine the China Containerized Freight Index (CCFI), a collection of weekly freight rates published by the Shanghai Shipping Exchange (SSE), for four major trade routes. We find that, overall, VAR/VEC models outperform ARIMA and ANN in training-sample forecasts, but ARIMA outperforms VAR and ANN on test samples. At the route level, we observe two exceptions: ARIMA performs better for the Far East to Mediterranean route in the training sample, and the VEC model does so for the Far East to US East Coast route in the test sample. Hence, we advise industry players to use ARIMA for forecasting container freight rates for major trade routes ex-China, except for VEC in the case of the Far East to US East Coast route.
A novel hybrid time-series approach for IoT-cloud-enabled environment monitoring
Air pollution is a growing concern in today’s urbanized world, necessitating efficient and accurate methods for air quality monitoring. The proliferation of IoT devices has led to a surge in the generation of time-series data. With its high volume and complexity, this surge in time-series data necessitates cloud-based solutions for handling and analyzing this data effectively. However, existing methods for air quality monitoring face challenges in capturing the complex patterns and dynamics of air pollution, which often exhibit both linear and nonlinear characteristics. Linearity and nonlinearity refer to the nature of the relationships within the data: some aspects of air quality, such as pollutant concentrations, may follow linear patterns, while other factors, like the interaction of multiple pollutants and environmental conditions, exhibit nonlinear relationships. This complexity arises from the multifaceted nature of air quality dynamics, which can be influenced by various factors and interactions. To address these challenges, this study introduces a novel hybrid time-series approach that combines the proven strengths of two well-established techniques: the traditional time-series autoregressive integrated moving average (ARIMA) model and the soft-computing adaptive neuro-fuzzy inference system (ANFIS). The hybrid model is designed to provide a comprehensive solution that accommodates the diverse characteristics of air quality time-series data. To assess the efficacy of our proposed model, we conducted extensive experiments using real-world air pollution datasets obtained from the Ministry of Environment, Forest and Climate Change of India, covering the period from January 2015 to July 2020. Our evaluation includes a range of performance metrics such as root-mean-square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and mean squared logarithmic error (MSLE).
Specifically, our model demonstrates exceptional accuracy, with notably low error values for key indicators such as the air quality index (AQI) and PM2.5. Furthermore, we subjected our innovative hybrid model to rigorous statistical testing using the Diebold–Mariano test, establishing the significance and superiority of our approach. This research advances our understanding of air quality prediction and offers a valuable solution for mitigating the detrimental effects of air pollution on public health and the environment.
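The hybrid linear-plus-nonlinear idea can be sketched in two stages: a linear model captures linear structure, and a nonlinear model is then fit to its residuals. Here the stages are an AR(2) and a quadratic residual regression, simple stand-ins for the paper's ARIMA and ANFIS components:

```python
import numpy as np

def fit_linear(y, p=2):
    """Least-squares AR(p) stage (stand-in for the paper's ARIMA)."""
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.linalg.lstsq(X, y[p:], rcond=None)[0]
    return beta, X

def hybrid_fit_predict(y, p=2):
    """Hybrid idea: linear stage plus a nonlinear stage on its residuals.

    The nonlinear stage here is a quadratic regression on the lagged
    residual (a stand-in for the paper's ANFIS component).
    """
    beta, X = fit_linear(y, p)
    resid = y[p:] - X @ beta
    # nonlinear stage: predict resid_t from resid_{t-1} and its square
    R = np.column_stack([np.ones(len(resid) - 1), resid[:-1], resid[:-1] ** 2])
    gamma = np.linalg.lstsq(R, resid[1:], rcond=None)[0]
    fitted = (X @ beta)[1:] + R @ gamma
    return fitted, y[p + 1:]
```

Because the second stage is itself a least-squares fit, the hybrid's in-sample error can never exceed the linear stage's; any out-of-sample gain depends on genuine nonlinear structure in the residuals.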
Forecasting Asset Returns Using Nelson–Siegel Factors Estimated from the US Yield Curve
This paper explores the hypothesis that the returns of asset classes can be predicted using common, systematic risk factors represented by the level, slope, and curvature of the US interest rate term structure. These are extracted using the Nelson–Siegel model, which effectively captures the three dimensions of the yield curve. To forecast the factors, we applied autoregressive (AR) and vector autoregressive (VAR) models. Using their forecasts, we predicted the returns of government and corporate bonds, equities, REITs, and commodity futures. Our predictions were compared against two benchmarks: the historical mean, and an AR(1) model based on past returns. We employed the Diebold–Mariano test and the Model Confidence Set procedure to assess the comparative forecast accuracy. We found that Nelson–Siegel factors had significant predictive power for one-month-ahead returns of bonds, equities, and REITs, but not for commodity futures. However, for 6-month and 12-month-ahead forecasts, neither the AR(1) nor VAR(1) models based on Nelson–Siegel factors outperformed the benchmarks. These results suggest that the Nelson–Siegel factors affect the aggregate stochastic discount factor for pricing all assets traded in the US economy.
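For a fixed decay parameter, extracting the Nelson–Siegel level, slope, and curvature factors reduces to an OLS regression of yields on three factor loadings. A sketch following the common Diebold–Li parameterization (the paper's estimation details may differ):

```python
import numpy as np

def nelson_siegel_factors(maturities, yields, lam=0.0609):
    """Extract (level, slope, curvature) by OLS with lambda held fixed.

    lam = 0.0609 is the value popularized by Diebold and Li for maturities
    measured in months; it maximizes the curvature loading near 30 months.
    """
    tau = np.asarray(maturities, dtype=float)
    x1 = (1 - np.exp(-lam * tau)) / (lam * tau)      # slope loading
    X = np.column_stack([np.ones_like(tau),          # level loading
                         x1,
                         x1 - np.exp(-lam * tau)])   # curvature loading
    beta, *_ = np.linalg.lstsq(X, np.asarray(yields, float), rcond=None)
    return beta
```

Repeating this regression for each date yields the factor time series that the AR and VAR forecasting models then take as input.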
Forecasting temperature data with complex seasonality using time series methods
Predicting air temperature is crucial in climate change and global warming studies. Because seasonal behaviour is so significant in weather, selecting a model capable of handling the temperature’s seasonal patterns is essential. Seasonal fluctuations in high-frequency time-series data, such as daily data, are more complex and persist for extended periods, making them challenging for traditional seasonal forecasting models, which work best with monthly or quarterly data. The current study presents and evaluates state-of-the-art univariate time-series forecasting algorithms for high-frequency data with complex seasonal patterns. Four prediction methods are presented, and their effectiveness in predicting high-frequency temperatures is investigated. The models are dynamic harmonic regression, TBATS, Facebook Prophet, and MSTL–ETS. The study provides an empirical application based on daily time series data in Ada, USA, from 2017 to 2020. In addition, 1000 time series were simulated using the statistical properties of the real data. The validation of the simulated data demonstrates that it has the same statistical properties as the real time series, especially for the annual seasonal pattern and the serial correlations in the long and short terms. The prediction accuracy of the models for both actual and simulated data was determined using the root mean squared error (RMSE) and multiple pairwise comparisons of the Diebold–Mariano test. The four approaches’ computational efficiency was evaluated using real and simulated data. The models’ residuals were checked to evaluate the capability of the presented approaches to use the largest amount of information available in the time series. The utility of incorporating the ARIMA process into some forecasting techniques to handle the stochastic process of innovations was demonstrated. Also, the evaluation of time-varying estimation-based forecasting approaches was discussed.
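Harmonic regression, the first of the four methods above, models complex seasonality with Fourier terms as regressors. A minimal sketch without the ARMA error structure the full dynamic version uses (the period and number of harmonics are illustrative):

```python
import numpy as np

def fourier_terms(t, period, K):
    """K sine/cosine pairs for one seasonal period (harmonic regression)."""
    cols = []
    for k in range(1, K + 1):
        w = 2 * np.pi * k * t / period
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

def fit_harmonic(y, period, K):
    """OLS fit of intercept + linear trend + Fourier terms; returns fitted values."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t, fourier_terms(t, period, K)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return X @ beta
```

A non-integer period such as 365.25 for daily data poses no difficulty here, which is one reason Fourier-term approaches suit high-frequency seasonality better than dummy-variable seasonal models.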