Search Results
920 results for "heteroskedasticity"
Distribution-Free Predictive Inference for Regression
We develop a general framework for distribution-free predictive inference in regression, using conformal inference. The proposed methodology allows for the construction of a prediction band for the response variable using any estimator of the regression function. The resulting prediction band preserves the consistency properties of the original estimator under standard assumptions, while guaranteeing finite-sample marginal coverage even when these assumptions do not hold. We analyze and compare, both empirically and theoretically, the two major variants of our conformal framework: full conformal inference and split conformal inference, along with a related jackknife method. These methods offer different tradeoffs between statistical accuracy (length of resulting prediction intervals) and computational efficiency. As extensions, we develop a method for constructing valid in-sample prediction intervals called rank-one-out conformal inference, which has essentially the same computational efficiency as split conformal inference. We also describe an extension of our procedures for producing prediction bands with locally varying length, to adapt to heteroscedasticity in the data. Finally, we propose a model-free notion of variable importance, called leave-one-covariate-out or LOCO inference. Accompanying this article is an R package conformalInference that implements all of the proposals we have introduced. In the spirit of reproducibility, all of our empirical results can also be easily (re)generated using this package.
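The split conformal procedure the abstract describes is simple enough to sketch in a few lines. The paper's own implementation is the R package conformalInference; the minimal Python illustration below uses a plain least-squares fit, synthetic data, and a 90% level purely as assumptions for the sketch:

```python
import math
import random
import statistics

def split_conformal(x, y, x_new, alpha=0.1, seed=0):
    """Split conformal prediction interval around a least-squares line.
    Any regression estimator could replace the fit below; the finite-sample
    marginal coverage guarantee does not depend on the fit being good."""
    rng = random.Random(seed)
    idx = list(range(len(x)))
    rng.shuffle(idx)
    train, calib = idx[: len(x) // 2], idx[len(x) // 2:]
    # fit simple linear regression on the training half
    xt, yt = [x[i] for i in train], [y[i] for i in train]
    xbar, ybar = statistics.mean(xt), statistics.mean(yt)
    b = sum((a - xbar) * (c - ybar) for a, c in zip(xt, yt)) / \
        sum((a - xbar) ** 2 for a in xt)
    a0 = ybar - b * xbar
    # absolute residuals (conformity scores) on the calibration half
    scores = sorted(abs(y[i] - (a0 + b * x[i])) for i in calib)
    # conformal quantile: ceil((1 - alpha) * (m + 1))-th smallest score
    k = math.ceil((1 - alpha) * (len(scores) + 1))
    q = scores[min(k, len(scores)) - 1]
    mu = a0 + b * x_new
    return mu - q, mu + q
```

The interval width 2q is constant in x_new here; the locally varying bands mentioned in the abstract would replace the raw residual with a locally normalized score.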
HETEROSKEDASTIC PCA
A general framework for principal component analysis (PCA) in the presence of heteroskedastic noise is introduced. We propose an algorithm called HeteroPCA, which involves iteratively imputing the diagonal entries of the sample covariance matrix to remove estimation bias due to heteroskedasticity. This procedure is computationally efficient and provably optimal under the generalized spiked covariance model. A key technical step is a deterministic robust perturbation analysis on singular subspaces, which can be of independent interest. The effectiveness of the proposed algorithm is demonstrated in a suite of problems in high-dimensional statistics, including singular value decomposition (SVD) under heteroskedastic noise, Poisson PCA, and SVD for heteroskedastic and incomplete data.
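The diagonal-imputation idea is easiest to see in the rank-one case. Below is a minimal Python sketch, not the paper's implementation: the spiked covariance in the comments, the use of plain power iteration for the leading eigenpair, and the rank-one restriction are all illustrative assumptions.

```python
import math

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def top_eig(A, iters=100):
    """Leading eigenpair of a symmetric matrix via plain power iteration."""
    x = [1.0] * len(A)
    for _ in range(iters):
        y = matvec(A, x)
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
    lam = sum(xi * yi for xi, yi in zip(x, matvec(A, x)))
    return lam, x

def hetero_pca_rank1(sigma, iters=200):
    """Rank-one sketch of HeteroPCA: keep the off-diagonal of the covariance
    fixed and iteratively impute the diagonal from the current low-rank fit,
    which removes the bias caused by heteroskedastic noise variances."""
    n = len(sigma)
    # start with the (noise-biased) diagonal zeroed out
    N = [[0.0 if i == j else sigma[i][j] for j in range(n)] for i in range(n)]
    for _ in range(iters):
        lam, v = top_eig(N)
        for i in range(n):
            N[i][i] = lam * v[i] * v[i]  # diagonal of the rank-one approximation
    return N
```

On a population covariance of the form v v^T + D with diagonal heteroskedastic noise D, the iteration leaves the off-diagonal (which is noise-free) alone and drives the diagonal toward that of v v^T.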
Weak Instruments in Instrumental Variables Regression: Theory and Practice
When instruments are weakly correlated with endogenous regressors, conventional methods for instrumental variables (IV) estimation and inference become unreliable. A large literature in econometrics has developed procedures for detecting weak instruments and constructing robust confidence sets, but many of the results in this literature are limited to settings with independent and homoskedastic data, while data encountered in practice frequently violate these assumptions. We review the literature on weak instruments in linear IV regression with an emphasis on results for nonhomoskedastic (heteroskedastic, serially correlated, or clustered) data. To assess the practical importance of weak instruments, we also report tabulations and simulations based on a survey of papers published in the American Economic Review from 2014 to 2018 that use IV. These results suggest that weak instruments remain an important issue for empirical practice, and that there are simple steps that researchers can take to better handle weak instruments in applications.
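A standard diagnostic from this literature is the first-stage F statistic (the familiar rule of thumb compares it to 10). The sketch below is the minimal homoskedastic version for a single instrument; the robust variants the survey emphasizes for heteroskedastic, serially correlated, or clustered data are not shown, and the example data are illustrative:

```python
def first_stage_F(z, x):
    """First-stage F statistic for one instrument: the squared t-statistic of
    the slope in the regression of the endogenous regressor x on the
    instrument z, using the homoskedastic standard error."""
    n = len(z)
    zbar, xbar = sum(z) / n, sum(x) / n
    szz = sum((a - zbar) ** 2 for a in z)
    b = sum((a - zbar) * (c - xbar) for a, c in zip(z, x)) / szz
    a0 = xbar - b * zbar
    rss = sum((c - a0 - b * a) ** 2 for a, c in zip(z, x))
    se2 = rss / (n - 2) / szz  # homoskedastic variance of the slope
    return b * b / se2
```

A strongly correlated instrument produces a large F; an irrelevant one produces an F near its null chi-squared(1) distribution.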
THE SIZE-POWER TRADEOFF IN HAR INFERENCE
Heteroskedasticity- and autocorrelation-robust (HAR) inference in time series regression typically involves kernel estimation of the long-run variance. Conventional wisdom holds that, for a given kernel, the choice of truncation parameter trades off a test’s null rejection rate and power, and that this tradeoff differs across kernels. We formalize this intuition: using higher-order expansions, we provide a unified size-power frontier for both kernel and weighted orthonormal series tests using nonstandard “fixed-b” critical values. We also provide a frontier for the subset of these tests for which the fixed-b distribution is t or F. These frontiers are respectively achieved by the QS kernel and equal-weighted periodogram. The frontiers have simple closed-form expressions, which show that the price paid for restricting attention to tests with t and F critical values is small. The frontiers are derived for the Gaussian multivariate location model, but simulations suggest the qualitative findings extend to stochastic regressors.
Spillovers across global stock markets before and after the declaration of Russia’s invasion of Ukraine
Since the global financial crisis, studies on systemic risk and financial contagion have gained currency, and events like the COVID-19 pandemic and Russia's invasion of Ukraine have heightened their importance. This study examines the impact of the invasion on volatility transmissions across major stock markets worldwide. The stock indices considered are the ASX 200, ESTOXX 50, FTSE 100, Hang Seng, NIFTY 50, Nikkei, and S&P 500. The work uses Vector Autoregression (VAR) to study the transmission of returns, then fits a Dynamic Conditional Correlation-Generalized Autoregressive Conditional Heteroskedasticity (DCC-GARCH) model to the residuals where the transmission of returns was significant. The DCC-GARCH (E-GARCH) results show that all the asymmetric transmissions are negative. The study finds that co-movements of stock returns for the pairs ESTOXX 50-S&P 500, NIFTY 50-FTSE 100, NIFTY 50-Nikkei, Nikkei-ESTOXX 50, S&P 500-NIFTY 50, and S&P 500-Hang Seng intensified significantly after the declaration of the invasion. This intensification of co-movements establishes a contagion effect triggered by the invasion. The study also shows that the ESTOXX 50, the index geographically closest to the war zone, is the largest generator of spillovers.
INFERENCE IN DIFFERENCES-IN-DIFFERENCES WITH FEW TREATED GROUPS AND HETEROSKEDASTICITY
We derive an inference method that works in differences-in-differences settings with few treated and many control groups in the presence of heteroskedasticity. As a leading example, we provide theoretical justification and empirical evidence that heteroskedasticity generated by variation in group sizes can invalidate existing inference methods, even in data sets with a large number of observations per group. In contrast, our inference method remains valid in this case. Our test can also be combined with feasible generalized least squares, providing a safeguard against misspecification of the serial correlation.
Bivariate Simulation of Potential Evapotranspiration Using Copula-GARCH Model
Extending the statistical record by simulating required values when data are scarce increases the certainty and reliability of simulations and statistical analyses, which is important in studies of hydrology and water resources. In this study, contemporaneous autoregressive moving average (CARMA), CARMA-generalized autoregressive conditional heteroskedasticity (CARMA-GARCH), and Copula-GARCH models were used to simulate potential evapotranspiration at Birjand Station in eastern Iran over the statistical period 1984-2019. The potential evapotranspiration and relative humidity time series were simulated using these three models. The CARMA model has acceptable accuracy for simulating potential evapotranspiration, with a Nash-Sutcliffe efficiency (NSE) coefficient estimated at 0.85. By extracting the residuals of the CARMA model and modeling their variance with GARCH, the CARMA-GARCH model achieved an NSE coefficient of 0.87. Comparing the two models, it was concluded that combining linear and non-linear time series models increases simulation accuracy to some extent. Using the Clayton copula (selected from among the copulas studied), the same values were simulated with the Copula-GARCH model. The results showed that, among the three models, the Copula-GARCH model reduced the root mean square error of the bivariate simulation relative to the CARMA and CARMA-GARCH models by 15% and 13%, respectively. The results also showed that the proposed model reproduces the mean, the first and third quartiles, the 5% and 95% quantiles, and the range of the data better than the CARMA and CARMA-GARCH models.
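The GARCH step used above to model the residual variance can be sketched with a minimal GARCH(1,1) simulator. The parameter values and Gaussian innovations below are illustrative assumptions, not values estimated in the study:

```python
import math
import random

def simulate_garch(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate GARCH(1,1) returns with conditional variance recursion
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}.
    Parameter values and Gaussian innovations are illustrative."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = rng.gauss(0.0, 1.0) * math.sqrt(sigma2)
        returns.append(r)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return returns
```

With alpha + beta < 1 the process is covariance stationary, and the sample variance of a long simulated path settles near omega / (1 - alpha - beta).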
ROBUST STANDARD ERRORS IN SMALL SAMPLES: SOME PRACTICAL ADVICE
We study the properties of heteroskedasticity-robust confidence intervals for regression parameters. We show that confidence intervals based on a degrees-of-freedom correction suggested by Bell and McCaffrey (2002) are a natural extension of a principled approach to the Behrens-Fisher problem. We suggest a further improvement for the case with clustering. We show that these standard errors can lead to substantial improvements in coverage rates even for samples with fifty or more clusters. We recommend that researchers routinely calculate the Bell-McCaffrey degrees-of-freedom adjustment to assess potential problems with conventional robust standard errors.
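For reference, the conventional heteroskedasticity-robust (HC1) standard error that the Bell-McCaffrey adjustment improves upon looks like this for simple regression. This sketch does not implement the degrees-of-freedom correction itself, and the test data are illustrative:

```python
import math

def robust_se_slope(x, y):
    """OLS slope and its HC1 heteroskedasticity-robust standard error for
    simple regression: sandwich variance with residual-based 'meat' and a
    small-sample scaling of n/(n-2)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((a - xbar) ** 2 for a in x)
    b = sum((a - xbar) * (c - ybar) for a, c in zip(x, y)) / sxx
    a0 = ybar - b * xbar
    resid = [c - a0 - b * a for a, c in zip(x, y)]
    meat = sum(((a - xbar) ** 2) * (e ** 2) for a, e in zip(x, resid))
    hc1 = (n / (n - 2)) * meat / (sxx ** 2)
    return b, math.sqrt(hc1)
```

The paper's point is that intervals built from such estimators can undercover in small or unbalanced samples, which the Bell-McCaffrey degrees-of-freedom adjustment mitigates.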
Estimation of high dimensional mean regression in the absence of symmetry and light tail assumptions
Data subject to heavy-tailed errors are commonly encountered in various scientific fields. To address this problem, procedures based on quantile regression and least absolute deviation regression have been developed in recent years. These methods essentially estimate the conditional median (or quantile) function. They can be very different from the conditional mean functions, especially when distributions are asymmetric and heteroscedastic. How can we efficiently estimate the mean regression functions in ultrahigh dimensional settings with existence of only the second moment? To solve this problem, we propose a penalized Huber loss with diverging parameter to reduce biases created by the traditional Huber loss. Such a penalized robust approximate (RA) quadratic loss will be called the RA lasso. In the ultrahigh dimensional setting, where the dimensionality can grow exponentially with the sample size, our results reveal that the RA lasso estimator produces a consistent estimator at the same rate as the optimal rate under the light tail situation. We further study the computational convergence of the RA lasso and show that the composite gradient descent algorithm indeed produces a solution that admits the same optimal rate after sufficient iterations. As a by-product, we also establish the concentration inequality for estimating the population mean when there is only the second moment. We compare the RA lasso with other regularized robust estimators based on quantile regression and least absolute deviation regression. Extensive simulation studies demonstrate the satisfactory finite sample performance of the RA lasso.
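The core robustness mechanism, Huber's loss with a tuning parameter tau, is easiest to see in the one-dimensional location problem. The sketch below illustrates that idea only, not the RA lasso itself; the gradient-descent step size, the data, and the choice of tau are illustrative assumptions:

```python
def huber_mean(data, tau, iters=100):
    """Huber-loss estimate of the mean: minimize the Huber loss by gradient
    descent, which amounts to clipping residuals at +/- tau so a few huge
    outliers cannot dominate the estimate."""
    mu = sum(data) / len(data)  # initialize at the (non-robust) sample mean
    for _ in range(iters):
        # gradient of the Huber loss in mu: average of clipped residuals
        g = sum(max(-tau, min(tau, x - mu)) for x in data) / len(data)
        mu += 0.5 * g  # step toward the zero-gradient point
    return mu
```

As tau grows with the sample size (the "diverging parameter" in the abstract), the estimator approaches the sample mean while retaining protection against heavy tails.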
Inference in Linear Regression Models with Many Covariates and Heteroscedasticity
The linear regression model is widely used in empirical work in economics, statistics, and many other disciplines. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroscedasticity. Our results are obtained using high-dimensional approximations, where the number of included covariates is allowed to grow as fast as the sample size. We find that all of the usual versions of Eicker-White heteroscedasticity consistent standard error estimators for linear models are inconsistent under this asymptotics. We then propose a new heteroscedasticity consistent standard error formula that is fully automatic and robust to both (conditional) heteroscedasticity of unknown form and the inclusion of possibly many covariates. We apply our findings to three settings: parametric linear models with many covariates, linear panel models with many fixed effects, and semiparametric semi-linear models with many technical regressors. Simulation evidence consistent with our theoretical results is provided, and the proposed methods are also illustrated with an empirical application. Supplementary materials for this article are available online.