5,059 result(s) for "Zeitreihenanalyse"
Applied Time Series Econometrics
Time series econometrics is a rapidly evolving field. Particularly, the cointegration revolution has had a substantial impact on applied analysis. Hence, no textbook has managed to cover the full range of methods in current use and explain how to proceed in applied domains. This gap in the literature motivates the present volume. The methods are sketched out, reminding the reader of the ideas underlying them and giving sufficient background for empirical work. The treatment can also be used as a textbook for a course on applied time series econometrics. Topics include: unit root and cointegration analysis, structural vector autoregressions, conditional heteroskedasticity and nonlinear and nonparametric time series models. Crucial to empirical work is the software that is available for analysis. New methodology is typically only gradually incorporated into existing software packages. Therefore a flexible Java interface has been created, allowing readers to replicate the applications and conduct their own analyses.
Modeling of covid-19 in Indonesia using vector autoregressive integrated moving average
The coronavirus outbreak became a worldwide concern at the end of December 2019. One gauge of how deadly the disease is, the Case Fatality Rate (CFR), is the ratio of the number of deaths due to covid-19 to the number of confirmed covid-19 cases. However, studies of the relationship between the number of cases and the number of deaths caused by covid-19 in Indonesia are rarely done. A time series method that can capture this relationship is Vector Autoregressive Integrated Moving Average (VARIMA) analysis. Data used in this model must be stationary, so the series are transformed by differencing and taking logarithms to resolve non-stationarity. The results show that the model that fulfills all assumptions and has the smallest AICC value is VARIMA(1,1,1). In this model, the number of cases is influenced by both the number of cases and the number of deaths in the previous period; likewise, the number of deaths depends on both the number of deaths and the number of cases from the preceding period.
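The pipeline the abstract describes (log transform, first-difference, then model the stationary series) can be sketched with synthetic data. The series below are hypothetical stand-ins for the Indonesian case and death counts, and only the VAR(1) part is estimated by least squares; the MA(1) term of a full VARIMA(1,1,1) needs an iterative estimator such as statsmodels' VARMAX.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical cumulative series standing in for the covid-19 case and
# death counts in the abstract (illustrative data only).
cases = 100.0 + np.cumsum(rng.poisson(50, size=200))
deaths = 10.0 + np.cumsum(rng.poisson(5, size=200))
y = np.column_stack([cases, deaths])

# Step 1: log transform and first-difference to address non-stationarity
# (the "I(1)" part of VARIMA(1,1,1)).
z = np.diff(np.log(y), axis=0)          # shape (199, 2)

# Step 2: fit the VAR(1) part by least squares:
#   z_t = c + A @ z_{t-1} + e_t
X = np.hstack([np.ones((len(z) - 1, 1)), z[:-1]])   # regressors [1, z_{t-1}]
B, *_ = np.linalg.lstsq(X, z[1:], rcond=None)
c, A = B[0], B[1:].T                                # intercept, coefficient matrix

# Each row of A shows today's growth in one series depending on BOTH lagged
# series, the cross-dependence the abstract reports.
print(A.shape)
```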
The Evolution of Corporate Cash
We study time-series and cross-firm variation in corporate cash holdings from 1920 to 2014. The recent increase in cash is not unique in magnitude. However, the recent divergence between average and aggregate cash is new and entirely driven by a shift in cash policies of newly public firms, whereas within-firm changes have been negative or flat since the 1940s. Cross-sectional relations between cash holdings and firm characteristics are stable throughout the century, though characteristics explain little of the trends in aggregate cash. Macroeconomic conditions, corporate profitability and investment, and (since 2000) repatriation taxes explain aggregate cash over the last century.
Illiquidity and Stock Returns II
Lou and Shu decompose Amihud’s illiquidity measure (ILLIQ), proposing that its component, the average of inverse dollar trading volume (IDVOL), is sufficient to explain the pricing of illiquidity. Their decomposition misses a component of ILLIQ that is related to illiquidity. We find that this component affects stock returns significantly, both in the cross-section and in the time series. We show that the ILLIQ premium is significantly positive after controlling for mispricing, sentiment, and seasonality. In addition, the aggregate market ILLIQ outperforms market IDVOL in estimating the effect of market illiquidity shocks on realized stock returns.
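The decomposition at issue can be illustrated with a covariance identity: E[|r|/DVOL] = E[|r|]·E[1/DVOL] + Cov(|r|, 1/DVOL), so ILLIQ carries a component beyond average inverse dollar volume. The sketch below uses hypothetical single-stock data; the paper's exact component definitions may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily data for one stock over a trading month (illustrative).
abs_ret = np.abs(rng.normal(0.0, 0.02, size=21))     # absolute return |r_t|
dvol = rng.lognormal(mean=15.0, sigma=0.5, size=21)  # dollar volume DVOL_t

# Amihud's measure: average of |return| over dollar volume.
illiq = np.mean(abs_ret / dvol)

# The component Lou and Shu emphasize: average inverse dollar volume.
idvol = np.mean(1.0 / dvol)

# Sample analogue of Cov(|r|, 1/DVOL): the piece of ILLIQ that IDVOL
# (scaled by average |return|) alone does not capture.
cov_term = np.mean(abs_ret / dvol) - np.mean(abs_ret) * idvol
print(illiq, idvol, cov_term)
```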
Anomalies and False Rejections
We use information from over 2 million trading strategies randomly generated using real data and from strategies that survive the publication process to infer the statistical properties of the set of strategies that could have been studied by researchers. Using this set, we compute t-statistic thresholds that control for multiple hypothesis testing, when searching for anomalies, at 3.8 and 3.4 for time-series and cross-sectional regressions, respectively. We estimate the expected proportion of false rejections that researchers would produce if they failed to account for multiple hypothesis testing to be about 45%.
Factors That Fit the Time Series and Cross-Section of Stock Returns
We propose a new method for estimating latent asset pricing factors that fit the time series and cross-section of expected returns. Our estimator generalizes principal component analysis (PCA) by including a penalty on the pricing error in expected returns. Our approach finds weak factors with high Sharpe ratios that PCA cannot detect. We discover five factors with economic meaning that explain well the cross-section and time series of characteristic-sorted portfolio returns. The out-of-sample maximum Sharpe ratio of our factors is twice as large as with PCA with substantially smaller pricing errors. Our factors imply that a significant amount of characteristic information is redundant.
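The penalized-PCA idea can be sketched as an eigendecomposition of the return second-moment matrix plus a penalty on mean returns. The data, the penalty weight `gamma`, and the number of factors below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, gamma = 240, 25, 10.0    # months, test portfolios, penalty weight (assumed)
X = rng.normal(0.005, 0.05, size=(T, N))   # hypothetical excess returns

# Standard PCA on returns works with the second-moment matrix X'X / T.
second_moment = X.T @ X / T

# Adding a penalty gamma on the cross-section of mean returns tilts the
# factors toward also pricing expected returns (a sketch of the penalized
# estimator the abstract describes; gamma = -1 recovers covariance PCA,
# gamma = 0 the plain second-moment version).
xbar = X.mean(axis=0)
penalized = second_moment + gamma * np.outer(xbar, xbar)

eigvals, eigvecs = np.linalg.eigh(penalized)   # ascending eigenvalues
k = 5
weights = eigvecs[:, -k:]                      # top-k eigenvectors
factors = X @ weights                          # estimated latent factors
print(factors.shape)
```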
… and the Cross-Section of Expected Returns
Hundreds of papers and factors attempt to explain the cross-section of expected returns. Given this extensive data mining, it does not make sense to use the usual criteria for establishing significance. Which hurdle should be used for current research? Our paper introduces a new multiple testing framework and provides historical cutoffs from the first empirical tests in 1967 to today. A new factor needs to clear a much higher hurdle, with a t-statistic greater than 3.0. We argue that most claimed research findings in financial economics are likely false.
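Why the hurdle rises with the number of factors tried can be illustrated with a simple Bonferroni correction — a far cruder adjustment than the paper's framework, with M = 300 tests and alpha = 0.05 assumed purely for illustration.

```python
from statistics import NormalDist

# Bonferroni-style hurdle: with m candidate factors tested at level alpha
# (two-sided), each needs p < alpha / m, i.e. |t| > z_{1 - alpha/(2m)}.
# This shows mechanically why the t-statistic hurdle must grow as the
# number of tried factors grows.
def bonferroni_t_hurdle(m, alpha=0.05):
    return NormalDist().inv_cdf(1 - alpha / (2 * m))

print(round(bonferroni_t_hurdle(1), 2))    # single test: the familiar 1.96
print(round(bonferroni_t_hurdle(300), 2))  # hundreds of tests: close to 3.8
```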
Why You Should Never Use the Hodrick-Prescott Filter
Here’s why. (a) The Hodrick-Prescott (HP) filter introduces spurious dynamic relations that have no basis in the underlying data-generating process. (b) Filtered values at the end of the sample are very different from those in the middle and are also characterized by spurious dynamics. (c) A statistical formalization of the problem typically produces values for the smoothing parameter vastly at odds with common practice. (d) There is a better alternative. A regression of the variable at date t on the four most recent values as of date t − h achieves all the objectives sought by users of the HP filter with none of its drawbacks.
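The proposed alternative — regressing the date-t value on the four most recent observations as of date t − h, and taking the residual as the cyclical component — can be sketched as follows. The series is synthetic and h = 8 (a common choice for quarterly data) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical quarterly log-GDP-like series: drift plus random walk.
t = np.arange(120)
y = 0.005 * t + np.cumsum(rng.normal(0, 0.01, size=120))

h = 8    # two-year horizon for quarterly data (assumed)
p = 4    # the "four most recent values as of date t - h"

# Regress y_t on a constant and y_{t-h}, y_{t-h-1}, y_{t-h-2}, y_{t-h-3};
# the regression residual replaces the HP filter's cyclical component.
rows = range(h + p - 1, len(y))
X = np.array([[1.0] + [y[i - h - j] for j in range(p)] for i in rows])
Y = y[h + p - 1:]
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
cycle = Y - X @ beta
print(cycle.shape)
```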