67 result(s) for "Misspecification testing"
Independent Component Analysis via Distance Covariance
This article introduces a novel statistical framework for independent component analysis (ICA) of multivariate data. We propose methodology for estimating mutually independent components, and a versatile resampling-based procedure for inference, including misspecification testing. Independent components are estimated by combining a nonparametric probability integral transformation with a generalized nonparametric whitening method based on distance covariance that simultaneously minimizes all forms of dependence among the components. We prove the consistency of our estimator under minimal regularity conditions and detail conditions for consistency under model misspecification, all while placing assumptions on the observations directly, not on the latent components. U-statistics of certain Euclidean distances between sample elements are combined to construct a test statistic for mutually independent components. The proposed measures and tests are based on both necessary and sufficient conditions for mutual independence. We demonstrate the improvements of the proposed method over several competing methods in simulation studies, and we apply the proposed ICA approach to two real examples and contrast it with principal component analysis.
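The distance-covariance machinery this abstract relies on can be sketched in a few lines of NumPy. This is a minimal illustration of the sample statistic and a permutation test for independence, not the authors' estimator or resampling procedure; `dcov` and `perm_test` are hypothetical helper names introduced here.

```python
import numpy as np

def dcov(x, y):
    """Sample distance covariance between two 1-D samples of equal length."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)                   # pairwise distance matrices
    b = np.abs(y - y.T)
    # Double-center each distance matrix (subtract row/column means, add grand mean).
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    # The mean of the elementwise product is the (nonnegative) squared dCov.
    return np.sqrt(np.mean(A * B))

def perm_test(x, y, n_perm=500, seed=0):
    """Permutation p-value for H0: x and y are independent."""
    rng = np.random.default_rng(seed)
    stat = dcov(x, y)
    null = [dcov(x, rng.permutation(y)) for _ in range(n_perm)]
    return float(np.mean([s >= stat for s in null]))
```

Distance covariance is zero if and only if the samples are independent, which is why tests built on it (unlike correlation-based checks) have power against nonlinear dependence such as y = x².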
Frequentist Model-based Statistical Induction and the Replication Crisis
The prevailing view in the current replication crisis literature is that the non-replicability of published empirical studies (a) confirms their untrustworthiness, and (b) stems primarily from the abuse of frequentist testing in general, and of the p-value in particular. The main objective of the paper is to challenge both of these claims and make a case that (a) non-replicability does not necessarily imply untrustworthiness and (b) the abuses of frequentist testing are only symptomatic of a much broader problem relating to the uninformed and recipe-like implementation of statistical modeling and inference that contributes significantly to untrustworthy evidence. It is argued that the crucial contributors to the untrustworthiness relate (directly or indirectly) to the inadequate understanding and implementation of the stipulations required for model-based statistical induction to give rise to trustworthy evidence. These preconditions relate to securing reliable ‘learning from data’ about phenomena of interest and pertain to the nature, origin, and justification of genuine empirical knowledge, as opposed to beliefs, conjectures, and opinions.
Error statistical modeling and inference: Where methodology meets ontology
In empirical modeling, an important desideratum for deeming theoretical entities and processes real is that they be reproducible in a statistical sense. Present-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments of the two types of models. The key to untangling them is the realization that behind every substantive model there is a statistical model that pertains exclusively to the probabilistic assumptions imposed on the data. It is not that the methodology determines whether to be a realist about entities and processes in a substantive field. It is rather that the substantive and statistical models refer to different entities and processes, and therefore call for different criteria of adequacy.
Replication to assess statistical adequacy
"Statistical adequacy" is an important prerequisite for securing reliable inference in empirical modelling. This paper argues for more emphasis on replication that specifically assesses whether the results reported in empirical studies are based on statistically adequate models, i.e., models with valid underpinning statistical assumptions that pass relevant diagnostic tests for misspecification. A replication plan is briefly outlined to illustrate what this would involve in practice in the context of a specific study by Acemoglu, Gallego and Robinson (Institutions, human capital, and development, Annual Review of Economics, 2014).
Nonparametric neural network modeling of hedonic prices in the housing market
This article addresses the contribution to hedonic modeling of a nonparametric approach based on artificial neural network (ANN) regressions. ANNs provide consistent estimates for the hedonic price of each attribute and permit a number of hypotheses on the hedonic price relationship to be tested nonparametrically. In particular, we exploit results by Stinchcombe and White (Econom Theory 14:295–324, 1998 ) in order to carry out misspecification testing in linear and semiloglinear hedonic models. The same approach directly applies to testing misspecification of any parametric specification for the hedonic relationship. A nonparametric significance test for the variables in the hedonic model is also proposed. The test extends the approach developed by Racine (J Bus Econ Stat 15(3):369–378, 1997 ) in kernel-based nonparametric testing to ANN-based inference. The finite sample performance of the proposed tests is analyzed through Monte Carlo experiments, and simulation-based algorithms for computation of the null distribution of the tests are proposed. Then, the performance of three classes of regression models—linear, semi-log, and ANNs—applied to hedonic price modeling in a Spanish regional housing market is compared. Our results indicate the presence of nonlinear behavior, as predicted by economic theory, with the ANN-based tests detecting statistically significant evidence of misspecification—both in the linear and the semilog specifications—and ANN regressions providing moderate improvement of predictive performance.
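As a toy stand-in for the idea — not the Stinchcombe-White test itself — one can compare the out-of-sample fit of a linear specification against a small ANN: a clear ANN advantage is informal evidence of functional-form misspecification in the linear hedonic model. The attribute names and data-generating process below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
a1 = rng.normal(size=n)                    # e.g. standardized floor area (hypothetical)
a2 = rng.normal(size=n)                    # e.g. standardized age (hypothetical)
price = a1 + a2 + a1 * a2 + 0.1 * rng.normal(size=n)  # truth has an interaction term

X = np.column_stack([a1, a2])
Xtr, Xte, ytr, yte = X[:700], X[700:], price[:700], price[700:]

# Linear hedonic specification vs a small single-hidden-layer ANN.
lin = LinearRegression().fit(Xtr, ytr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                                 random_state=0)).fit(Xtr, ytr)
print(f"linear R2={lin.score(Xte, yte):.3f}, ANN R2={ann.score(Xte, yte):.3f}")
```

The article's formal tests go further: they embed the linear model in an ANN alternative and derive a proper null distribution, rather than eyeballing a fit gap as done here.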
Building better econometric models using cross section and panel data
Many empirical researchers yearn for an econometric model that better explains their data. Yet these researchers rarely pursue this objective for fear of the statistical complexities involved in specifying that model. This book is intended to alleviate those anxieties by providing a practical methodology that anyone familiar with regression analysis can employ--a methodology that will yield a model that is both more informative and a better representation of the data. Most empirical researchers have been taught in their undergraduate econometrics courses about statistical misspecification testing and respecification. But the impact these techniques can have on the inference that is drawn from their results is often overlooked. In academia, students are typically expected to explore their research hypotheses within the context of theoretical model specification while ignoring the underlying statistics. Company executives and managers, by contrast, seek results that are immediately comprehensible and applicable, while remaining indifferent to the underlying properties and econometric calculations that lead to these results. This book outlines simple, practical procedures that can be used to specify a better model; that is to say, a model that better explains the data. Such procedures employ purely statistical techniques performed upon a publicly available data set, which allows readers to follow along at every stage of the procedure. Using the econometric software Stata (though most other statistical software packages can be used as well), this book shows how to test for model misspecification, and how to respecify these models in a practical way that not only enhances the inference drawn from the results, but adds a level of robustness that can increase the confidence a researcher has in the output that has been generated. By following this procedure, researchers will be led to a better, more finely tuned empirical model that yields better results.
Misspecification testing: a comprehensive approach
Misspecification tests of individual assumptions underlying regression models often lead to erroneous conclusions regarding the source of misspecification. Monte Carlo experiments demonstrate that a comprehensive set of individual and joint tests reduces the likelihood of such conclusions. A practical testing strategy is proposed, and suggestions are made regarding its implementation.
Building Better Econometric Models Using Cross Section and Panel Data
Many empirical researchers yearn for an econometric model that better explains their data. Yet these researchers rarely pursue this objective for fear of the statistical complexities involved in specifying that model. This book is intended to alleviate those anxieties by providing a practical methodology that anyone familiar with regression analysis can employ--a methodology that will yield a model that is both more informative and a better representation of the data. This book outlines simple, practical procedures that can be used to specify a model that better explains the data. Such procedures employ purely statistical techniques.
System misspecification testing and structural change in the demand for meats
A misspecification testing strategy designed to ensure that the statistical assumptions underlying a system of equations are appropriate is outlined. The system tests take into account information in, and interactions between, all equations in the system and can be used in a wide variety of applications where systems of equations are estimated. The system testing approach is demonstrated by modeling U.S. consumer demand for meats. The example illustrates how the approach can be used to disentangle issues regarding structural change and other forms of model misspecification.
Time Series and Dynamic Models
This chapter contains sections titled: Introduction; A Probabilistic Framework for Time Series; Autoregressive Models: Univariate; Moving Average Models; ARMA Type Models: Multivariate; Time Series and Linear Regression Models; Conclusion.