6,563 results for "Parametric inference"
An Empirical Quantile Estimation Approach for Chance-Constrained Nonlinear Optimization Problems
We investigate an empirical quantile estimation approach to solve chance-constrained nonlinear optimization problems. Our approach is based on the reformulation of the chance constraint as an equivalent quantile constraint to provide stronger signals on the gradient. In this approach, the value of the quantile function is estimated empirically from samples drawn from the random parameters, and the gradient of the quantile function is estimated via a finite-difference approximation on top of the quantile-function-value estimation. We establish a convergence theory of this approach within the framework of an augmented Lagrangian method for solving general nonlinear constrained optimization problems. The foundation of the convergence analysis is a concentration property of the empirical quantile process, and the analysis is divided based on whether or not the quantile function is differentiable. In contrast to the sampling-and-smoothing approach used in the literature, the method developed in this paper does not involve any smoothing function; hence the quantile-function gradient approximation is easier to implement, and there are fewer accuracy-control parameters to tune. We demonstrate the effectiveness of this approach and compare it with a smoothing method for the quantile-gradient estimation. Numerical investigation shows that the two approaches are competitive for certain problem instances.
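As a rough illustration of the two estimation steps described in this abstract (a sketch, not the paper's implementation), the snippet below estimates the (1 − α)-quantile of a toy constraint function from Monte Carlo samples and differentiates it by central finite differences; the function g, the sample sizes, and all names are assumptions made here.

```python
import numpy as np

def empirical_quantile(x, xi_samples, g, level=0.95):
    """Empirical level-quantile of g(x, xi) over sampled xi."""
    values = np.array([g(x, xi) for xi in xi_samples])
    return np.quantile(values, level)

def quantile_gradient_fd(x, xi_samples, g, level=0.95, h=1e-3):
    """Central finite-difference gradient of the empirical quantile in x."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        q_plus = empirical_quantile(x + e, xi_samples, g, level)
        q_minus = empirical_quantile(x - e, xi_samples, g, level)
        grad[i] = (q_plus - q_minus) / (2 * h)
    return grad

# Toy chance constraint P(xi . x - 1 <= 0) >= 0.95 with Gaussian xi
# (illustrative data only, not from the paper).
rng = np.random.default_rng(0)
xi_samples = rng.normal(size=(2000, 2))
g = lambda x, xi: xi @ x - 1.0
x0 = np.array([0.5, 0.5])
print(empirical_quantile(x0, xi_samples, g))
print(quantile_gradient_fd(x0, xi_samples, g))
```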
Controlling the reinforcement in Bayesian non-parametric mixture models
The paper deals with the problem of determining the number of components in a mixture model. We take a Bayesian non-parametric approach and adopt a hierarchical model with a suitable non-parametric prior for the latent structure. A commonly used model for such a problem is the mixture of Dirichlet process model. Here, we replace the Dirichlet process with a more general non-parametric prior obtained from a generalized gamma process. The basic feature of this model is that it yields a partition structure for the latent variables which is of Gibbs type. This relates to the well-known (exchangeable) product partition models. Compared with the usual mixture of Dirichlet process model, the advantage of the generalization that we examine lies in the availability of an additional parameter σ belonging to the interval (0,1): it is shown that this parameter greatly influences the clustering behaviour of the model. A value of σ close to 1 generates a large number of clusters, most of which are of small size. A reinforcement mechanism driven by σ then acts on the mass allocation by penalizing clusters of small size and favouring those few groups containing a large number of elements. These features turn out to be very useful in the context of mixture modelling. Since it is difficult to specify the reinforcement rate a priori, it is reasonable to place a prior on σ; the strength of the reinforcement mechanism is then controlled by the data.
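The paper's prior is built from a generalized gamma process; as a minimal sketch of how a σ in (0,1) drives clustering in a Gibbs-type partition, the snippet below simulates the predictive rule of the Pitman-Yor process, another Gibbs-type prior with the same σ parameter. The parameter values and function names are illustrative assumptions, not the paper's model.

```python
import numpy as np

def sample_partition(n, sigma, theta, rng):
    """Simulate a Gibbs-type (Pitman-Yor) random partition of n items."""
    sizes = []                      # current cluster sizes
    for i in range(n):
        # Existing cluster j: weight (n_j - sigma); new cluster: theta + k*sigma.
        weights = [s - sigma for s in sizes] + [theta + len(sizes) * sigma]
        probs = np.array(weights) / (i + theta)
        choice = rng.choice(len(weights), p=probs)
        if choice == len(sizes):
            sizes.append(1)         # open a new cluster
        else:
            sizes[choice] += 1      # reinforce an existing cluster
    return sizes

rng = np.random.default_rng(1)
for sigma in (0.2, 0.8):
    k = [len(sample_partition(500, sigma, theta=1.0, rng=rng)) for _ in range(50)]
    print(f"sigma={sigma}: mean number of clusters ~ {np.mean(k):.1f}")
```

As the abstract describes, larger σ produces many small clusters, which the reinforcement mechanism then thins out in favour of a few large ones.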
Probabilistic multi-resolution scanning for two-sample differences
We propose a multi-resolution scanning approach to identifying two-sample differences. Windows of multiple scales are constructed through nested dyadic partitioning on the sample space, and a hypothesis regarding the two-sample difference is defined on each window. Instead of testing the hypotheses on different windows independently, we adopt a joint graphical model, namely a Markov tree, on the null or alternative states of these hypotheses to incorporate spatial correlation across windows. The induced dependence allows borrowing strength across nearby and nested windows, which we show is critical for detecting high-resolution local differences. We evaluate the performance of the method through simulation and show that it substantially outperforms other state-of-the-art two-sample tests when the two-sample difference is local, involving only a small subset of the data. We then apply it to a flow cytometry data set from immunology, in which it successfully identifies highly local differences. In addition, we show how to control properly for multiple testing in a decision-theoretic approach, as well as how to summarize and report the inferred two-sample difference. We also construct hierarchical extensions of the framework that incorporate adaptivity into the construction of the scanning windows to improve inference further.
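A minimal sketch of the window construction only, assuming a one-dimensional sample space scaled to [0, 1): nested dyadic windows are enumerated and the two samples' counts tabulated per window. The Markov tree coupling and the testing step are not reproduced here, and all names and data are assumptions.

```python
import numpy as np

def dyadic_window_counts(x, y, max_depth=3):
    """Counts of the two samples in every nested dyadic window of [0, 1)."""
    counts = {}
    for depth in range(max_depth + 1):
        for j in range(2 ** depth):
            lo, hi = j / 2 ** depth, (j + 1) / 2 ** depth
            counts[(depth, j)] = (int(np.sum((x >= lo) & (x < hi))),
                                  int(np.sum((y >= lo) & (y < hi))))
    return counts

rng = np.random.default_rng(2)
x = rng.uniform(size=500)            # illustrative data only
y = rng.beta(2.0, 1.0, size=500)     # differs from x mostly near 1
for window, (nx, ny) in sorted(dyadic_window_counts(x, y, max_depth=2).items()):
    print(window, nx, ny)
```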
A time varying approach to the stock return–inflation puzzle
In the large literature on the stock return–inflation puzzle, existing works have used constant-coefficient linear regression models or change point analysis with abrupt change points. Motivated by the time-varying stock return–inflation relationship and the drawbacks of change point analysis, we propose to use recently developed locally stationary models to model stock returns and inflation. Although the model exhibits a non-parametric, time-varying dependence structure over a long time span, it is locally stationary within each small time interval. Detailed empirical analysis is conducted and comparisons are made between various approaches. We find that the stock return–inflation correlation is negative during early sample periods and turns positive during late sample periods, but the turning point differs between the total inflation rate and the core inflation rate.
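As a crude stand-in for the locally stationary analysis (not the paper's estimator), a rolling-window correlation on simulated series shows the kind of sign change in the return–inflation correlation that the abstract reports; all data, window lengths, and names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
T, window = 600, 120
# Simulated series whose true correlation flips sign halfway through.
rho = np.where(np.arange(T) < T // 2, -0.4, 0.4)
z = rng.normal(size=(T, 2))
returns = z[:, 0]
inflation = rho * z[:, 0] + np.sqrt(1 - rho ** 2) * z[:, 1]

rolling_corr = [np.corrcoef(returns[t - window:t], inflation[t - window:t])[0, 1]
                for t in range(window, T)]
print(rolling_corr[0], rolling_corr[-1])   # roughly -0.4 early, +0.4 late
```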
Inference on 3D Procrustes Means: Tree Bole Growth, Rank Deficient Diffusion Tensors and Perturbation Models
The Central Limit Theorem (CLT) for extrinsic and intrinsic means on manifolds is extended to a generalization of Fréchet means. Examples are the Procrustes mean for 3D Kendall shapes as well as a mean introduced by Ziezold. This allows for one-sample tests that were previously not possible, and for a numerical assessment of the 'inconsistency of the Procrustes mean' for a perturbation model and of 'inconsistency' within a model recently proposed for diffusion tensor imaging. It is also shown that the CLT can be extended to mildly rank-deficient diffusion tensors. An application to forestry gives the temporal evolution of Douglas fir tree stems, which tend strongly towards cylinders at early ages and tend away from them with increased competition.
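A heavily simplified sketch of a Procrustes-type mean for 3D landmark configurations, assuming centred data and ignoring scaling and the paper's CLT machinery: each configuration is rotated onto the current mean by an orthogonal Procrustes fit and the aligned configurations are averaged. All names and the synthetic data are assumptions.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def procrustes_mean(configs, iters=20):
    """configs: array (n_shapes, k_landmarks, 3) of centred configurations."""
    mean = configs[0].copy()
    for _ in range(iters):
        aligned = []
        for c in configs:
            R, _ = orthogonal_procrustes(c, mean)   # best rotation: c @ R ~ mean
            aligned.append(c @ R)
        mean = np.mean(aligned, axis=0)
        mean -= mean.mean(axis=0)                   # keep the mean centred
    return mean

rng = np.random.default_rng(4)
base = rng.normal(size=(10, 3))
base -= base.mean(axis=0)
# Rotated, noisy copies of one shape (illustrative data only).
configs = np.array([base @ np.linalg.qr(rng.normal(size=(3, 3)))[0]
                    + 0.05 * rng.normal(size=(10, 3)) for _ in range(30)])
configs -= configs.mean(axis=1, keepdims=True)
print(procrustes_mean(configs).shape)
```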
Non-parametric Bayesian inference of strategies in repeated games
Inferring underlying cooperative and competitive strategies from human behaviour in repeated games is important for accurately characterizing human behaviour and understanding how people reason strategically. Finite automata, a bounded model of computation, have been extensively used to compactly represent strategies for these games and are a standard tool in game theoretic analyses. However, inference over these strategies in repeated games is challenging since the number of possible strategies grows exponentially with the number of repetitions yet behavioural data are often sparse and noisy. As a result, previous approaches start by specifying a finite hypothesis space of automata that does not allow for flexibility. This limitation hinders the discovery of novel strategies that may be used by humans but are not anticipated a priori by current theory. Here we present a new probabilistic model for strategy inference in repeated games by exploiting non-parametric Bayesian modelling. With simulated data, we show that the model is effective at inferring the true strategy rapidly and from limited data, which leads to accurate predictions of future behaviour. When applied to experimental data of human behaviour in a repeated prisoner's dilemma, we uncover strategies of varying complexity and diversity.
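To make the automaton representation concrete, here is a minimal encoding of tit-for-tat, a standard repeated prisoner's dilemma strategy, as a two-state machine; the paper's non-parametric inference over such machines is not shown, and the class and field names are assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Automaton:
    actions: dict        # state -> action ("C" cooperate, "D" defect)
    transitions: dict    # (state, opponent_action) -> next state
    start: str

    def play(self, opponent_moves):
        """Return this strategy's moves against a sequence of opponent moves."""
        state, moves = self.start, []
        for opp in opponent_moves:
            moves.append(self.actions[state])
            state = self.transitions[(state, opp)]
        return moves

# Tit-for-tat: cooperate first, then copy the opponent's previous move.
tit_for_tat = Automaton(
    actions={"C": "C", "D": "D"},
    transitions={("C", "C"): "C", ("C", "D"): "D",
                 ("D", "C"): "C", ("D", "D"): "D"},
    start="C",
)
print(tit_for_tat.play(["C", "D", "D", "C"]))   # ['C', 'C', 'D', 'D']
```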
The horseshoe estimator for sparse signals
This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator’s advantages over existing approaches, including its robustness, adaptivity to different sparsity patterns and analytical tractability. We prove two theorems: one that characterizes the horseshoe estimator’s tail robustness and the other that demonstrates a super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using both real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers obtained by Bayesian model averaging under a point-mass mixture prior.
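A minimal sketch of draws from the horseshoe prior's scale-mixture hierarchy, with β_i ~ N(0, λ_i²τ²) and local scales λ_i following a standard half-Cauchy distribution; τ is fixed here for illustration, and the posterior computation behind the estimator is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
n, tau = 10_000, 1.0                        # tau fixed for illustration only
lam = np.abs(rng.standard_cauchy(size=n))   # half-Cauchy local scales
beta = rng.normal(scale=lam * tau)          # horseshoe prior draws

# The scale mixture yields a spike at zero plus heavy tails:
# many near-zero draws alongside a few very large ones.
print(np.mean(np.abs(beta) < 0.1), np.max(np.abs(beta)))
```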
Non-parametric predictive inference for the validation of credit rating systems
Credit rating or credit scoring systems are important tools for estimating an obligor's creditworthiness and for providing an indication of the obligor's future status. The discriminatory power of a credit rating or credit scoring system refers to its ex ante ability to distinguish between two or more classes of borrowers. One of the most popular tools for validating the power of such models to distinguish between two (or more) classes of borrowers is the receiver operating characteristic (ROC) curve (hypersurface) and its widely used overall summary, the area (hypervolume) under the curve (hypersurface). As the end goal of building such models is to predict and quantify uncertainty about future loans, prediction methods are especially valuable in this context. Non-parametric predictive inference is a promising candidate for such inference, as it is a frequentist statistical method explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The aim of the paper is to introduce non-parametric predictive inference for ROC analysis within a banking context, for which novel results on ROC hypersurfaces for more than three groups are presented. Examples are provided to illustrate the method.
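A minimal sketch of the classical empirical AUC for two groups of scores, the quantity that non-parametric predictive inference brackets with lower and upper probabilities (the NPI bounds themselves are not reproduced); the score data and names are illustrative assumptions.

```python
import numpy as np

def empirical_roc_auc(scores_bad, scores_good):
    """Empirical AUC = P(score_good > score_bad), ties counted as 1/2."""
    diffs = scores_good[:, None] - scores_bad[None, :]
    return float(np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0))

rng = np.random.default_rng(6)
defaulters = rng.normal(0.0, 1.0, size=200)       # illustrative score data
non_defaulters = rng.normal(1.0, 1.0, size=300)   # higher scores on average
print(empirical_roc_auc(defaulters, non_defaulters))   # well above 0.5
```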
Nearly unbiased variable selection under minimax concave penalty
We propose MC+, a fast, continuous, nearly unbiased and accurate method of penalized variable selection in high-dimensional linear regression. The LASSO is fast and continuous, but biased; the bias of the LASSO may prevent consistent variable selection. Subset selection is unbiased but computationally costly. The MC+ has two elements: a minimax concave penalty (MCP) and a penalized linear unbiased selection (PLUS) algorithm. The MCP provides the convexity of the penalized loss in sparse regions to the greatest extent given certain thresholds for variable selection and unbiasedness. The PLUS computes multiple exact local minimizers of a possibly nonconvex penalized loss function in a certain main branch of the graph of critical points of the penalized loss. Its output is a continuous piecewise linear path extending from the origin (infinite penalty) to a least squares solution (zero penalty). We prove that at a universal penalty level, the MC+ has a high probability of matching the signs of the unknowns, and thus of correct selection, without assuming the strong irrepresentable condition required by the LASSO. This selection consistency applies to the case of p ≫ n, and is proved to hold exactly for the MC+ solution among possibly many local minimizers. We prove that the MC+ attains certain minimax convergence rates in probability for the estimation of regression coefficients in ℓr balls. We use the SURE method to derive degrees of freedom and Cp-type risk estimates for general penalized LSE, including the LASSO and MC+ estimators, and prove their unbiasedness. Based on the estimated degrees of freedom, we propose an estimator of the noise level for the proper choice of the penalty level. For full-rank designs and general sub-quadratic penalties, we provide necessary and sufficient conditions for the continuity of the penalized LSE. Simulation results overwhelmingly support our claim of superior variable selection properties and demonstrate the computational efficiency of the proposed method.
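The MCP itself has a simple closed form, ρ(t; λ, γ) = λ|t| − t²/(2γ) for |t| ≤ γλ and γλ²/2 beyond, so the penalty flattens out for large coefficients and the bias vanishes there. Below is a sketch of the penalty function only, not of the PLUS algorithm; the default λ and γ values are illustrative.

```python
import numpy as np

def mcp(t, lam=1.0, gamma=3.0):
    """Minimax concave penalty: LASSO-like near 0, constant for |t| >= gamma*lam."""
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t ** 2 / (2 * gamma),
                    gamma * lam ** 2 / 2)

t = np.linspace(-5, 5, 11)
print(mcp(t))   # grows like lam*|t| near 0, flat at gamma*lam**2/2 in the tails
```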
The Model Confidence Set
This paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS acknowledges the limitations of the data: uninformative data yield an MCS with many models, whereas informative data yield an MCS with only a few models. The MCS procedure does not assume that a particular model is the true model; in fact, the MCS procedure can be used to compare more general objects, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999) and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best regression in terms of in-sample likelihood criteria.
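A heavily simplified sketch of the MCS elimination loop, assuming a crude normal approximation in place of the bootstrap-calibrated equivalence tests the paper actually uses: while the test of equal expected loss rejects, the worst surviving model is dropped. All names, thresholds, and data are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def naive_mcs(losses, alpha=0.05):
    """losses: (T, m) array of per-period losses for m candidate models."""
    survivors = list(range(losses.shape[1]))
    while len(survivors) > 1:
        sub = losses[:, survivors]
        d = sub - sub.mean(axis=1, keepdims=True)   # loss relative to the average
        t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(len(d)))
        worst = int(np.argmax(t))
        # Crude Bonferroni-style stand-in for the equivalence test.
        if stats.norm.sf(t[worst]) * len(survivors) < alpha:
            survivors.pop(worst)                    # eliminate the worst model
        else:
            break
    return survivors

rng = np.random.default_rng(7)
losses = rng.normal(size=(500, 5)) + np.array([0.0, 0.0, 0.0, 0.3, 0.6])
print(naive_mcs(losses))   # the two clearly worse models should drop out
```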