Catalogue Search | MBRL
9 result(s) for "ICMS Highlights: Advances in MCMC"
An Adaptive Parallel Tempering Algorithm
by Vihola, Matti; Moulines, Eric; Miasojedow, Błażej
in Adaptive MCMC; Algorithms; ICMS Highlights: Advances in MCMC
2013
Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail. The mixing properties of the sampler depend strongly on the choice of tuning parameters, such as the temperature schedule and the proposal distribution used for local exploration. We propose an adaptive algorithm with a fixed number of temperatures which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically. We prove the convergence of the adaptation and a strong law of large numbers for the algorithm under general conditions. As a side result, we also prove the geometric ergodicity of the parallel tempering algorithm. We illustrate the performance of our method with examples. Our empirical findings indicate that the algorithm can cope well with different kinds of scenarios without prior tuning. Supplementary materials, including the proofs and the Matlab implementation, are available online.
Journal Article
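The swap-and-adapt mechanics described in the abstract above can be sketched as follows. This is a generic illustration only, not the authors' algorithm: the bimodal toy target, the fixed per-level proposal scales, the geometric ladder parameterization, and the Robbins-Monro adaptation rate are all illustrative assumptions (the paper also adapts the local proposal scales).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Illustrative bimodal target: equal mixture of N(-3, 1) and N(3, 1).
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

K, n_iter, target_rate = 4, 5000, 0.234
rho = np.zeros(K - 1)    # log-spacings of the inverse-temperature ladder
x = np.full(K, -3.0)     # one chain state per temperature level
samples = []

for n in range(1, n_iter + 1):
    # Inverse temperatures 1 = beta_0 > beta_1 > ... from the spacings rho.
    beta = np.exp(-np.concatenate(([0.0], np.cumsum(np.exp(rho)))))
    # Random-walk Metropolis move at each temperature (fixed scales here).
    for k in range(K):
        prop = x[k] + rng.normal(0.0, 2.0 / np.sqrt(beta[k]))
        if np.log(rng.uniform()) < beta[k] * (log_target(prop) - log_target(x[k])):
            x[k] = prop
    # Propose a swap between a random adjacent pair, then nudge the spacing
    # toward the target swap-acceptance rate (Robbins-Monro step).
    k = int(rng.integers(K - 1))
    log_acc = (beta[k] - beta[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
    acc = min(1.0, float(np.exp(log_acc)))
    if rng.uniform() < acc:
        x[k], x[k + 1] = x[k + 1], x[k]
    rho[k] += n ** -0.6 * (acc - target_rate)
    samples.append(x[0])    # the beta = 1 chain targets the true distribution

samples = np.array(samples)
print(float(np.max(samples)) > 1.0 and float(np.min(samples)) < -1.0)
```

With the hot chains exploring an almost-flat tempered target, swap moves carry states from both modes down to the cold chain, which a single random-walk sampler started at -3 would struggle to achieve.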
Data Augmentation for Diffusions
by Roberts, Gareth O.; Papaspiliopoulos, Omiros; Stramer, Osnat
in Bayesian analysis; Diffusion; Euler discretization
2013
The problem of formal likelihood-based (either classical or Bayesian) inference for discretely observed multidimensional diffusions is particularly challenging. In principle, this involves data augmentation of the observation data to give representations of the entire diffusion trajectory. Most currently proposed methodology splits broadly into two classes: either through the discretization of idealized approaches for the continuous-time diffusion setup or through the application of standard finite-dimensional methodologies to a discretization of the diffusion model. The connections between these approaches have not been well studied. This article provides a unified framework that brings together these approaches, demonstrating connections, and in some cases surprising differences. As a result, we provide, for the first time, theoretical justification for the various methods of imputing missing data. The inference problems are particularly challenging for irreducible diffusions, and our framework is correspondingly more complex in that case. Therefore, we treat the reducible and irreducible cases differently within the article. Supplementary materials for the article are available online.
Journal Article
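The basic imputation idea in the abstract above, filling in the unobserved path between discrete observations so that an Euler likelihood of the complete trajectory can be evaluated, can be sketched on a toy example. Everything here is an illustrative assumption rather than the article's framework: the Ornstein-Uhlenbeck diffusion, the Brownian-bridge proposal for the imputed segments, and all parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative OU diffusion dX = -theta * X dt + sigma dW.
theta, sigma, dt_obs, m = 1.0, 0.5, 0.5, 10   # m - 1 imputed points per interval

def euler_loglik(path, dt, theta, sigma):
    # Euler log-likelihood of a fine path: increments are conditionally Gaussian.
    drift = -theta * path[:-1]
    incr = np.diff(path)
    var = sigma ** 2 * dt
    return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                        - 0.5 * (incr - drift * dt) ** 2 / var))

# Coarse observations, simulated here exactly from the OU transition density.
n_obs = 20
obs = np.zeros(n_obs + 1)
a = np.exp(-theta * dt_obs)
v = sigma ** 2 * (1 - a ** 2) / (2 * theta)
for i in range(n_obs):
    obs[i + 1] = a * obs[i] + np.sqrt(v) * rng.normal()

# Data-augmentation step: fill each interval with a Brownian-bridge draw,
# giving one complete fine-grained trajectory pinned at the observations.
dt = dt_obs / m
fine = [obs[0]]
for i in range(n_obs):
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), m))))
    t = np.arange(m + 1) * dt
    bridge = sigma * (w - t / dt_obs * w[-1])     # zero at both endpoints
    seg = obs[i] + (obs[i + 1] - obs[i]) * t / dt_obs + bridge
    fine.extend(seg[1:])
fine = np.array(fine)

ll = euler_loglik(fine, dt, theta, sigma)
print(fine.shape)
```

A full sampler would alternate such imputation draws with parameter updates; the article's point is that how this discretization interacts with the idealized continuous-time scheme matters.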
Annealed Importance Sampling Reversible Jump MCMC Algorithms
2013
We develop a methodology to efficiently implement the reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms of Green, applicable for example to model selection inference in a Bayesian framework, which builds on the "dragging fast variables" ideas of Neal. We call such algorithms annealed importance sampling reversible jump (aisRJ). The proposed procedures can be thought of as being exact approximations of idealized RJ algorithms which in a model selection problem would sample the model labels only, but cannot be implemented. Central to the methodology is the idea of bridging different models with fictitious intermediate models, whose role is to introduce smooth intermodel transitions and, as we shall see, improve performance. Efficiency of the resulting algorithms is demonstrated on two standard model selection problems and we show that despite the additional computational effort incurred, the approach can be highly competitive computationally. Supplementary materials for the article are available online.
Journal Article
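The building block of the method above, annealed importance sampling across a ladder of fictitious intermediate distributions, can be illustrated in its simplest form: estimating a ratio of normalizing constants between two unnormalized densities. This is Neal-style AIS on a toy Gaussian pair, not the paper's model-jumping version; the densities, ladder size, and proposal scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_f0(x):
    # Tractable start: standard normal kernel exp(-x^2 / 2), Z0 = sqrt(2*pi).
    return -0.5 * x ** 2

def log_f1(x):
    # Unnormalized target: N(1, 0.5^2) kernel, so Z1 = sqrt(2*pi) * 0.5.
    return -2.0 * (x - 1.0) ** 2

n_temps, n_particles = 100, 4000
betas = np.linspace(0.0, 1.0, n_temps + 1)

x = rng.normal(size=n_particles)      # exact draws from f0 / Z0
logw = np.zeros(n_particles)
for j in range(1, n_temps + 1):
    b_prev, b = betas[j - 1], betas[j]
    # Weight increment for one rung of the ladder f_b = f0^(1-b) * f1^b.
    logw += (b - b_prev) * (log_f1(x) - log_f0(x))
    # Two random-walk Metropolis moves leaving f_b invariant.
    for _ in range(2):
        prop = x + rng.normal(0.0, 1.0, n_particles)
        log_r = ((1.0 - b) * (log_f0(prop) - log_f0(x))
                 + b * (log_f1(prop) - log_f1(x)))
        accept = np.log(rng.uniform(size=n_particles)) < log_r
        x = np.where(accept, prop, x)

# Unbiased estimate of Z1 / Z0; the true ratio here is 0.5.
ratio = float(np.mean(np.exp(logw)))
print(round(ratio, 2))
```

In aisRJ, the same annealed weights are computed along a bridge of intermediate models and used inside the acceptance ratio of a between-model jump, which is what makes the idealized marginal algorithm implementable.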
Evidence and Bayes Factor Estimation for Gibbs Random Fields
2013
Gibbs random fields play an important role in statistics. However, they are complicated to work with due to the intractability of the likelihood function, and much work has been devoted to finding computational algorithms that allow Bayesian inference to be conducted for such so-called doubly intractable distributions. This article extends this work and addresses the issue of estimating the evidence and Bayes factor for such models. The approach that we develop is shown to yield good performance. Supplementary materials for this article are available online.
Journal Article
On a Class of Shrinkage Priors for Covariance Matrix Estimation
2013
We propose a flexible class of models based on scale mixture of uniform distributions to construct shrinkage priors for covariance matrix estimation. This new class of priors enjoys a number of advantages over the traditional scale mixture of normal priors, including its simplicity and flexibility in characterizing the prior density. We also exhibit a simple, easy to implement Gibbs sampler for posterior simulation, which leads to efficient estimation in high-dimensional problems. We first discuss the theory and computational details of this new approach and then extend the basic model to a new class of multivariate conditional autoregressive models for analyzing multivariate areal data. The proposed spatial model flexibly characterizes both the spatial and the outcome correlation structures at an appealing computational cost. Examples consisting of both synthetic and real-world data show the utility of this new framework in terms of robust estimation as well as improved predictive performance. Supplementary materials are available online.
Journal Article
Computational Aspects of Bayesian Spectral Density Estimation
by Chopin, N.; Liseo, B.; Rousseau, J.
in Approximation; Bayesian analysis; Estimating techniques
2013
Gaussian time-series models are often specified through their spectral density. Such models present several computational challenges, in particular because of the nonsparse nature of the covariance matrix. We derive a fast approximation of the likelihood for such models. We propose to sample from the approximate posterior (i.e., the prior times the approximate likelihood), and then to recover the exact posterior through importance sampling. We show that the variance of the importance sampling weights vanishes as the sample size goes to infinity. We explain why the approximate posterior may typically be multimodal, and we derive a Sequential Monte Carlo sampler based on an annealing sequence to sample from that target distribution. Performance of the overall approach is evaluated on simulated and real datasets. In addition, for one real-world dataset, we provide some numerical evidence that a Bayesian approach to semiparametric estimation of spectral density may provide more reasonable results than its frequentist counterparts. The article comes with supplementary materials, available online, that contain an Appendix with a proof of our main Theorem, a Python package that implements the proposed procedure, and the Ethernet dataset.
Journal Article
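The correction step in the abstract above, sampling from an approximate posterior and recovering the exact posterior by importance sampling, can be sketched on a toy problem. This is a generic illustration of the reweighting idea, not the spectral-density model of the paper: here the exact likelihood is Student-t and the tractable approximation is Gaussian, and all data and parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data; exact likelihood is t with nu df around theta, approximated by
# a Gaussian with matched variance s2 = nu / (nu - 2); flat-ish normal prior.
y = np.array([1.2, -0.4, 2.1, 0.7, 1.5, 0.3, 1.9, 1.1])
nu, s2, prior_var = 3.0, 3.0, 100.0

def log_lik_exact(theta):
    r2 = (y[None, :] - theta[:, None]) ** 2
    return np.sum(-0.5 * (nu + 1.0) * np.log1p(r2 / nu), axis=1)

def log_lik_approx(theta):
    r2 = (y[None, :] - theta[:, None]) ** 2
    return np.sum(-0.5 * r2 / s2, axis=1)

# The approximate posterior is Gaussian (conjugate), so draw from it directly;
# in the paper this role is played by an SMC sampler on a multimodal target.
post_var = 1.0 / (len(y) / s2 + 1.0 / prior_var)
post_mean = post_var * y.sum() / s2
theta = post_mean + np.sqrt(post_var) * rng.normal(size=20000)

# Importance-sampling correction: reweight by exact / approximate likelihood
# (normalizing constants cancel in the self-normalized estimator).
logw = log_lik_exact(theta) - log_lik_approx(theta)
w = np.exp(logw - logw.max())
est = float(np.sum(w * theta) / np.sum(w))
ess = float(np.sum(w) ** 2 / np.sum(w ** 2))
print(round(est, 2), round(ess))
```

The effective sample size `ess` is the practical diagnostic: the paper's theoretical result is precisely that the variance of these weights vanishes as the sample size grows, so the correction becomes essentially free.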
Bayesian Inference on a Mixture Model With Spatial Dependence
by Cucala, Lionel; Marin, Jean-Michel
in Approximation; Auxiliary Markov chain Monte Carlo schemes; Bayesian analysis
2013
We introduce a new technique to select the number of components of a mixture model with spatial dependence. The method consists of an estimation of the integrated completed likelihood based on a Laplace approximation and a new technique to deal with the intractability of the normalizing constant of the hidden Potts model. Our proposal is applied to a real satellite image. Supplementary materials are available online.
Journal Article
A Bayesian Semiparametric Multiplicative Error Model With an Application to Realized Volatility
by Mira, Antonietta; Solgi, Reza
in Bayesian analysis; Dirichlet problem; Dirichlet process mixture model
2013
A semiparametric multiplicative error model (MEM) is proposed. In traditional MEM, the innovations are typically assumed to be Gamma distributed (with one free parameter that ensures unit mean of the innovations and thus identifiability of the model); however, empirical investigations reveal the inappropriateness of this choice. In the proposed approach, the conditional mean of the time series is modeled parametrically, while its conditional distribution is modeled nonparametrically by a Dirichlet process mixture of Gamma distributions. Bayesian inference is performed using Markov chain Monte Carlo simulation. This model is applied to the time series of daily realized volatility of some indices, and is compared to similar parametric models available in the literature. Our simulations and empirical studies show better predictive performance, flexibility, and robustness to misspecification of our Bayesian semiparametric approach. Supplemental materials for this article are available online.
Journal Article
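The traditional parametric MEM that the abstract takes as its starting point is easy to simulate, which makes the unit-mean Gamma identifiability constraint concrete. This is a minimal sketch of that baseline model only (the paper replaces the Gamma innovation distribution with a Dirichlet process mixture); the GARCH-like recursion and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Parametric MEM: x_t = mu_t * eps_t with eps_t ~ Gamma(a, 1/a), so that
# E[eps_t] = 1 (the identifiability constraint), and a GARCH-style
# recursion for the conditional mean mu_t.
omega, alpha, beta, a = 0.1, 0.2, 0.7, 4.0
T = 5000
x = np.empty(T)
mu = np.empty(T)
mu[0], x[0] = 1.0, 1.0
for t in range(1, T):
    mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
    x[t] = mu[t] * rng.gamma(a, 1.0 / a)   # shape a, scale 1/a: unit mean

# Unconditional mean of x_t is omega / (1 - alpha - beta) = 1 here.
print(round(float(x.mean()), 2))
```

Because `x_t` is positive by construction, the model suits realized volatility series; the semiparametric extension keeps the `mu_t` recursion and only relaxes the distribution of `eps_t`.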
Thank God That Regressing Y on X is Not the Same as Regressing X on Y: Direct and Indirect Residual Augmentations
by Xu, Xiaojin; Meng, Xiao-Li; Yu, Yaming
in Algorithms; Ancillary-Sufficient Interweaving Strategy (ASIS); Conditional augmentation
2013
What does regressing Y on X versus regressing X on Y have to do with Markov chain Monte Carlo (MCMC)? It turns out that many strategies for speeding up data augmentation (DA) type algorithms can be understood as fostering independence or "de-correlation" between a regression function and the corresponding residual, thereby reducing or even eliminating dependence among MCMC iterates. There are two general classes of algorithms, those corresponding to regressing parameters on augmented data/auxiliary variables and those that operate the other way around. The interweaving strategy of Yu and Meng provides a general recipe to automatically take advantage of both, and it is the existence of two different types of residuals that makes the interweaving strategy seemingly magical in some cases and promising in general. The concept of residuals, which depends on the actual data, also highlights the potential for substantial improvements when DA schemes are allowed to depend on the observed data. At the same time, there is an intriguing phase transition type of phenomenon regarding choosing (partially) residual augmentation schemes, reminding us once more of the prevailing issue of trade-off between robustness and efficiency. This article reports on these latest theoretical investigations (using a class of normal/independence models) and empirical findings (using posterior sampling for a probit regression) in the search for effective residual augmentations, and ultimately more MCMC algorithms, that meet the 3-S criterion: simple, stable, and speedy. Supplementary materials for the article are available online.
Journal Article
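The interweaving recipe referenced in the abstract above can be sketched on a standard toy normal model: X ~ N(theta, V) with Y | X ~ N(X, 1) and a flat prior on theta, where X is the sufficient augmentation and Xtilde = X - theta is the ancillary one. This is a minimal illustration of the ASIS pattern, not the article's normal/independence or probit analyses; the single observation and the value of V are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: X ~ N(theta, V), Y | X ~ N(X, 1), flat prior, one observation Y.
# Sufficient augmentation: X itself. Ancillary augmentation: Xt = X - theta.
Y, V, n_iter = 2.0, 10.0, 4000
theta = 0.0
draws = np.empty(n_iter)
prec = 1.0 / V + 1.0        # posterior precision of X given theta and Y

for i in range(n_iter):
    # Step 1: impute X | theta, Y (shared by both augmentation schemes).
    X = (theta / V + Y) / prec + rng.normal(0.0, np.sqrt(1.0 / prec))
    # Step 2 (sufficient scheme): theta | X under the flat prior.
    theta = X + rng.normal(0.0, np.sqrt(V))
    # Step 3: re-express the same augmentation in ancillary form.
    Xt = X - theta
    # Step 4 (ancillary scheme): theta | Xt, Y, since Y = theta + Xt + noise.
    theta = Y - Xt + rng.normal(0.0, 1.0)
    draws[i] = theta

# The exact posterior is theta | Y ~ N(Y, 1 + V) = N(2, 11); interweaving the
# two residual types makes the sampler mix far faster than either DA alone.
print(round(float(draws.mean()), 1), round(float(draws.var()), 1))
```

Each DA scheme on its own slows down badly for large V (sufficient) or small V (ancillary); alternating through both residuals is what earns the "seemingly magical" description in the abstract.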