Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
502 result(s) for "sequential Monte Carlo"
Sequential quasi Monte Carlo
2015
We derive and study sequential quasi Monte Carlo (SQMC), a class of algorithms obtained by introducing QMC point sets in particle filtering. SQMC is related to, and may be seen as an extension of, the array-RQMC algorithm of L'Ecuyer and his colleagues. The complexity of SQMC is O(N log N), where N is the number of simulations at each iteration, and its error rate is smaller than the Monte Carlo rate O_P(N^(-1/2)). The only requirement to implement SQMC algorithms is the ability to write the simulation of particle x_t^n given x_{t-1}^n as a deterministic function of x_{t-1}^n and a fixed number of uniform variates. We show that SQMC is amenable to the same extensions as standard SMC, such as forward smoothing, backward smoothing and unbiased likelihood evaluation. In particular, SQMC may replace SMC within a particle Markov chain Monte Carlo algorithm. We establish several convergence results. We provide numerical evidence that SQMC may significantly outperform SMC in practical scenarios.
Journal Article
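The abstract above turns on one requirement: the particle move must be expressible as a deterministic function of the previous state and a fixed number of uniforms, so that QMC points can be substituted for i.i.d. uniforms. A minimal sketch of such a map, assuming a hypothetical AR(1) state model (the model, `phi`, and `sigma` are illustrative, not from the paper):

```python
from statistics import NormalDist

# Requirement from the abstract: simulate x_t^n from x_{t-1}^n as a
# deterministic function of x_{t-1}^n and a fixed number of uniforms.
# A hypothetical AR(1) state x_t = phi * x_{t-1} + sigma * eps_t meets
# it via the inverse-CDF transform of the Gaussian noise.

def propagate(x_prev: float, u: float, phi: float = 0.9, sigma: float = 1.0) -> float:
    """Deterministic map (x_{t-1}, u) -> x_t; u may be a (R)QMC point."""
    return phi * x_prev + sigma * NormalDist().inv_cdf(u)

# Plain SMC would feed i.i.d. uniforms through this map; SQMC instead
# feeds a randomized QMC point set through the very same function.
x = propagate(0.0, 0.5)  # inv_cdf(0.5) = 0, so the noise term vanishes
```

The same inverse-CDF trick extends to multivariate states by using one uniform per noise coordinate.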
Uniform Ergodicity of the Particle Gibbs Sampler
by
Lindsten, Fredrik
,
Moulines, Eric
,
Douc, Randal
in
Computer simulation
,
conditional sequential Monte Carlo
,
particle Gibbs, particle Markov chain Monte Carlo, particle smoothing, state space models
2015
The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state space model in a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for a fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles, and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non-compact state space.
Journal Article
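The kernel discussed above is built on conditional SMC: a bootstrap particle filter in which one particle is pinned to a reference trajectory and is guaranteed to survive resampling. A minimal sketch, assuming user-supplied model functions `init`, `propagate` and `likelihood` (all hypothetical interfaces, not from any specific library):

```python
import random

def conditional_smc(y, ref, n, init, propagate, likelihood):
    """Conditional SMC sketch: particle 0 is pinned to the reference
    trajectory `ref` and always survives resampling; the remaining
    n-1 particles follow a plain bootstrap filter."""
    x = [ref[0]] + [init() for _ in range(n - 1)]
    w = [1.0 / n] * n
    for t, yt in enumerate(y):
        w = [likelihood(yt, xi) for xi in x]
        total = sum(w)
        w = [wi / total for wi in w]
        if t + 1 < len(y):
            # Resample ancestors for particles 1..n-1; slot 0 keeps ref.
            anc = random.choices(range(n), weights=w, k=n - 1)
            x = [ref[t + 1]] + [propagate(x[a]) for a in anc]
    return x, w
```

In a full particle Gibbs sweep, one would draw a new reference trajectory from the returned weighted particle set and iterate; increasing `n` improves the kernel's mixing, in line with result (i) of the abstract.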
Approximate Bayesian inference for discretely observed continuous-time multi-state models
2019
Inference for continuous-time multi-state models presents considerable computational difficulties when the process is only observed at discrete time points with no additional information about the state transitions. In fact, for a general multi-state Markov model, evaluation of the likelihood function is possible only via intensive numerical approximations. Moreover, in real applications, transitions between states may depend on the time since entry into the current state, and semi-Markov models, where the likelihood function is not available in closed form, should be fitted to the data. Approximate Bayesian Computation (ABC) methods, which rely only on comparisons between simulated and observed summary statistics, represent a solution to intractable likelihood problems and provide alternative algorithms when the likelihood calculation is computationally too costly. In this article we investigate the potential of ABC techniques for multi-state models, both to obtain the posterior distributions of the model parameters and to compare Markov and semi-Markov models. In addition, we exploit ABC methods to estimate and compare hidden Markov and semi-Markov models when observed states are subject to classification errors. We illustrate the performance of the ABC methodology with both simulated data and a real data example.
Journal Article
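The core ABC idea the abstract relies on can be sketched in a few lines: draw parameters from the prior, simulate data, and keep only draws whose summary statistic lands close to the observed one. A minimal rejection-sampler sketch with hypothetical, user-supplied `prior_sample`, `simulate` and `summary` functions (scalar summaries assumed for simplicity):

```python
import random

def abc_rejection(y_obs, prior_sample, simulate, summary, eps, n_draws):
    """ABC rejection sketch: accept a parameter draw when its simulated
    summary statistic is within `eps` of the observed one. The accepted
    draws approximate the ABC posterior."""
    s_obs = summary(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        s_sim = summary(simulate(theta))
        if abs(s_sim - s_obs) <= eps:
            accepted.append(theta)
    return accepted
```

For the multi-state models of the abstract, `simulate` would generate a discretely observed trajectory and `summary` would reduce it to, e.g., transition counts; model comparison proceeds by comparing acceptance rates or posterior model probabilities across candidate simulators.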
Extensible grids: uniform sampling on a space filling curve
2016
We study the properties of points in [0, 1]^d generated by applying Hilbert's space filling curve to uniformly distributed points in [0, 1]. For deterministic sampling we obtain a discrepancy of O(n^(-1/d)) for d ≥ 2. For random stratified sampling, and scrambled van der Corput points, we derive a mean-squared error of O(n^(-1-2/d)) for integration of Lipschitz continuous integrands, when d ≥ 3. These rates are the same as those obtained by sampling on d-dimensional grids, and they show a deterioration with increasing d. The rate for Lipschitz functions is, however, the best possible at that level of smoothness and is better than plain independent and identically distributed sampling. Unlike grids, space filling curve sampling provides points at any desired sample size, and the van der Corput version is extensible in n. We also introduce a class of piecewise Lipschitz functions whose discontinuities are in rectifiable sets described via Minkowski content. Although these functions may have infinite variation in the sense of Hardy and Krause, they can be integrated with a mean-squared error of O(n^(-1-1/d)). It was previously known only that the rate was o(n^(-1)). Other space filling curves, such as those due to Sierpinski and Peano, also attain these rates, whereas upper bounds for the Lebesgue curve are somewhat worse, as if the dimension were log_2(3) times as high.
Journal Article
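The map studied above sends a one-dimensional uniform point through the Hilbert curve into [0, 1]^d. For d = 2 this can be sketched with the standard bit-twiddling index-to-coordinate construction (a finite-order, cell-centered approximation of the curve; the `order` parameter is an assumption of this sketch):

```python
def d2xy(order, d):
    """Map index d in [0, 4**order) to integer cell (x, y) on the 2-D
    Hilbert curve of the given order (standard construction)."""
    x = y = 0
    t = d
    s = 1
    n = 2 ** order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_point(u, order=8):
    """Send u in [0, 1) to a point in [0, 1]^2 via the Hilbert curve;
    a stratified or van der Corput sequence of u's yields the samplers
    analyzed in the abstract."""
    n = 2 ** order
    cx, cy = d2xy(order, int(u * n * n))
    return (cx + 0.5) / n, (cy + 0.5) / n
```

Consecutive indices map to adjacent cells, which is the locality property that makes the discrepancy and mean-squared-error rates of the abstract possible.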
How to Avoid the Curse of Dimensionality: Scalability of Particle Filters with and without Importance Weights
Particle filters are a popular and flexible class of numerical algorithms to solve a large class of nonlinear filtering problems. However, standard particle filters with importance weights have been shown to require a sample size that increases exponentially with the dimension D of the state space in order to achieve a certain performance, which precludes their use in very high-dimensional filtering problems. Here, we focus on the dynamic aspect of this "curse of dimensionality" (COD) in continuous-time filtering, which is caused by the degeneracy of importance weights over time. We show that the degeneracy occurs on a time scale that decreases with increasing D. In order to soften the effects of weight degeneracy, most particle filters use particle resampling and improved proposal functions for the particle motion. We explain why neither of the two can prevent the COD in general. In order to address this fundamental problem, we investigate an existing filtering algorithm based on optimal feedback control that sidesteps the use of importance weights. We use numerical experiments to show that this feedback particle filter (FPF) by [T. Yang, P. G. Mehta, and S. P. Meyn, IEEE Trans. Automat. Control, 58 (2013), pp. 2465-2480] does not exhibit a COD.
Journal Article
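The weight degeneracy described above is easy to reproduce numerically: the effective sample size (ESS) of importance weights collapses as the dimension grows. A toy sketch, assuming an illustrative Gaussian target and a slightly mismatched Gaussian proposal (this demonstrates the COD, not the feedback particle filter itself):

```python
import math
import random

def ess_after_one_step(dim, n=2000, seed=1):
    """Importance-sample a product of `dim` standard normals with a
    wider N(0, 1.2^2) proposal per coordinate (an assumed mismatch),
    and return the effective sample size 1 / sum(w_i^2) of the
    normalized weights. ESS shrinks geometrically in `dim`."""
    rng = random.Random(seed)
    scale = 1.2
    logw = []
    for _ in range(n):
        lw = 0.0
        for _ in range(dim):
            x = rng.gauss(0.0, scale)
            # log target density minus log proposal density (up to constants)
            lw += (-0.5 * x * x) - (-0.5 * (x / scale) ** 2 - math.log(scale))
        logw.append(lw)
    m = max(logw)                     # stabilize before exponentiating
    w = [math.exp(l - m) for l in logw]
    s = sum(w)
    w = [wi / s for wi in w]
    return 1.0 / sum(wi * wi for wi in w)
```

With a per-coordinate ESS factor below one, the overall ESS decays exponentially in the dimension, which is the static analogue of the time-scale collapse the abstract analyzes.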
On Particle Methods for Parameter Estimation in State-Space Models
by
Doucet, Arnaud
,
Singh, Sumeetpal S.
,
Maciejowski, Jan
in
Algorithms
,
Approximation
,
Bayesian inference
2015
Nonlinear non-Gaussian state-space models are ubiquitous in statistics, econometrics, information engineering and signal processing. Particle methods, also known as Sequential Monte Carlo (SMC) methods, provide reliable numerical approximations to the associated state inference problems. However, in most applications, the state-space model of interest also depends on unknown static parameters that need to be estimated from the data. In this context, standard particle methods fail and it is necessary to rely on more sophisticated algorithms. The aim of this paper is to present a comprehensive review of particle methods that have been proposed to perform static parameter estimation in state-space models. We discuss the advantages and limitations of these methods and illustrate their performance on simple models.
Journal Article
Particle Markov chain Monte Carlo methods
by
Doucet, Arnaud
,
Holenstein, Roman
,
Andrieu, Christophe
in
Algorithms
,
Approximation
,
Bayesian analysis
2010
Markov chain Monte Carlo and sequential Monte Carlo methods have emerged as the two main tools to sample from high dimensional probability distributions. Although asymptotic convergence of Markov chain Monte Carlo algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions that are used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show here how it is possible to build efficient high dimensional proposal distributions by using sequential Monte Carlo methods. This allows us not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so. We demonstrate these algorithms on a non-linear state space model and a Lévy-driven stochastic volatility model.
Journal Article
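One of the algorithms introduced in the paper above, particle marginal Metropolis-Hastings, plugs a particle filter's unbiased likelihood estimate into an otherwise standard MH chain. A structural sketch, where `loglik_hat` is assumed to run a particle filter and return a log-likelihood estimate, and `propose` is a symmetric random-walk proposal (both hypothetical interfaces):

```python
import math
import random

def pmmh(y, theta0, n_iters, loglik_hat, log_prior, propose):
    """Particle marginal MH sketch: the estimated log-likelihood
    replaces the intractable exact one in the acceptance ratio; the
    chain still targets the correct posterior because the estimate
    is unbiased."""
    rng = random.Random(0)
    theta, ll = theta0, loglik_hat(theta0, y)
    chain = [theta]
    for _ in range(n_iters):
        cand = propose(theta, rng)
        ll_cand = loglik_hat(cand, y)
        log_alpha = (ll_cand + log_prior(cand)) - (ll + log_prior(theta))
        if math.log(rng.random()) < log_alpha:
            theta, ll = cand, ll_cand   # accept: keep the estimate too
        chain.append(theta)
    return chain
```

Crucially, the stored estimate `ll` is recycled until the next acceptance; re-estimating it at the current point would break the exactness of the chain.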
The Iterated Auxiliary Particle Filter
by
Guarniero, Pieralberto
,
Johansen, Adam M.
,
Lee, Anthony
in
Algorithms
,
Bias
,
Computer simulation
2017
We present an offline, iterated particle filter to facilitate statistical inference in general state space hidden Markov models. Given a model and a sequence of observations, the associated marginal likelihood L is central to likelihood-based inference for unknown statistical parameters. We define a class of "twisted" models: each member is specified by a sequence of positive functions ψ and has an associated ψ-auxiliary particle filter that provides unbiased estimates of L. We identify a sequence ψ* that is optimal in the sense that the ψ*-auxiliary particle filter's estimate of L has zero variance. In practical applications, ψ* is unknown, so the ψ*-auxiliary particle filter cannot straightforwardly be implemented. We use an iterative scheme to approximate ψ* and demonstrate empirically that the resulting iterated auxiliary particle filter significantly outperforms the bootstrap particle filter in challenging settings. Applications include parameter estimation using a particle Markov chain Monte Carlo algorithm.
Journal Article
SMC²: an efficient algorithm for sequential analysis of state space models
by
Jacob, P. E.
,
Papaspiliopoulos, O.
,
Chopin, N.
in
Algorithms
,
Bayesian analysis
,
Bayesian method
2013
We consider the generic problem of performing sequential Bayesian inference in a state space model with observation process y, state process x and fixed parameter θ. An idealized approach would be to apply the iterated batch importance sampling algorithm of Chopin. This is a sequential Monte Carlo algorithm in the θ-dimension, which samples values of θ, reweights these values iteratively by using the likelihood increments p(y_t | y_{1:t-1}, θ), and rejuvenates the θ-particles through a resampling step and a Markov chain Monte Carlo update step. In state space models these likelihood increments are intractable in most cases, but they may be unbiasedly estimated by a particle filter in the x-dimension, for any fixed θ. This motivates the SMC² algorithm proposed in the paper: a sequential Monte Carlo algorithm, defined in the θ-dimension, which propagates and resamples many particle filters in the x-dimension. The filters in the x-dimension are an example of the random weight particle filter. In turn, the particle Markov chain Monte Carlo framework developed by Andrieu and colleagues allows us to design appropriate Markov chain Monte Carlo rejuvenation steps. Thus, the θ-particles target the correct posterior distribution at each iteration t, despite the intractability of the likelihood increments. We explore the applicability of our algorithm in both sequential and non-sequential applications and consider various degrees of freedom, for example increasing dynamically the number of x-particles. We contrast our approach with various competing methods, both conceptually and empirically, through a detailed simulation study based on particularly challenging examples.
Journal Article
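The nested structure described above, an outer SMC over θ-particles each carrying its own inner x-particle filter, can be sketched compactly. Here `pf_step(theta, state, y_t)` is an assumed interface that advances one inner particle filter by a step and returns its new state together with the estimated likelihood increment; resampling and PMCMC rejuvenation of the θ-particles are omitted for brevity:

```python
def smc2(y, prior_sample, pf_step, n_theta):
    """Structural sketch of the SMC^2 idea: reweight each theta-particle
    at every observation by its own particle filter's estimate of
    p_hat(y_t | y_{1:t-1}, theta). `prior_sample` and `pf_step` are
    hypothetical user-supplied functions."""
    thetas = [prior_sample() for _ in range(n_theta)]
    states = [None] * n_theta          # per-theta inner filter state
    w = [1.0] * n_theta
    for yt in y:
        for i in range(n_theta):
            states[i], inc = pf_step(thetas[i], states[i], yt)
            w[i] *= inc                # reweight by the likelihood increment
        total = sum(w)
        w = [wi / total for wi in w]
        # (resampling + MCMC rejuvenation of the thetas would go here)
    return thetas, w
```

Because each `inc` is an unbiased estimate, the weighted θ-particles target the correct posterior at every t, which is the key exactness property claimed in the abstract.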
BAYESIAN INFERENCE FOR MULTIPLE GAUSSIAN GRAPHICAL MODELS WITH APPLICATION TO METABOLIC ASSOCIATION NETWORKS
2017
We investigate the effect of cadmium (a toxic environmental pollutant) on the correlation structure of a number of urinary metabolites using Gaussian graphical models (GGMs). The inferred metabolic associations can provide important information on the physiological state of a metabolic system and insights on complex metabolic relationships. Using the fitted GGMs, we construct differential networks, which highlight significant changes in metabolite interactions under different experimental conditions. The analysis of such metabolic association networks can reveal differences in the underlying biological reactions caused by cadmium exposure. We consider Bayesian inference and propose using the multiplicative (or Chung–Lu random graph) model as a prior on the graphical space. In the multiplicative model, each edge is chosen independently with probability equal to the product of the connectivities of the end nodes. This class of prior is parsimonious yet highly flexible; it can be used to encourage sparsity or graphs with a pre-specified degree distribution when such prior knowledge is available. We extend the multiplicative model to multiple GGMs linking the probability of edge inclusion through logistic regression and demonstrate how this leads to joint inference for multiple GGMs. A sequential Monte Carlo (SMC) algorithm is developed for estimating the posterior distribution of the graphs.
Journal Article
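The multiplicative (Chung-Lu) prior described above is simple to simulate from: each edge is included independently with probability equal to the product of its endpoints' connectivities. A minimal sketch (the `conn` vector of per-node connectivities in [0, 1] is an assumed parameterization):

```python
import random

def sample_multiplicative_graph(conn, rng=None):
    """Draw one undirected graph from the multiplicative prior: edge
    (i, j) is included independently with probability
    conn[i] * conn[j]. Returns the edge set."""
    rng = rng or random.Random()
    n = len(conn)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < conn[i] * conn[j]:
                edges.add((i, j))
    return edges
```

Small connectivities encourage sparsity, while a heterogeneous `conn` vector induces a pre-specified degree distribution; in the multiple-GGM extension of the abstract, these edge probabilities are further linked across conditions through a logistic regression.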