Catalogue Search | MBRL
Explore the vast range of titles available.
48,304 result(s) for "Markov chains"
Dynamics of the Box-Ball System with Random Initial Conditions via Pitman’s Transformation
by Tsujimoto, Satoshi; Croydon, David A.; Sasada, Makiko
in Cellular automata; Dynamical systems and ergodic theory -- Topological dynamics -- Cellular automata msc; Ergodic theory
2023
The box-ball system (BBS), introduced by Takahashi and Satsuma in 1990, is a cellular automaton that exhibits solitonic behaviour. In this article, we study the BBS when started from a random two-sided infinite particle configuration. For such a model, Ferrari et al. recently showed the invariance in distribution of Bernoulli product measures with density strictly less than
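For readers unfamiliar with the model, one time step of the Takahashi–Satsuma box-ball dynamics can be sketched in a few lines. The finite array below truncates the two-sided infinite configuration studied in the paper, and the specific initial state is an illustrative choice, not one of the paper's examples.

```python
# One time step of the Takahashi-Satsuma box-ball system on a finite
# array of boxes (0 = empty, 1 = ball), in the standard "carrier"
# formulation: scan left to right, pick up every ball, and drop one
# into each empty box while the carrier is loaded. The array needs
# enough trailing empty boxes so no ball falls off the end.
def bbs_step(boxes):
    carrier = 0
    out = []
    for b in boxes:
        if b == 1:
            carrier += 1      # pick up the ball
            out.append(0)
        elif carrier > 0:
            carrier -= 1      # drop a ball into the empty box
            out.append(1)
        else:
            out.append(0)
    return out

# A single 3-soliton travels 3 boxes per step, preserving its shape.
state = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
state = bbs_step(state)   # -> [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
```

Iterating `bbs_step` on a configuration with several blocks of balls exhibits the solitonic behaviour the abstract refers to: larger blocks move faster, and blocks pass through each other with only a phase shift.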
Ergodicity of Markov Processes via Nonstandard Analysis
by Duanmu, Haosui; Weiss, William; Rosenthal, Jeffrey S.
in Ergodic theory; Markov processes; Nonstandard mathematical analysis
2021
The Markov chain ergodic theorem is well understood when either the time line or the state space is discrete. However, no comparably clear result exists for general state space continuous-time Markov processes. Using methods from mathematical logic and nonstandard analysis, we introduce a class of hyperfinite Markov processes, namely general Markov processes which behave like finite state space discrete-time Markov processes. We show that, under moderate conditions, the transition probabilities of hyperfinite Markov processes align with the transition probabilities of standard Markov processes. The Markov chain ergodic theorem for hyperfinite Markov processes then implies the Markov chain ergodic theorem for general state space continuous-time Markov processes.
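The discrete backbone of this argument, the finite state space, discrete-time ergodic theorem, is easy to illustrate directly: for an irreducible aperiodic chain, the powers of the transition matrix converge to a rank-one matrix whose rows all equal the stationary distribution. The two-state chain below is a made-up example, not one from the paper.

```python
# Finite-state Markov chain ergodic theorem, numerically: P^n converges
# to a matrix whose rows are the stationary distribution pi (pi P = pi).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.9, 0.1],
     [0.2, 0.8]]   # irreducible, aperiodic two-state chain

Pn = P
for _ in range(200):
    Pn = mat_mul(Pn, P)

# Both rows of P^201 approach pi = (2/3, 1/3), the solution of pi P = pi;
# the convergence is geometric at rate |lambda_2| = 0.7.
pi = Pn[0]
```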
Posterior Summarization in Bayesian Phylogenetics Using Tracer 1.7
2018
Bayesian inference of phylogeny using Markov chain Monte Carlo (MCMC) plays a central role in understanding evolutionary history from molecular sequence data. Visualizing and analyzing the MCMC-generated samples from the posterior distribution is a key step in any non-trivial Bayesian inference. We present the software package Tracer (version 1.7) for visualizing and analyzing the MCMC trace files generated through Bayesian phylogenetic inference. Tracer provides kernel density estimation, multivariate visualization, demographic trajectory reconstruction, conditional posterior distribution summary, and more. Tracer is open-source and available at http://beast.community/tracer.
Journal Article
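One diagnostic of the kind Tracer reports for an MCMC trace is the effective sample size (ESS). The sketch below estimates ESS from empirical autocorrelations of a synthetic AR(1) trace; the truncate-at-first-negative rule is a common simple choice, not necessarily Tracer's exact algorithm.

```python
# Effective sample size of an autocorrelated MCMC trace:
# ESS = N / (1 + 2 * sum of positive-lag autocorrelations).
import random

random.seed(1)
rho = 0.9
trace = [0.0]
for _ in range(20000):
    trace.append(rho * trace[-1] + random.gauss(0.0, 1.0))

def autocorr(x, lag):
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n
    return cov / var

def ess(x, max_lag=200):
    s = 0.0
    for lag in range(1, max_lag):
        r = autocorr(x, lag)
        if r < 0:          # truncate at the first negative estimate
            break
        s += r
    return len(x) / (1.0 + 2.0 * s)

# For AR(1) with rho = 0.9, ESS/N should be near (1-rho)/(1+rho) ~ 0.053.
n_eff = ess(trace)
```

A low ESS relative to the chain length is the usual signal, in Tracer or any other trace analyzer, that the sampler is mixing slowly and the posterior summaries are less precise than the raw sample count suggests.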
Unbiased Markov chain Monte Carlo methods with couplings
by O’Leary, John; Jacob, Pierre E.; Atchadé, Yves F.
in Algorithms; Approximation; Bayesian analysis
2020
Markov chain Monte Carlo (MCMC) methods provide consistent approximations of integrals as the number of iterations goes to ∞. MCMC estimators are generally biased after any fixed number of iterations. We propose to remove this bias by using couplings of Markov chains together with a telescopic sum argument of Glynn and Rhee. The resulting unbiased estimators can be computed independently in parallel. We discuss practical couplings for popular MCMC algorithms. We establish the theoretical validity of the estimators proposed and study their efficiency relative to the underlying MCMC algorithms. Finally, we illustrate the performance and limitations of the method on toy examples, on an Ising model around its critical temperature, on a high dimensional variable-selection problem, and on an approximation of the cut distribution arising in Bayesian inference for models made of multiple modules.
Journal Article
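The telescopic-sum construction can be sketched on a toy two-state chain: run a coupled pair of chains with one lagged by a step, and correct h(X₀) by the differences accumulated until the chains meet. The chain, the common-random-number coupling, and the test function below are illustrative choices, not the paper's examples.

```python
# Unbiased estimation of a stationary expectation via coupled chains
# and the Glynn-Rhee telescoping sum:
#   H = h(X_0) + sum_{t=1}^{tau-1} (h(X_t) - h(Y_{t-1})),
# where tau is the first t with X_t = Y_{t-1}.
import random

random.seed(7)
P = [[0.9, 0.1],
     [0.2, 0.8]]   # two-state chain with stationary pi = (2/3, 1/3)

def step(x, u):
    # inverse-CDF step; feeding both chains the SAME u couples them
    # faithfully (once equal, they stay equal).
    return 0 if u < P[x][0] else 1

def h(x):
    return 1.0 if x == 0 else 0.0

def unbiased_estimate():
    x0 = random.randint(0, 1)          # X_0 from an initial distribution
    y = random.randint(0, 1)           # Y_0 from the same distribution
    x = step(x0, random.random())      # advance X one extra step
    est = h(x0)
    while x != y:                      # loop until X_t == Y_{t-1}
        est += h(x) - h(y)
        u = random.random()
        x, y = step(x, u), step(y, u)  # one coupled transition
    return est

# Averaging independent replicates gives pi(h) = 2/3 without any
# burn-in bias, and the replicates parallelize trivially.
N = 20000
avg = sum(unbiased_estimate() for _ in range(N)) / N
```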
The Bouncy Particle Sampler: A Nonreversible Rejection-Free Markov Chain Monte Carlo Method
by Doucet, Arnaud; Bouchard-Côté, Alexandre; Vollmer, Sebastian J.
in Algorithms; Alternative approaches; Bayesian analysis
2018
Many Markov chain Monte Carlo techniques currently available rely on discrete-time reversible Markov processes whose transition kernels are variations of the Metropolis-Hastings algorithm. We explore and generalize an alternative scheme recently introduced in the physics literature (Peters and de With 2012) where the target distribution is explored using a continuous-time nonreversible piecewise-deterministic Markov process. In the Metropolis-Hastings algorithm, a trial move to a region of lower target density, equivalently of higher "energy," than the current state can be rejected with positive probability. In this alternative approach, a particle moves along straight lines around the space and, when facing a high energy barrier, it is not rejected but its path is modified by bouncing against this barrier. By reformulating this algorithm using inhomogeneous Poisson processes, we exploit standard sampling techniques to simulate exactly this Markov process in a wide range of scenarios of interest. Additionally, when the target distribution is given by a product of factors dependent only on subsets of the state variables, such as the posterior distribution associated with a probabilistic graphical model, this method can be modified to take advantage of this structure by allowing computationally cheaper "local" bounces, which only involve the state variables associated with a factor, while the other state variables keep on evolving. In this context, by leveraging techniques from chemical kinetics, we propose several computationally efficient implementations. Experimentally, this new class of Markov chain Monte Carlo schemes compares favorably to state-of-the-art methods on various Bayesian inference tasks, including for high-dimensional models and large datasets. Supplementary materials for this article are available online.
Journal Article
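For a standard Gaussian target the bounce event times can be simulated exactly, so a minimal BPS sketch needs no thinning. Everything below (the 2-D target, refreshment rate, and trajectory length) is an illustrative toy, not the authors' implementation; expectations are estimated by integrating along the piecewise-linear path in closed form.

```python
# Bouncy Particle Sampler for a 2-D standard Gaussian, U(x) = |x|^2/2.
# The bounce rate along x + t*v is max(0, <x + t*v, v>) = max(0, a + b*t),
# so the first event time solves an explicit quadratic.
import math, random

random.seed(3)

def bounce_time(x, v):
    a = x[0] * v[0] + x[1] * v[1]
    b = v[0] * v[0] + v[1] * v[1]
    e = -math.log(random.random())          # Exp(1) threshold
    if a < 0:                               # rate is zero until t = -a/b
        return -a / b + math.sqrt(2 * e / b)
    return (-a + math.sqrt(a * a + 2 * b * e)) / b

def reflect(x, v):
    # reflect v against grad U(x) = x at the bounce location
    n2 = x[0] * x[0] + x[1] * x[1]
    dot = v[0] * x[0] + v[1] * x[1]
    return (v[0] - 2 * dot * x[0] / n2, v[1] - 2 * dot * x[1] / n2)

refresh_rate = 1.0
x, v = (1.0, 0.0), (random.gauss(0, 1), random.gauss(0, 1))
total_t = acc_x1 = acc_x1sq = 0.0
for _ in range(20000):
    t = min(bounce_time(x, v), tr := -math.log(random.random()) / refresh_rate)
    # exact time integrals of x1 and x1^2 over the linear segment
    acc_x1 += x[0] * t + 0.5 * v[0] * t * t
    acc_x1sq += x[0] ** 2 * t + x[0] * v[0] * t * t + v[0] ** 2 * t ** 3 / 3.0
    total_t += t
    x = (x[0] + t * v[0], x[1] + t * v[1])
    if t < tr:
        v = reflect(x, v)                   # bounce off the energy barrier
    else:
        v = (random.gauss(0, 1), random.gauss(0, 1))  # refresh velocity

mean_x1 = acc_x1 / total_t   # should be near E[X1] = 0
var_x1 = acc_x1sq / total_t  # should be near E[X1^2] = 1
```

The occasional velocity refreshment is what makes the sampler ergodic; without it the particle can get trapped on lower-dimensional orbits.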
Variational Inference: A Review for Statisticians
by Blei, David M.; Kucukelbir, Alp; McAuliffe, Jon D.
in Algorithms; Americans; artificial intelligence
2017
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this article, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find a member of that family which is close to the target density. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this article is to catalyze statistical research on this class of algorithms. Supplementary materials for this article are available online.
Journal Article
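The "posit a family, then optimize" idea is easiest to see on a toy where the coordinate updates are closed-form: mean-field VI on a 2-D Gaussian target with known precision matrix (a textbook example, not the article's mixture-of-Gaussians one). The factorized approximation recovers the exact means but understates the marginal variances.

```python
# Coordinate-ascent mean-field VI (CAVI) on a 2-D Gaussian target
# N(mu, Lambda^{-1}) with factorized q(z1) q(z2). The optimal factors
# are Gaussian, q_i = N(m_i, 1 / Lambda_ii), with coupled mean updates.
mu = (0.5, -0.5)
L = [[2.0, 0.9],
     [0.9, 2.0]]   # precision matrix Lambda (illustrative values)

m1, m2 = 0.0, 0.0
for _ in range(100):
    m1 = mu[0] - (L[0][1] / L[0][0]) * (m2 - mu[1])
    m2 = mu[1] - (L[1][0] / L[1][1]) * (m1 - mu[0])

# CAVI converges to the exact means (0.5, -0.5). The variational
# variance 1/Lambda_11 = 0.5 understates the true marginal variance
# (Lambda^{-1})_11 = 2 / (4 - 0.81) ~ 0.627 -- the well-known
# overconfidence of mean-field KL(q || p) approximations.
```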
The probabilistic model checker Storm
by Junges, Sebastian; Volk, Matthias; Hensel, Christian
in Algorithms; Awards & honors; Command languages
2022
We present the probabilistic model checker Storm. Storm supports the analysis of discrete- and continuous-time variants of both Markov chains and Markov decision processes. Storm has three major distinguishing features. It supports multiple input languages for Markov models, including the Jani and Prism modeling languages, dynamic fault trees, generalized stochastic Petri nets, and the probabilistic guarded command language. It has a modular setup in which solvers and symbolic engines can easily be exchanged. Its Python API allows for rapid prototyping by encapsulating Storm’s fast and scalable algorithms. This paper reports on the main features of Storm and explains how to effectively use them. A description is provided of the main distinguishing functionalities of Storm. Finally, an empirical evaluation of different configurations of Storm on the QComp 2019 benchmark set is presented.
Journal Article
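The core computation behind a reachability query such as P=? [F "target"] can be shown without Storm itself: reachability probabilities in a Markov chain are the least fixed point of a simple linear system, solvable by value iteration. The gambler's-ruin chain below is a toy illustration and does not use Storm's API.

```python
# Value iteration for reachability in a discrete-time Markov chain:
# x_s = sum_t P(s, t) * x_t, with x = 1 on target states and x = 0 on
# states that cannot reach the target. Toy chain: symmetric gambler's
# ruin on {0, 1, 2, 3}; target = state 3, ruin = state 0.
P = {
    0: {0: 1.0},             # absorbing (ruin)
    1: {0: 0.5, 2: 0.5},
    2: {1: 0.5, 3: 0.5},
    3: {3: 1.0},             # absorbing target
}

x = {s: (1.0 if s == 3 else 0.0) for s in P}
for _ in range(200):
    x = {s: (1.0 if s == 3 else
             0.0 if s == 0 else
             sum(p * x[t] for t, p in P[s].items()))
         for s in P}

# exact reachability probabilities: x[1] = 1/3, x[2] = 2/3
```

Tools like Storm solve exactly this kind of system (and its min/max variant for Markov decision processes) at scale, with symbolic engines and exchangeable solvers handling models far too large for dense iteration.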
Communication-Efficient Distributed Statistical Inference
by Yang, Yun; Lee, Jason D.; Jordan, Michael I.
in Algorithms; Bayesian analysis; Bayesian theory
2019
We present a communication-efficient surrogate likelihood (CSL) framework for solving distributed statistical inference problems. CSL provides a communication-efficient surrogate to the global likelihood that can be used for low-dimensional estimation, high-dimensional regularized estimation, and Bayesian inference. For low-dimensional estimation, CSL provably improves upon naive averaging schemes and facilitates the construction of confidence intervals. For high-dimensional regularized estimation, CSL leads to a minimax-optimal estimator with controlled communication cost. For Bayesian inference, CSL can be used to form a communication-efficient quasi-posterior distribution that converges to the true posterior. This quasi-posterior procedure significantly improves the computational efficiency of Markov chain Monte Carlo (MCMC) algorithms even in a nondistributed setting. We present both theoretical analysis and experiments to explore the properties of the CSL approximation. Supplementary materials for this article are available online.
Journal Article
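A one-dimensional logistic-regression sketch of the CSL recipe, with made-up heterogeneous local datasets: each machine fits locally, the estimates are averaged, and one machine then minimizes its local loss shifted by the difference between its local gradient and the communicated global gradient at the averaged estimate. The data, parameters, and plain gradient-descent optimizer are all illustrative assumptions, not the paper's setup.

```python
# Communication-efficient surrogate likelihood (CSL), 1-D logistic
# regression sketch: minimize L_1(theta) - <grad L_1(theta_bar)
# - grad L(theta_bar), theta> on machine 1 after one gradient round.
import math, random

random.seed(5)

def make_data(n, shift):
    # covariates shifted per machine to mimic heterogeneous local data
    data = []
    for _ in range(n):
        xv = random.gauss(shift, 1.0)
        p = 1.0 / (1.0 + math.exp(-xv))        # true theta = 1
        data.append((xv, 1.0 if random.random() < p else -1.0))
    return data

machines = [make_data(500, s) for s in (-0.5, 0.0, 0.5, 1.0)]

def grad(data, theta):
    # gradient of the average logistic loss log(1 + exp(-y x theta))
    return sum(-y * xv / (1.0 + math.exp(y * xv * theta))
               for xv, y in data) / len(data)

def minimize(g, theta=0.0, lr=0.5, steps=2000):
    for _ in range(steps):
        theta -= lr * g(theta)
    return theta

# Round 1: each machine fits locally; the estimates are averaged.
theta_bar = sum(minimize(lambda t, d=d: grad(d, t)) for d in machines) / 4

# Round 2: one gradient is communicated; machine 1 solves the surrogate.
shift_term = (grad(machines[0], theta_bar)
              - sum(grad(d, theta_bar) for d in machines) / 4)
theta_csl = minimize(lambda t: grad(machines[0], t) - shift_term)
# theta_csl approximately solves the *global* score equation near theta_bar
```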
Scalable importance tempering and Bayesian variable selection
2019
We propose a Monte Carlo algorithm to sample from high dimensional probability distributions that combines Markov chain Monte Carlo and importance sampling. We provide a careful theoretical analysis, including guarantees on robustness to high dimensionality, explicit comparison with standard Markov chain Monte Carlo methods and illustrations of the potential improvements in efficiency. Simple and concrete intuition is provided for when the novel scheme is expected to outperform standard schemes. When applied to Bayesian variable-selection problems, the novel algorithm is orders of magnitude more efficient than available alternative sampling schemes and enables fast and reliable fully Bayesian inferences with tens of thousands of regressors.
Journal Article
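A simple instance of combining MCMC with importance sampling in the spirit of the abstract (a generic tempered-importance-sampling sketch, not the authors' algorithm): run Metropolis-Hastings on the flattened target π^β, where mode-hopping is easy, then reweight by π^(1−β) in a self-normalized estimator. The bimodal target and tuning constants below are toy choices.

```python
# Tempered importance sampling: MH targets pi^beta, weights pi^(1-beta)
# correct back to pi. Target: equal mixture of N(-3, 1) and N(3, 1).
import math, random

random.seed(11)

def log_pi(xv):
    a = -0.5 * (xv + 3.0) ** 2
    b = -0.5 * (xv - 3.0) ** 2
    m = max(a, b)
    return (m + math.log(math.exp(a - m) + math.exp(b - m))
            - math.log(2.0) - 0.5 * math.log(2 * math.pi))

beta = 0.5                    # flattened target pi^beta is easier to traverse
xv, lp = 0.0, log_pi(0.0)
num = den = 0.0
for _ in range(100000):
    prop = xv + random.gauss(0.0, 2.0)
    lp_prop = log_pi(prop)
    if math.log(random.random()) < beta * (lp_prop - lp):   # MH on pi^beta
        xv, lp = prop, lp_prop
    w = math.exp((1.0 - beta) * lp)   # importance weight pi / pi^beta
    num += w * xv * xv
    den += w

est = num / den   # self-normalized estimate of E_pi[X^2] = 9 + 1 = 10
```

The trade-off the paper analyzes is visible even here: smaller β mixes faster across modes but inflates the variance of the importance weights.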
The Iterated Auxiliary Particle Filter
by Guarniero, Pieralberto; Johansen, Adam M.; Lee, Anthony
in Algorithms; Bias; Computer simulation
2017
We present an offline, iterated particle filter to facilitate statistical inference in general state space hidden Markov models. Given a model and a sequence of observations, the associated marginal likelihood L is central to likelihood-based inference for unknown statistical parameters. We define a class of "twisted" models: each member is specified by a sequence of positive functions ψ and has an associated ψ-auxiliary particle filter that provides unbiased estimates of L. We identify a sequence ψ* that is optimal in the sense that the ψ*-auxiliary particle filter's estimate of L has zero variance. In practical applications, ψ* is unknown so the ψ*-auxiliary particle filter cannot straightforwardly be implemented. We use an iterative scheme to approximate ψ* and demonstrate empirically that the resulting iterated auxiliary particle filter significantly outperforms the bootstrap particle filter in challenging settings. Applications include parameter estimation using a particle Markov chain Monte Carlo algorithm.
Journal Article
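The baseline this paper improves on, the bootstrap particle filter, fits in a few lines for a linear-Gaussian state space model, where the exact marginal likelihood from a Kalman filter is available for comparison. The model and its parameters below are illustrative choices, not the paper's experiments.

```python
# Bootstrap particle filter log-likelihood estimate for the model
#   x_t = phi * x_{t-1} + N(0, sx^2),   y_t = x_t + N(0, sy^2),
# checked against the exact Kalman-filter log-likelihood.
import math, random

random.seed(2)
phi, sx, sy = 0.9, 1.0, 1.0
T = 50

# simulate T observations, starting from the stationary distribution
ys = []
xv = random.gauss(0.0, sx / math.sqrt(1 - phi ** 2))
for _ in range(T):
    xv = phi * xv + random.gauss(0.0, sx)
    ys.append(xv + random.gauss(0.0, sy))

def log_norm(v, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)

def bootstrap_pf(ys, n=2000):
    parts = [random.gauss(0.0, sx / math.sqrt(1 - phi ** 2)) for _ in range(n)]
    ll = 0.0
    for y in ys:
        parts = [phi * p + random.gauss(0.0, sx) for p in parts]  # propagate
        ws = [math.exp(log_norm(y, p, sy ** 2)) for p in parts]   # weight
        ll += math.log(sum(ws) / n)            # unbiased likelihood factor
        parts = random.choices(parts, weights=ws, k=n)            # resample
    return ll

def kalman_loglik(ys):
    m, P = 0.0, sx ** 2 / (1 - phi ** 2)
    ll = 0.0
    for y in ys:
        m, P = phi * m, phi ** 2 * P + sx ** 2   # predict
        S = P + sy ** 2                          # innovation variance
        ll += log_norm(y, m, S)
        K = P / S                                # Kalman gain, then update
        m, P = m + K * (y - m), (1 - K) * P
    return ll

ll_pf = bootstrap_pf(ys)
ll_kf = kalman_loglik(ys)   # exact value for comparison
```

The variance of such log-likelihood estimates is exactly what the iterated auxiliary particle filter attacks: twisting the model by a good ψ sequence drives the estimator's variance toward zero where the bootstrap filter's proposals are poor.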