Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
514 result(s) for "Variational inference"
Study of Variational Inference for Flexible Distributed Probabilistic Robotics
by Bak, Thomas; Pedersen, Rasmus; Damgaard, Malte Rørmose
in Algorithms; Approximation; distributed robotics
2022
By combining stochastic variational inference with message-passing algorithms, we show how to solve the highly complex problem of navigation and avoidance in distributed multi-robot systems in a computationally tractable manner, allowing online implementation. Moreover, the proposed variational method lends itself to more flexible solutions than prior methodologies. Furthermore, the derived method is verified both through simulations with multiple mobile robots and a real-world experiment with two mobile robots. In both cases, the robots share the operating space and need to cross each other's paths multiple times without colliding.
Journal Article
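The abstract above rests on stochastic variational inference, which maximizes the evidence lower bound (ELBO). As a minimal sketch, not the paper's method, the ELBO for a toy conjugate model can be estimated by Monte Carlo with reparameterized samples; the model and all names here are illustrative:

```python
import numpy as np

def elbo(x, m, s, n_samples=100_000, seed=0):
    """Monte Carlo ELBO for the toy model
       z ~ N(0, 1), x | z ~ N(z, 1), with q(z) = N(m, s^2)."""
    rng = np.random.default_rng(seed)
    z = m + s * rng.standard_normal(n_samples)   # reparameterized samples
    log_prior = -0.5 * (np.log(2 * np.pi) + z**2)
    log_lik   = -0.5 * (np.log(2 * np.pi) + (x - z)**2)
    log_q     = -0.5 * (np.log(2 * np.pi * s**2) + ((z - m) / s)**2)
    return np.mean(log_prior + log_lik - log_q)

def log_evidence(x):
    """Exact log evidence: integrating out z gives x ~ N(0, 2)."""
    return -0.5 * (np.log(2 * np.pi * 2.0) + x**2 / 2.0)
```

Because the toy model is conjugate, the exact posterior is N(x/2, 1/2); plugging it in as q makes the ELBO equal the exact log evidence, while any other q gives a strictly smaller value (the gap is the KL divergence from q to the posterior).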
Improving the Classification Effectiveness of Intrusion Detection by Using Improved Conditional Variational AutoEncoder and Deep Neural Network
by Yang, Yanqing; Yang, Yixian; Zheng, Kangfeng
in Artificial intelligence; deep neural network; generator network
2019
Intrusion detection systems play an important role in preventing security threats and protecting networks from attacks. However, with the emergence of unknown attacks and imbalanced samples, traditional machine learning methods suffer from lower detection rates and higher false positive rates. We propose a novel intrusion detection model that combines an improved conditional variational AutoEncoder (ICVAE) with a deep neural network (DNN), namely ICVAE-DNN. The ICVAE is used to learn and explore potential sparse representations between network data features and classes. The trained ICVAE decoder generates new attack samples according to the specified intrusion categories to balance the training data and increase the diversity of training samples, thereby improving the detection rate of the imbalanced attacks. The trained ICVAE encoder is used not only to automatically reduce the data dimensionality, but also to initialize the weights of the DNN hidden layers, so that the DNN can easily achieve global optimization through backpropagation and fine-tuning. The NSL-KDD and UNSW-NB15 datasets are used to evaluate the performance of the ICVAE-DNN. The ICVAE-DNN is superior to three well-known oversampling methods in data augmentation. Moreover, the ICVAE-DNN outperforms six well-known models in detection performance, and is more effective in detecting minority attacks and unknown attacks. In addition, the ICVAE-DNN also shows better overall accuracy, detection rate, and false positive rate than nine state-of-the-art intrusion detection methods.
Journal Article
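The ICVAE in this entry is a conditional variational autoencoder. The paper's architecture is not reproduced here, but the two ingredients every VAE-style model shares, reparameterized sampling and an analytic KL term against a standard-normal prior, can be sketched in a few lines (a hedged illustration, not the ICVAE itself; in the conditional variant the class label would additionally be concatenated to the encoder and decoder inputs):

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Analytic KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
```

The training loss of such a model is the reconstruction error of the decoded sample plus this KL term; the KL is zero exactly when the encoder outputs mu = 0 and logvar = 0.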
Markov blankets, information geometry and stochastic thermodynamics
2020
This paper considers the relationship between thermodynamics, information and inference. In particular, it explores the thermodynamic concomitants of belief updating, under a variational (free energy) principle for self-organization. In brief, any (weakly mixing) random dynamical system that possesses a Markov blanket—i.e. a separation of internal and external states—is equipped with an information geometry. This means that internal states parametrize a probability density over external states. Furthermore, at non-equilibrium steady-state, the flow of internal states can be construed as a gradient flow on a quantity known in statistics as Bayesian model evidence. In short, there is a natural Bayesian mechanics for any system that possesses a Markov blanket. Crucially, this means that there is an explicit link between the inference performed by internal states and their energetics—as characterized by their stochastic thermodynamics. This article is part of the theme issue ‘Harmonizing energy-autonomous computing and intelligence’.
Journal Article
A simple new approach to variable selection in regression, with application to genetic fine mapping
2020
We introduce a simple new approach to variable selection in linear regression, with a particular focus on quantifying uncertainty in which variables should be selected. The approach is based on a new model—the ‘sum of single effects’ model, called ‘SuSiE’—which comes from writing the sparse vector of regression coefficients as a sum of ‘single-effect’ vectors, each with one non-zero element. We also introduce a corresponding new fitting procedure—iterative Bayesian stepwise selection (IBSS)—which is a Bayesian analogue of stepwise selection methods. IBSS shares the computational simplicity and speed of traditional stepwise methods but, instead of selecting a single variable at each step, IBSS computes a distribution on variables that captures uncertainty in which variable to select. We provide a formal justification of this intuitive algorithm by showing that it optimizes a variational approximation to the posterior distribution under SuSiE. Further, this approximate posterior distribution naturally yields convenient novel summaries of uncertainty in variable selection, providing a credible set of variables for each selection. Our methods are particularly well suited to settings where variables are highly correlated and detectable effects are sparse, both of which are characteristics of genetic fine mapping applications. We demonstrate through numerical experiments that our methods outperform existing methods for this task, and we illustrate their application to fine mapping genetic variants influencing alternative splicing in human cell lines. We also discuss the potential and challenges for applying these methods to generic variable-selection problems.
Journal Article
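SuSiE's building block is the single-effect regression: assuming exactly one active variable, per-variable Bayes factors under a normal prior on the effect are normalized into posterior inclusion probabilities. A hedged sketch of that one step (the full IBSS procedure iterates this over several single effects on residualized outcomes; the function name and defaults here are illustrative):

```python
import numpy as np

def single_effect_pips(X, y, prior_var=1.0, resid_var=1.0):
    """Posterior inclusion probabilities for the single-effect regression
    y = x_j * b + e, b ~ N(0, prior_var), assuming exactly one active variable."""
    xtx = np.sum(X**2, axis=0)
    bhat = (X.T @ y) / xtx                # per-variable least-squares estimates
    s2 = resid_var / xtx                  # their sampling variances
    # Per-variable log Bayes factor (Wakefield-style approximate BF)
    log_bf = 0.5 * np.log(s2 / (s2 + prior_var)) \
           + 0.5 * (bhat**2 / s2) * prior_var / (s2 + prior_var)
    w = np.exp(log_bf - log_bf.max())     # softmax with a uniform prior over variables
    return w / w.sum()
```

On simulated data with one strong effect, the resulting distribution concentrates on the causal column; in SuSiE these distributions also yield the credible sets of variables described in the abstract.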
Probabilistic harmonization and annotation of single‐cell transcriptomics data with deep generative models
2021
As the number of single‐cell transcriptomics datasets grows, the natural next step is to integrate the accumulating data to achieve a common ontology of cell types and states. However, it is not straightforward to compare gene expression levels across datasets and to automatically assign cell type labels in a new dataset based on existing annotations. In this manuscript, we demonstrate that our previously developed method, scVI, provides an effective and fully probabilistic approach for joint representation and analysis of scRNA‐seq data, while accounting for uncertainty caused by biological and measurement noise. We also introduce single‐cell ANnotation using Variational Inference (scANVI), a semi‐supervised variant of scVI designed to leverage existing cell state annotations. We demonstrate that scVI and scANVI compare favorably to state‐of‐the‐art methods for data integration and cell state annotation in terms of accuracy, scalability, and adaptability to challenging settings. In contrast to existing methods, scVI and scANVI integrate multiple datasets with a single generative model that can be directly used for downstream tasks, such as differential expression. Both methods are easily accessible through scvi‐tools.
SYNOPSIS
This study demonstrates the ability of scVI to integrate single‐cell RNA‐seq datasets in a variety of settings and presents scANVI, a new development based on scVI for automated annotation of cell types and states.
In scVI, datasets from different labs and technologies are integrated in a joint latent space.
In scANVI, cell type annotations are transferred between datasets and across different scenarios.
Uncertainties of differential gene expression in multiple samples are quantified.
The performance of scVI and scANVI in data integration and cell state annotation is superior to other related methods.
Journal Article
α-VARIATIONAL INFERENCE WITH STATISTICAL GUARANTEES
by Yang, Yun; Pati, Debdeep; Bhattacharya, Anirban
in Approximation; Bayesian analysis; Dirichlet problem
2020
We provide statistical guarantees for a family of variational approximations to Bayesian posterior distributions, called α-VB, which has close connections with variational approximations of tempered posteriors in the literature. The standard variational approximation is a special case of α-VB with α = 1. When α ∈ (0, 1], a novel class of variational inequalities is developed for linking the Bayes risk under the variational approximation to the objective function in the variational optimization problem, implying that maximizing the evidence lower bound in variational inference has the effect of minimizing the Bayes risk within the variational density family. Operating in a frequentist setup, the variational inequalities imply that point estimates constructed from the α-VB procedure converge at an optimal rate to the true parameter in a wide range of problems. We illustrate our general theory with a number of examples, including the mean-field variational approximation to low- and high-dimensional Bayesian linear regression with spike-and-slab priors, Gaussian mixture models, and latent Dirichlet allocation.
Journal Article
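The α-VB objective targets the tempered (fractional) posterior, proportional to likelihood raised to the power α times the prior. For a conjugate Gaussian mean model the fractional posterior is available in closed form, which makes the effect of α easy to see (an illustrative sketch, not the paper's procedure; in this conjugate case the variational family contains the exact answer):

```python
import numpy as np

def fractional_posterior(x, sigma2=1.0, tau2=1.0, alpha=1.0):
    """Exact alpha-fractional posterior N(m, v) for a Gaussian mean theta,
    with x_i ~ N(theta, sigma2) and prior theta ~ N(0, tau2):
    posterior density proportional to likelihood**alpha * prior."""
    x = np.asarray(x, dtype=float)
    prec = alpha * x.size / sigma2 + 1.0 / tau2   # tempered posterior precision
    m = (alpha * x.sum() / sigma2) / prec
    return m, 1.0 / prec
```

Setting α = 1 recovers the standard posterior; α < 1 downweights the likelihood, giving a flatter, more conservative posterior with larger variance.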
CONVERGENCE RATES OF VARIATIONAL POSTERIOR DISTRIBUTIONS
2020
We study convergence rates of variational posterior distributions for nonparametric and high-dimensional inference. We formulate general conditions on the prior, likelihood and variational class that characterize the convergence rates. Under similar “prior mass and testing” conditions considered in the literature, the rate is found to be the sum of two terms. The first term stands for the convergence rate of the true posterior distribution, and the second term is contributed by the variational approximation error. For a class of priors that admit the structure of a mixture of product measures, we propose a novel prior mass condition, under which the variational approximation error of the mean-field class is dominated by the convergence rate of the true posterior. We demonstrate the applicability of our general results for various models, prior distributions and variational classes by deriving convergence rates of the corresponding variational posteriors.
Journal Article
Stochastic Chaos and Markov Blankets
2021
In this treatment of random dynamical systems, we consider the existence—and identification—of conditional independencies at nonequilibrium steady-state. These independencies underwrite a particular partition of states, in which internal states are statistically secluded from external states by blanket states. The existence of such partitions has interesting implications for the information geometry of internal states. In brief, this geometry can be read as a physics of sentience, where internal states look as if they are inferring external states. However, the existence of such partitions—and the functional form of the underlying densities—have yet to be established. Here, using the Lorenz system as the basis of stochastic chaos, we leverage the Helmholtz decomposition—and polynomial expansions—to parameterise the steady-state density in terms of surprisal or self-information. We then show how Markov blankets can be identified—using the accompanying Hessian—to characterise the coupling between internal and external states in terms of a generalised synchrony or synchronisation of chaos. We conclude by suggesting that this kind of synchronisation may provide a mathematical basis for an elemental form of (autonomous or active) sentience in biology.
Journal Article
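The paper above uses a stochastically driven Lorenz system as its working example of a random dynamical system. A minimal Euler-Maruyama simulation of such a system is sketched below (parameters are the classical Lorenz values; the Helmholtz decomposition and Markov blanket identification themselves are not implemented here):

```python
import numpy as np

def stochastic_lorenz(T=2000, dt=0.01, noise=1.0, seed=0,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-Maruyama simulation of the Lorenz system with additive noise,
    the kind of random dynamical system the paper builds on."""
    rng = np.random.default_rng(seed)
    x = np.empty((T, 3))
    x[0] = (1.0, 1.0, 1.0)
    for t in range(T - 1):
        X, Y, Z = x[t]
        drift = np.array([sigma * (Y - X),
                          X * (rho - Z) - Y,
                          X * Y - beta * Z])
        x[t + 1] = x[t] + dt * drift + np.sqrt(dt) * noise * rng.standard_normal(3)
    return x
```

Sample paths like these are what the polynomial parameterisation of the steady-state density would be fitted to; the paper's Hessian-based blanket identification then operates on that fitted surprisal.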
THEORETICAL AND COMPUTATIONAL GUARANTEES OF MEAN FIELD VARIATIONAL INFERENCE FOR COMMUNITY DETECTION
2020
The mean field variational Bayes method is becoming increasingly popular in statistics and machine learning. Its iterative coordinate ascent variational inference algorithm has been widely applied to large-scale Bayesian inference. See Blei et al. (2017) for a recent comprehensive review. Despite the popularity of the mean field method, there exists remarkably little fundamental theoretical justification. To the best of our knowledge, the iterative algorithm has never been investigated for any high-dimensional and complex model. In this paper, we study the mean field method for community detection under the stochastic block model. For an iterative batch coordinate ascent variational inference algorithm, we show that it has a linear convergence rate and converges to the minimax rate within log n iterations. This complements the results of Bickel et al. (2013), which studied the global minimum of the mean field variational Bayes and obtained asymptotically normal estimation of global model parameters. In addition, we obtain similar optimality results for Gibbs sampling and an iterative procedure to calculate maximum likelihood estimation, which can be of independent interest.
Journal Article
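The batch coordinate ascent algorithm studied above can be sketched for the simplest case: a two-community stochastic block model with known within- and between-block edge probabilities, where each sweep updates every node's variational membership probability given all the others (an illustrative toy, not the paper's general setting; the function name and defaults are assumptions):

```python
import numpy as np

def cavi_sbm_two_blocks(A, p=0.9, q=0.1, n_iter=50, seed=0):
    """Coordinate ascent (mean field) for a 2-community SBM with known
    within-block edge probability p and between-block probability q.
    tau[i] is the variational probability that node i is in community 0."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    lp = np.log([p, 1 - p])               # [log p, log(1-p)]
    lq = np.log([q, 1 - q])               # [log q, log(1-q)]
    tau = rng.uniform(0.3, 0.7, n)        # random init breaks label symmetry
    for _ in range(n_iter):
        for i in range(n):
            s0 = s1 = 0.0                 # scores for communities 0 and 1
            for j in range(n):
                if j == i:
                    continue
                e = 0 if A[i, j] else 1   # edge vs. non-edge log-prob index
                # same community as j -> p-term; different community -> q-term
                s0 += tau[j] * lp[e] + (1 - tau[j]) * lq[e]
                s1 += (1 - tau[j]) * lp[e] + tau[j] * lq[e]
            tau[i] = 1.0 / (1.0 + np.exp(s1 - s0))
    return tau
```

On a clean two-clique graph the memberships sharpen to hard labels within a few sweeps, illustrating the fast convergence the paper quantifies (up to the usual label-switching ambiguity of which community is called 0).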
A representation learning model based on variational inference and graph autoencoder for predicting lncRNA-disease associations
2021
Background
Numerous studies have demonstrated that long non-coding RNAs are associated with many human diseases. Therefore, it is crucial to predict potential lncRNA-disease associations for disease prognosis, diagnosis and therapy. Dozens of machine learning and deep learning algorithms have been applied to this problem, yet it is still challenging to learn efficient low-dimensional representations from high-dimensional features of lncRNAs and diseases to predict unknown lncRNA-disease associations accurately.
Results
We proposed an end-to-end model, VGAELDA, which integrates variational inference and graph autoencoders for lncRNA-disease association prediction. VGAELDA contains two kinds of graph autoencoders. Variational graph autoencoders (VGAE) infer representations from features of lncRNAs and diseases respectively, while graph autoencoders propagate labels via known lncRNA-disease associations. These two kinds of autoencoders are trained alternately by adopting a variational expectation-maximization algorithm. The integration of the VGAE for graph representation learning with the alternate training via variational inference strengthens the capability of VGAELDA to capture efficient low-dimensional representations from high-dimensional features, and hence improves the robustness and precision of predicting unknown lncRNA-disease associations. Further analysis shows that the designed co-training framework of lncRNA and disease for VGAELDA solves a geometric matrix completion problem, capturing efficient low-dimensional representations via a deep learning approach.
Conclusion
Cross validations and numerical experiments illustrate that VGAELDA outperforms the current state-of-the-art methods in lncRNA-disease association prediction. Case studies indicate that VGAELDA is capable of detecting potential lncRNA-disease associations. The source code and data are available at https://github.com/zhanglabNKU/VGAELDA.
Journal Article