Catalogue Search | MBRL
Explore the vast range of titles available.
1,127 result(s) for "prior distribution"
A Weakly Informative Default Prior Distribution for Logistic and Other Regression Models
by Gelman, Andrew; Pittau, Maria Grazia; Su, Yu-Sung
in Bayesian inference; Datasets; Distribution theory
2008
We propose a new prior distribution for classical (nonhierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-t prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. Cross-validation on a corpus of datasets shows the Cauchy class of prior distributions to outperform existing implementations of Gaussian and Laplace priors. We recommend this prior distribution as a default choice for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small), and also automatically applying more shrinkage to higher-order interactions. This can be useful in routine data analysis as well as in automated procedures such as chained equations for missing-data imputation. We implement a procedure to fit generalized linear models in R with the Student-t prior distribution by incorporating an approximate EM algorithm into the usual iteratively weighted least squares. We illustrate with several applications, including a series of logistic regressions predicting voting preferences, a small bioassay experiment, and an imputation model for a public health data set.
Journal Article
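The recipe in the abstract above (rescale nonbinary predictors to mean 0 and sd 0.5, then put independent Cauchy(0, 2.5) priors on the coefficients) can be sketched as a generic MAP fit. This is not the paper's implementation, which embeds an approximate EM step inside iteratively weighted least squares; the intercept prior scale of 10 is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def cauchy_logpdf(beta, scale):
    # Log density of a Cauchy(0, scale) prior, elementwise.
    return -np.log(np.pi * scale * (1.0 + (beta / scale) ** 2))

def fit_map(X, y, coef_scale=2.5, intercept_scale=10.0):
    """MAP logistic regression with independent Cauchy priors.

    Predictors are scaled to mean 0, sd 0.5, per the abstract's recipe;
    each coefficient then gets a Cauchy(0, 2.5) prior. The intercept
    scale of 10 is an assumption of this sketch.
    """
    Xs = (X - X.mean(axis=0)) / (2.0 * X.std(axis=0))   # sd 0.5
    Xd = np.column_stack([np.ones(len(y)), Xs])
    scales = np.r_[intercept_scale, np.full(Xs.shape[1], coef_scale)]

    def neg_log_post(beta):
        eta = Xd @ beta
        loglik = np.sum(y * eta - np.logaddexp(0.0, eta))  # stable log(1+e^eta)
        return -(loglik + cauchy_logpdf(beta, scales).sum())

    return minimize(neg_log_post, np.zeros(Xd.shape[1]), method="BFGS").x

# Completely separated data: the MLE diverges, but the prior keeps the
# MAP estimate finite, which is the behavior the abstract promises.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
beta_hat = fit_map(X, y)
```

The long-tailed Cauchy prior is what allows strong signals to escape shrinkage while still regularizing the separated case.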
Bayesian approach for a 2 x 2 crossover design with repeated measures: a simulation study
by Beijo, Luiz Alberto; Lopez, Yaciled Miranda; Nogueira, Denismar Alves
in Bayesian analysis; carryover effects; longitudinal data; MCMC; prior distribution; Estimates
2024
In crossover designs, subjects receive all treatments according to the groups of sequences formed. Therefore, if carryover effects are present in the model, inference about the treatment effects becomes difficult. Furthermore, repeated measures of the response variable can be taken over time on the same experimental unit, and these measures may be correlated. We therefore aimed to analyze a 2 x 2 crossover design with repeated measures within the treatment period using a Bayesian approach, and performed a simulation study to evaluate its performance. The posterior estimates of the model parameters were obtained under non-informative prior distributions and a normal likelihood function. The model performed well with a sample size of 20 subjects and showed even better results with samples of 100 subjects. With larger samples, exact tests for the differences in carryover effects and time effects were obtained; the test of the time effect proved powerful even with small samples. In turn, considering carryover effects different from zero did not influence the estimates of treatment differences, although biased estimates of the period effect were obtained.
Journal Article
The Prior Can Often Only Be Understood in the Context of the Likelihood
by Simpson, Daniel; Gelman, Andrew; Betancourt, Michael
in Bayesian analysis; Bayesian inference; default priors
2017
A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.
Journal Article
Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors
by Martins, Thiago G.; Rue, Håvard; Simpson, Daniel
in Bayesian analysis; Complexity; Data models
2017
In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
Journal Article
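The best-known concrete instance of this penalised-complexity framework, for the standard deviation of a Gaussian random effect, works out to an exponential prior on the distance from the base model (sigma = 0), with the rate fixed by a user-defined tail statement P(sigma > u) = alpha. A minimal sketch under that assumption:

```python
import math

def pc_rate(u, alpha):
    """Rate lambda such that P(sigma > u) = alpha under the exponential
    PC prior p(sigma) = lambda * exp(-lambda * sigma)."""
    return -math.log(alpha) / u

def pc_pdf(sigma, u, alpha):
    # Density of the PC prior at sigma, given the user's scaling (u, alpha).
    lam = pc_rate(u, alpha)
    return lam * math.exp(-lam * sigma)

# User scaling: "it is unlikely (1%) that sigma exceeds 1".
lam = pc_rate(1.0, 0.01)
tail = math.exp(-lam * 1.0)   # P(sigma > 1) under the resulting prior
```

The single interpretable input (u, alpha) is what makes this usable as a default: the prior shrinks toward the base model at a rate the user controls directly.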
Incorporating Bayesian Ideas into Health-Care Evaluation
2004
We argue that the Bayesian approach is best seen as providing additional tools for those carrying out health-care evaluations, rather than replacing their traditional methods. A distinction is made between those features that arise from the basic Bayesian philosophy and those that come from the modern ability to make inferences using very complex models. Selected examples of the former include explicit recognition of the wide cast of stakeholders in any evaluation, simple use of Bayes theorem and use of a community of prior distributions. In the context of complex models, we selectively focus on the possible role of simple Monte Carlo methods, alternative structural models for incorporating historical data and making inferences on complex functions of indirectly estimated parameters. These selected issues are illustrated by two worked examples presented in a standardized format. The emphasis throughout is on inference rather than decision-making.
Journal Article
The 3-component mixture of power distributions under Bayesian paradigm with application of life span of fatigue fracture
2024
Mixture distributions are naturally more attractive than simple probability models for modeling the heterogeneous environments of processes in reliability analysis. The focus of this study is to develop Bayesian inference for the 3-component mixture of power distributions. Under symmetric and asymmetric loss functions, the Bayes estimators and posterior risks using these priors are derived. The behavior of the Bayes estimators for various sample sizes and test termination times (the fixed time after which a test is terminated) is examined in this article. To assess the performance of the Bayes estimators in terms of posterior risk, a Monte Carlo simulation along with a real-data study is presented.
Journal Article
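The mixture posterior itself cannot be reproduced from the abstract, but the loss-function mechanics it relies on are standard: under squared-error loss the Bayes estimator is the posterior mean, under absolute-error loss the posterior median. A sketch using a stand-in Gamma posterior sample (an assumption of this sketch; a real analysis would use MCMC draws from the 3-component mixture posterior):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in posterior draws for a positive parameter.
draws = rng.gamma(shape=3.0, scale=1.0, size=100_000)

est_squared = draws.mean()       # Bayes estimator under squared-error loss
est_absolute = np.median(draws)  # Bayes estimator under absolute-error loss

# The posterior risk of the squared-error estimator is the posterior variance.
risk_squared = draws.var()
```

For a right-skewed posterior like this one, the two loss functions give visibly different estimators (the median sits below the mean), which is why the paper compares symmetric and asymmetric losses.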
Expert agreement in prior elicitation and its effects on Bayesian inference
by Gronau, Quentin F.; Stefan, Angelika M.; Katsimpokis, Dimitris
in Bayes Theorem; Bayesian analysis; Behavioral Science and Psychology
2022
Bayesian inference requires the specification of prior distributions that quantify the pre-data uncertainty about parameter values. One way to specify prior distributions is through prior elicitation, an interview method guiding field experts through the process of expressing their knowledge in the form of a probability distribution. However, prior distributions elicited from experts can be subject to idiosyncrasies of experts and elicitation procedures, raising the spectre of subjectivity and prejudice. Here, we investigate the effect of interpersonal variation in elicited prior distributions on the Bayes factor hypothesis test. We elicited prior distributions from six academic experts with a background in different fields of psychology and applied the elicited prior distributions as well as commonly used default priors in a re-analysis of 1710 studies in psychology. The degree to which the Bayes factors vary as a function of the different prior distributions is quantified by three measures of concordance of evidence: We assess whether the prior distributions change the Bayes factor direction, whether they cause a switch in the category of evidence strength, and how much influence they have on the value of the Bayes factor. Our results show that although the Bayes factor is sensitive to changes in the prior distribution, these changes do not necessarily affect the qualitative conclusions of a hypothesis test. We hope that these results help researchers gauge the influence of interpersonal variation in elicited prior distributions in future psychological studies. Additionally, our sensitivity analyses can be used as a template for Bayesian robustness analyses that involve prior elicitation from multiple experts.
Journal Article
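The sensitivity analysis described above can be sketched for a binomial test rather than the paper's psychological designs (an assumption of this sketch): compute the Bayes factor for H1: theta ~ Beta(a, b) against H0: theta = 0.5 under several "elicited" priors and compare magnitude and direction.

```python
from scipy.stats import betabinom, binom

def bf10(k, n, a, b):
    """Bayes factor for H1: theta ~ Beta(a, b) vs H0: theta = 0.5."""
    marginal_h1 = betabinom.pmf(k, n, a, b)   # prior-averaged likelihood
    marginal_h0 = binom.pmf(k, n, 0.5)        # point-null likelihood
    return marginal_h1 / marginal_h0

# Same data (75 successes in 100 trials), three hypothetical priors:
# the Bayes factor's magnitude moves, but its direction does not.
bfs = {(a, b): bf10(75, 100, a, b) for (a, b) in [(1, 1), (2, 2), (5, 5)]}
```

This mirrors the paper's finding: interpersonal variation in priors changes the value of the Bayes factor but need not change the qualitative conclusion.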
Uncertainty Evaluation Based on Bayesian Transformations: Taking Facies Proportion as An Example
by Li, Shaohua; Qiao, Yangming; Li, Wanbing
in Bayesian transformation; facies proportion; Forecasts and trends
2023
Many input parameters in reservoir modeling cannot be uniquely determined due to the incompleteness of data and the heterogeneity of the reservoir. Sedimentary facies modeling is a crucial part of reservoir modeling. The facies proportion is an important parameter affecting the modeling results, because that proportion directly determines the net gross ratio, reserves and sandbody connectivity. An uncertainty evaluation method based on Bayesian transformation is proposed to reduce the uncertainty of the facies proportion. According to the existing data and geological knowledge, the most probable value of the facies ratio and the prior distribution of uncertainty are estimated. The prior distribution of the facies proportion is divided into several intervals, and the proportions contained in each interval are used in facies modeling. Then, spatial resampling is carried out for each realization to obtain the likelihood estimation of the facies proportion. Finally, the posterior distribution of the facies ratio is achieved based on Bayesian transformation. The case study shows that the uncertainty interval of sandstone proportion in the study area has been reduced from [0.31, 0.59] to [0.35, 0.55], with a range reduction of 29%, indicating that the updated posterior distribution reduces the uncertainty of reservoir lithofacies proportion, thereby reducing the uncertainty of modeling results.
Journal Article
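The interval-based update described in the abstract reduces to a discrete Bayes rule: a prior over facies-proportion intervals, likelihoods from spatial resampling of the realizations, and a normalized product. The grid, prior, and likelihood values below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical midpoints of the prior intervals over sandstone proportion.
props = np.linspace(0.31, 0.59, 8)
prior = np.full(props.size, 1.0 / props.size)   # flat prior over intervals

# Hypothetical likelihoods from spatial resampling of the realizations
# built with each interval's proportion (peaking mid-range here).
likelihood = np.exp(-0.5 * ((props - 0.45) / 0.05) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()                    # discrete Bayes rule

# The posterior concentrates relative to the prior, analogous to the
# case study's [0.31, 0.59] -> [0.35, 0.55] interval reduction.
prior_sd = np.sqrt(np.sum(prior * (props - np.sum(prior * props)) ** 2))
post_sd = np.sqrt(np.sum(posterior * (props - np.sum(posterior * props)) ** 2))
```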
Variational graph neural network with diffusion prior for link prediction
by Li, Zhipeng; Yuan, Chang-An; Huang, De-Shuang
in Coders; Diffusion layers; Graph neural networks
2025
Recently, graph neural networks (GNNs) have achieved tremendous success in a variety of fields, and many approaches have been proposed to address data with graph structure. However, many of these are deterministic methods and are therefore unable to capture the uncertainty inherent in the nature of graph data. Various VAE (variational auto-encoder)-based approaches have been proposed to tackle such problems. Unfortunately, because such methods make overly simple posterior and prior assumptions, they are not well suited to handling uncertainty in graph data. For example, VGAE (variational graph auto-encoder) assumes that the posterior and prior distributions are simple Gaussians, which can lead to overfitting when they are incompatible with the true distributions. Many methods address the posterior distribution, but most ignore the effect of the prior distribution. Therefore, in this paper we propose a novel method to solve the Gaussian-prior problem. Specifically, to enhance the representation power of the prior distribution, we use a diffusion model to model the prior and incorporate it into VGAE. In the forward diffusion process, noise is gradually added to the latent variables, and the samples are then recovered by the backward diffusion process. To realize the backward diffusion process, we propose a new denoising model that predicts noise by stacking a GCN (graph convolution network) and an MLP (multi-layer perceptron). We perform experiments on different datasets, and the results demonstrate that our method obtains state-of-the-art results.
Journal Article
Ensuring identifiability in hierarchical mixed effects Bayesian models
2020
Ecologists are increasingly familiar with Bayesian statistical modeling and its associated Markov chain Monte Carlo (MCMC) methodology to infer about or to discover interesting effects in data. The complexity of ecological data often suggests implementation of (statistical) models with a commensurately rich structure of effects, including crossed or nested (i.e., hierarchical or multi-level) structures of fixed and/or random effects. Yet, our experience suggests that most ecologists are not familiar with subtle but important problems that often arise with such models and with their implementation in popular software. Of foremost consideration for us is the notion of effect identifiability, which generally concerns how well data, models, or implementation approaches inform about, i.e., identify, quantities of interest. In this paper, we focus on implementation pitfalls that potentially misinform subsequent inference, despite otherwise informative data and models. We illustrate the aforementioned issues using random effects regressions on synthetic data. We show how to diagnose identifiability issues and how to remediate these issues with model reparameterization and computational and/or coding practices in popular software, with a focus on JAGS, OpenBUGS, and Stan. We also show how these solutions can be extended to more complex models involving multiple groups of nested, crossed, additive, or multiplicative effects, for models involving random and/or fixed effects. Finally, we provide example code (JAGS/OpenBUGS and Stan) that practitioners can modify and use for their own applications.
Journal Article
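The intercept/random-effect confounding this paper warns about can be shown in a few lines with a toy Gaussian model (not the paper's JAGS/OpenBUGS/Stan code): the likelihood is flat along the direction mu -> mu + c, alpha -> alpha - c, and a sum-to-zero reparameterization removes that flat direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 4, 25
groups = np.repeat(np.arange(n_groups), n_per)
alpha_true = rng.normal(0.0, 1.0, n_groups)     # group (random) effects
y = 2.0 + alpha_true[groups] + rng.normal(0.0, 0.5, groups.size)

def loglik(mu, alpha):
    # Gaussian log-likelihood up to a constant, unit error variance.
    resid = y - mu - alpha[groups]
    return -0.5 * np.sum(resid ** 2)

# Non-identifiability: shifting mass between mu and the alphas leaves
# the likelihood unchanged, so the data cannot pin both down.
c = 3.7
flat_direction = np.isclose(loglik(2.0, alpha_true),
                            loglik(2.0 + c, alpha_true - c))

# Sum-to-zero reparameterization: absorb the mean of alpha into mu.
alpha_centered = alpha_true - alpha_true.mean()
mu_adjusted = 2.0 + alpha_true.mean()
same_fit = np.isclose(loglik(2.0, alpha_true),
                      loglik(mu_adjusted, alpha_centered))
```

In MCMC software this flat direction shows up as poor mixing of the intercept and random-effect chains, which is the implementation pitfall the paper diagnoses and remediates.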