61 results for "General Bayesian updating"
A general framework for updating belief distributions
We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
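The update described in this abstract is often written as a Gibbs posterior: post(theta) is proportional to exp(-w * cumulative loss) * prior(theta), with standard Bayes recovered when the loss is a negative log-likelihood. Below is a minimal grid-based sketch targeting a median through the absolute-error loss; the data, the prior, and the learning rate w are illustrative assumptions, not choices made in the paper, and in practice w requires calibration.

```python
# Sketch: loss-based ("general Bayesian" / Gibbs) posterior on a grid.
# posterior(theta) ∝ exp(-w * sum_i loss(theta, x_i)) * prior(theta)
# The absolute-error loss (targeting the median), the normal prior,
# and the learning rate w are illustrative choices, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=200) + 1.0   # heavy-tailed data

theta = np.linspace(-2, 4, 2001)           # parameter grid
w = 1.0                                    # learning rate (needs calibration)
cum_loss = np.abs(x[:, None] - theta[None, :]).sum(axis=0)  # sum_i |x_i - theta|
log_prior = -0.5 * theta**2 / 10.0         # N(0, 10) prior, unnormalized

log_post = -w * cum_loss + log_prior
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta)              # normalize on the grid

print("posterior mean:", np.trapz(theta * post, theta))
print("sample median :", np.median(x))
```

Because the parameter is one-dimensional, normalization is exact up to quadrature error; for higher-dimensional parameters one would run MCMC on the same log-posterior instead.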
General Bayesian updating and the loss-likelihood bootstrap
In this paper we revisit the weighted likelihood bootstrap, a method that generates samples from an approximate Bayesian posterior of a parametric model. We show that the same method can be derived, without approximation, under a Bayesian nonparametric model in which the parameter of interest is defined through minimizing an expected negative log-likelihood under an unknown sampling distribution. This interpretation enables us to extend the weighted likelihood bootstrap to posterior sampling for parameters that minimize an expected loss. We call this method the loss-likelihood bootstrap, and we make a connection between it and general Bayesian updating, a way of updating prior belief distributions that does not require the construction of a global probability model, yet requires the calibration of two forms of loss function. The loss-likelihood bootstrap is used to calibrate the general Bayesian posterior by matching asymptotic Fisher information. We demonstrate the proposed method on a number of examples.
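As a rough illustration of the mechanism, the sketch below draws Dirichlet(1, ..., 1) weights and minimizes the re-weighted loss once per posterior draw. The absolute-error loss (so the parameter is a median) is an assumed choice, and the calibration step described in the abstract is omitted.

```python
# Sketch: loss-likelihood bootstrap for a parameter defined as the
# minimizer of an expected loss. Each posterior draw re-weights the
# data with Dirichlet(1,...,1) weights and minimizes the weighted
# loss. The absolute-error loss is an illustrative choice.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=200) + 1.0

def weighted_loss(theta, w):
    return np.sum(w * np.abs(x - theta))

draws = []
for _ in range(1000):
    w = rng.dirichlet(np.ones(len(x)))            # bootstrap weights
    res = minimize_scalar(weighted_loss, args=(w,),
                          bounds=(x.min(), x.max()), method="bounded")
    draws.append(res.x)

draws = np.array(draws)
print("posterior mean:", draws.mean(), " 95% interval:",
      np.quantile(draws, [0.025, 0.975]))
```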
Ambiguity and partial Bayesian updating
Models of updating a set of priors either do not allow a decision maker to make inferences about her priors (full Bayesian updating, FB) or require an extreme degree of selection (maximum likelihood updating, ML). I characterize a general method for updating a set of priors, partial Bayesian updating (PB), in which the decision maker (1) uses an event-dependent threshold to determine whether a prior is likely enough, conditional on observed information, and then (2) applies Bayes' rule to the sufficiently likely priors. I show that PB nests FB and ML and explore its behavioral properties.
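A toy version of the rule over a finite state space might look as follows; one common formalization uses a threshold relative to the best-case likelihood, and the set of priors, the observed event, and the threshold alpha below are all invented for illustration. With this formalization, alpha = 1 keeps only maximum-likelihood priors (ML), while alpha near 0 keeps every prior assigning the event positive probability (FB).

```python
# Sketch: partial Bayesian updating over a finite state space.
# Keep only priors whose likelihood of the observed event is within a
# factor alpha of the best prior, then apply Bayes' rule prior-by-prior.
# States, priors, and alpha are illustrative.
import numpy as np

priors = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3],
                   [0.1, 0.2, 0.7]])      # a set of priors over 3 states
event = np.array([True, True, False])     # observed event A = {s1, s2}

lik = priors[:, event].sum(axis=1)        # P(A) under each prior
alpha = 0.8                               # event-dependent threshold
kept = lik >= alpha * lik.max()           # "sufficiently likely" priors

posteriors = priors[kept] * event         # zero out states outside A
posteriors /= posteriors.sum(axis=1, keepdims=True)
print("kept priors:", np.where(kept)[0])
print("posteriors:\n", posteriors)
```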
Pseudo-Bayesian updating
I propose an axiomatic framework for belief revision when new information is qualitative, of the form "event A is at least as likely as event B." My decision maker need not have beliefs about the joint distribution of the signal she will receive and the payoff-relevant states. I propose three axioms, Exchangeability, Stationarity, and Reduction, to characterize the class of pseudo-Bayesian updating rules. The key axiom, Exchangeability, requires that the order in which information arrives does not matter if the different pieces of information neither reinforce nor contradict each other. I show that adding one more axiom, Conservatism, which requires that the decision maker adjust her beliefs just enough to embrace the new information, yields Kullback-Leibler minimization: the decision maker selects the posterior closest to her prior, in terms of Kullback-Leibler divergence, from the probability measures consistent with the newly received information. I show that pseudo-Bayesian agents are susceptible to recency bias, which may be mitigated by repetitive learning.
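The Kullback-Leibler step has a direct computational analogue: project the prior onto the set of measures satisfying the qualitative constraint. The sketch below does this numerically on a four-state example; the prior and the events A and B are invented for illustration.

```python
# Sketch: Kullback-Leibler projection of a prior onto the measures
# consistent with the qualitative signal "A is at least as likely as B".
# The prior and the two events are illustrative.
import numpy as np
from scipy.optimize import minimize

p = np.array([0.1, 0.2, 0.3, 0.4])        # prior over 4 states
A = np.array([1, 1, 0, 0], dtype=float)   # event A = {s1, s2}
B = np.array([0, 0, 1, 1], dtype=float)   # event B = {s3, s4}

def kl(q):                                 # KL(q || p)
    q = np.clip(q, 1e-12, None)
    return np.sum(q * np.log(q / p))

cons = [{"type": "eq",   "fun": lambda q: q.sum() - 1.0},
        {"type": "ineq", "fun": lambda q: A @ q - B @ q}]  # q(A) >= q(B)
res = minimize(kl, p.copy(), constraints=cons,
               bounds=[(0, 1)] * 4, method="SLSQP")
print("posterior:", np.round(res.x, 4))   # closest measure with q(A) >= q(B)
```

The solution preserves the relative odds within A and within B while shifting just enough mass to make the constraint bind, which is the "adjust just enough" behavior the Conservatism axiom captures.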
How much do we learn? Measuring symmetric and asymmetric deviations from Bayesian updating through choices
Belief-updating biases hinder the correction of inaccurate beliefs and lead to suboptimal decisions. We complement Rabin and Schrag's (1999) portable extension of the Bayesian model by including conservatism in addition to confirmatory bias. Additionally, we show how to identify these two forms of bias from choices. In an experiment, we found that the subjects exhibited confirmatory bias by misreading 19% of the signals that contradicted their priors. They were also conservative and acted as if they missed 28% of the signals.
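A small simulation makes the two biases concrete. In the sketch below the agent misreads a contradicting signal with probability q and misses any signal with probability m; q = 0.19 and m = 0.28 echo the estimates quoted in the abstract, while the signal accuracy and horizon are invented.

```python
# Sketch: a biased belief updater in the spirit of the abstract.
# With probability q the agent misreads a signal contradicting her
# current belief (confirmatory bias); with probability m she ignores
# the signal entirely (conservatism). Everything else is illustrative.
import numpy as np

rng = np.random.default_rng(2)
p_signal = 0.7          # a signal matches the true state w.p. 0.7
q, m = 0.19, 0.28
true_state = 1          # states are 0/1; both agents start at 50/50

def update(belief, s):  # Bayes' rule for one binary signal
    like1 = p_signal if s == 1 else 1 - p_signal
    like0 = 1 - like1
    return belief * like1 / (belief * like1 + (1 - belief) * like0)

belief_bayes, belief_biased = 0.5, 0.5
for _ in range(50):
    s = true_state if rng.random() < p_signal else 1 - true_state
    belief_bayes = update(belief_bayes, s)
    if rng.random() < m:                 # conservatism: signal missed
        continue
    favored = 1 if belief_biased >= 0.5 else 0
    if s != favored and rng.random() < q:
        s = favored                      # confirmatory bias: misread
    belief_biased = update(belief_biased, s)

print(f"Bayesian belief: {belief_bayes:.3f}  biased belief: {belief_biased:.3f}")
```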
A Bayesian approach to develop simple run-out distance models: loess landslides in Heifangtai Terrace, Gansu Province, China
Due to their practicability, simple and data-driven empirical models have been extensively developed and applied in practical engineering to predict the run-out distance of landslides. However, the definition of the most appropriate empirical model for specific landslide data is rarely discussed. Moreover, the empirical model is subject to the high variability of landslide data, which should be quantified in the model to provide more reliable predictions. We therefore propose in this paper a simple, practical, and probabilistic run-out distance prediction method based on an empirical model and the Bayesian method. This method was implemented with a regional landslide inventory compiled from 34 loess landslides in Heifangtai Terrace, Gansu Province, China. In this method, we performed Bayesian model selection to determine the most appropriate empirical model for the compiled database among candidate models adapted from previous literature. Considering the high variability of the data, the unknown parameters of the empirical model are regarded as random variables, and their posterior distributions are obtained by Bayesian updating with the compiled database. We then developed a probabilistic run-out prediction model to evaluate the run-out distance exceedance probability of landslides, based on the most appropriate model and its associated posterior information. We used data from two recent landslides in Heifangtai Terrace to validate the performance of the proposed model. In addition, we produced a run-out distance exceedance probability curve using the proposed method for a potential landslide in Heifangtai Terrace, in which the sliding volume interval is estimated using the sloping local base level (SLBL) method. Overall, this study presents a practical method for landslide run-out distance analysis within a probabilistic framework, aiming to support risk-based decisions.
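The model-selection step can be illustrated with a stripped-down analogue: compare candidate empirical forms for run-out distance using BIC as a cheap proxy for the marginal likelihood. The candidate forms and the synthetic data below are placeholders, not the paper's inventory or models.

```python
# Sketch: Bayesian-style selection among candidate empirical run-out
# models, using BIC as a proxy for the marginal likelihood. Candidate
# forms and synthetic data are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(3)
n = 34
logV = rng.uniform(3, 6, n)               # log10 sliding volume
H = rng.uniform(20, 120, n)               # slope height (m)
L = 15 + 22 * logV + rng.normal(0, 8, n)  # synthetic "observed" run-out (m)

candidates = {"L ~ logV":     np.column_stack([np.ones(n), logV]),
              "L ~ H":        np.column_stack([np.ones(n), H]),
              "L ~ logV + H": np.column_stack([np.ones(n), logV, H])}

for name, X in candidates.items():
    beta, *_ = np.linalg.lstsq(X, L, rcond=None)
    resid = L - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                    # coefficients + noise variance
    bic = k * np.log(n) - 2 * loglik      # lower is better
    print(f"{name:14s} BIC = {bic:7.1f}")
```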
Quantifying the sampling error on burn counts in Monte-Carlo wildfire simulations using Poisson and Gamma distributions
This article provides a precise, quantitative description of the sampling error on burn counts in Monte-Carlo wildfire simulations, that is, the prediction variability introduced by the fact that the set of simulated fires is random and finite. We show that the marginal burn counts are (very nearly) Poisson-distributed in typical settings and infer through Bayesian updating that Gamma distributions are suitable summaries of the remaining uncertainty. In particular, the coefficient of variation of the burn count is equal to the inverse square root of its expected value, and this expected value is proportional to the number of simulated fires multiplied by the asymptotic burn probability. From these results, we derive practical guidelines for choosing the number of simulated fires and estimating the sampling error. Notably, the required number of simulated years is expressed as a power law. Such findings promise to relieve fire modelers of resource-consuming iterative experiments for sizing simulations and assessing their convergence: statistical theory provides better answers, faster.
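The Poisson-Gamma pairing in the abstract is the standard conjugate one: a Gamma prior on the Poisson mean updates in closed form, and the count's coefficient of variation is the inverse square root of its expectation. A minimal sketch, with invented numbers:

```python
# Sketch of the Gamma-Poisson relationship the abstract exploits.
# If the burn count N over n simulated years is Poisson with mean
# n * p (p the asymptotic burn probability), a Gamma(a0, b0) prior on
# that mean updates conjugately to Gamma(a0 + N, b0 + 1), and the
# coefficient of variation of the count is 1/sqrt(E[N]).
import numpy as np

rng = np.random.default_rng(4)
n_years, p_burn = 5000, 0.002
N = rng.poisson(n_years * p_burn)          # simulated burn count

a0, b0 = 0.5, 1e-6                         # weak Gamma prior on the mean
a_post, b_post = a0 + N, b0 + 1.0          # conjugate update
print("observed count:", N, " posterior mean:", round(a_post / b_post, 2))

mu = n_years * p_burn                      # expected count
print("CV of count: 1/sqrt(mu) =", round(1 / np.sqrt(mu), 3))
```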
How do consumers respond to COVID-19? Application of Bayesian approach on credit card transaction data
Determining how consumers respond to unexpected outbreaks is a core research area in risk analysis. Using the case of the COVID-19 pandemic, this study estimates consumption behavior, paying particular attention to how consumers update their information about the spread of the pandemic. We propose four models of information updating: naïve expectation, adaptive expectation, and perfect and non-perfect Bayesian models. Using real-time credit card transactions in Taiwan, we find that consumers respond to the spread of confirmed COVID-19 cases in the way predicted by the perfect Bayesian model. Moreover, we find that COVID-19 increases consumers' expenditure on clothing and transportation in offline markets. With respect to food, we find decreased expenditure in offline markets and increased expenditure in online markets. Our findings are robust to different measurements of COVID-19 spread.
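Three of the four updating rules can be sketched side by side on a toy case-count series: the naïve rule expects the last observation, the adaptive rule mixes the last observation into the previous expectation, and a simple conjugate Gamma-Poisson filter stands in here for the Bayesian benchmark. The series and parameters below are invented, and the paper's non-perfect Bayesian variant is omitted.

```python
# Sketch: naïve, adaptive, and (stand-in) Bayesian expectation rules
# applied to a daily case-count series with an outbreak jump.
# Series and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
rate = np.concatenate([np.full(20, 2.0), np.full(20, 8.0)])  # outbreak jump
cases = rng.poisson(rate)

lam = 0.4                                  # adaptive-expectations weight
a, b = 1.0, 1.0                            # Gamma prior on the Poisson rate
e_naive, e_adapt, e_bayes = [cases[0]], [float(cases[0])], []
for t, y in enumerate(cases):
    a, b = a + y, b + 1.0                  # conjugate update of the rate
    e_bayes.append(a / b)
    if t > 0:
        e_naive.append(cases[t - 1])                        # naïve rule
        e_adapt.append(lam * cases[t - 1] + (1 - lam) * e_adapt[-1])

print("final expectations  naive:", e_naive[-1],
      " adaptive:", round(e_adapt[-1], 2),
      " Bayesian:", round(e_bayes[-1], 2))
```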
The Unification of Evolutionary Dynamics through the Bayesian Decay Factor in a Game on a Graph
We unify evolutionary dynamics on graphs under strategic uncertainty through a decaying Bayesian update. Our analysis focuses on the Price theorem of selection, which governs replicator(-mutator) dynamics, based on a stratified interaction mechanism and a composite strategy update rule. Our findings suggest that the replication of a certain mutation in a strategy, leading to a shift from competition to cooperation in a well-mixed population, is equivalent to the replication of a strategy in a Bayesian-structured population without any mutation. Likewise, the replication of a strategy in a Bayesian-structured population with a certain mutation, resulting in a move from competition to cooperation, is equivalent to the replication of a strategy in a well-mixed population without any mutation. This equivalence holds when the transition rate from competition to cooperation equals the relative strength of selection acting on either competition or cooperation, relative to the selection differential between cooperators and competitors. Our results identify situations where cooperation is more likely, irrespective of the specific payoff levels. This approach provides new perspectives on the intended purpose of Price's equation, which was not initially designed for this type of analysis.
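The Price equation underlying this analysis is easy to check numerically: the change in a mean trait decomposes into a selection term, Cov(w, z) / mean(w), and a transmission term, E[w * dz] / mean(w). The toy population below is invented purely to verify the identity.

```python
# Sketch: numerical check of the Price equation. For a population with
# fitnesses w_i, trait values z_i, and offspring trait change dz_i:
#   delta z_bar = Cov(w, z)/w_bar + E[w * dz]/w_bar
# All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n = 8
z = rng.random(n)              # parent trait values (e.g., cooperativeness)
w = 1.0 + 0.5 * z              # fitness increasing in the trait
dz = rng.normal(0, 0.02, n)    # transmission change (mutation)

w_bar = w.mean()
z_bar_next = np.sum(w * (z + dz)) / np.sum(w)   # offspring mean trait
lhs = z_bar_next - z.mean()
rhs = np.cov(w, z, bias=True)[0, 1] / w_bar + np.mean(w * dz) / w_bar
print(f"direct change: {lhs:.6f}  Price equation: {rhs:.6f}")
```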
Bayesian Updating Methodology for the Post-fire Evaluation of the Maximum Temperature Profile Inside Concrete Elements
The post-fire assessment of concrete structures is a complex task that requires integrating multiple measurements from different techniques. The current approach to integrating information from different sources relies mainly on expert judgement, meaning that no explicit consideration is given to the precision of the different techniques. This paper presents a Bayesian updating methodology that integrates information from different sources, such as discoloration and rebound hammer measurements, about the maximum temperature the concrete experienced at a given depth during fire exposure, while accounting for the uncertainties and errors associated with each measurement. These data are then used to update the prior information on the uncertain parameters of interest, here the fire load density and the opening factor. The updated distributions provide a better estimate of the fire exposure, the thermal and damage gradients, and the residual condition of the structure. The proof of concept and effectiveness of the proposed methodology are demonstrated through a case study. The results show that the methodology effectively incorporates the uncertainties and errors associated with the assessment techniques, producing more reliable estimates of fire severity. The method has the potential to improve the post-fire assessment process and provide more accurate information for the rehabilitation of concrete structures.
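In the same spirit, a grid-based sketch of updating one uncertain fire parameter from noisy depth-wise temperature evidence is given below. The linear forward model, the prior on fire load density, and the measurement error levels are invented stand-ins for the paper's thermal model and assessment techniques.

```python
# Sketch: grid-based Bayesian updating of an uncertain fire parameter
# from noisy post-fire measurements. The forward model mapping fire
# load density to maximum temperature at depth, the prior, and the
# technique-specific errors are all illustrative assumptions.
import numpy as np

def t_max(q, depth_mm):                    # toy forward model (assumed)
    return 20 + 1.2 * q - 8.0 * depth_mm

q_grid = np.linspace(100, 1200, 1101)      # fire load density (MJ/m^2)
log_prior = -0.5 * ((q_grid - 600) / 250) ** 2   # N(600, 250^2) prior

# measurements at two depths, each with its own error s.d.
obs = [(10.0, 520.0, 40.0),                # (depth mm, temp C, sigma)
       (30.0, 300.0, 60.0)]

log_post = log_prior.copy()
for depth, temp, sigma in obs:
    log_post += -0.5 * ((temp - t_max(q_grid, depth)) / sigma) ** 2

post = np.exp(log_post - log_post.max())
post /= post.sum()
mean = np.sum(q_grid * post)
sd = np.sqrt(np.sum((q_grid - mean) ** 2 * post))
print(f"posterior fire load density: {mean:.0f} +/- {sd:.0f} MJ/m^2")
```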