Search Results
90 result(s) for "Bayes, Thomas, -1761."
The Theory That Would Not Die
Bayes' rule appears to be a straightforward, one-line theorem: by updating our initial beliefs with objective new information, we get a new and improved belief. To its adherents, it is an elegant statement about learning from experience. To its opponents, it is subjectivity run amok. In the first-ever account of Bayes' rule for general readers, Sharon Bertsch McGrayne explores this controversial theorem and the human obsessions surrounding it. She traces its discovery by an amateur mathematician in the 1740s through its development into roughly its modern form by French scientist Pierre Simon Laplace. She reveals why respected statisticians rendered it professionally taboo for 150 years, at the same time that practitioners relied on it to solve crises involving great uncertainty and scanty information (Alan Turing's role in breaking Germany's Enigma code during World War II), and explains how the advent of off-the-shelf computer technology in the 1980s proved to be a game-changer. Today, Bayes' rule is used everywhere from DNA decoding to Homeland Security. Drawing on primary source material and interviews with statisticians and other scientists, The Theory That Would Not Die is the riveting account of how a seemingly simple theorem ignited one of the greatest controversies of all time.
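The one-line update the blurb describes can be written out and checked numerically. The sketch below is illustrative only; the prior and likelihood values are made up, not taken from the book.

```python
# Illustrative Bayes' rule update: prior belief + new evidence -> revised belief.
# All numbers here are invented for demonstration.
prior = 0.01              # initial belief P(H)
p_e_given_h = 0.95        # P(E | H): how expected the evidence is if H is true
p_e_given_not_h = 0.05    # P(E | not H)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h   # total probability of E
posterior = prior * p_e_given_h / p_e                        # Bayes' rule: P(H | E)
print(f"P(H | E) = {posterior:.3f}")                         # ~0.161, the updated belief
```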
Formal models of source reliability
The paper introduces, compares and contrasts formal models of source reliability proposed in the epistemology literature, in particular the prominent models of Bovens and Hartmann (Bayesian epistemology, Oxford University Press, Oxford, 2003) and Olsson (Episteme 8(02):127–143, 2011). All are Bayesian models seeking to provide normative guidance, yet they differ subtly in assumptions and resulting behavior. Models are evaluated both on conceptual grounds and through simulations, and the relationship between models is clarified. The simulations both show surprising similarities and highlight relevant differences between these models. Most importantly, however, our evaluations reveal that important normative concerns arguably remain unresolved. The philosophical implications of this for testimony are discussed.
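For orientation, the kind of model being compared treats a source's reliability as itself uncertain, so a report updates both the hypothesis and the estimate of the source. The sketch below is a generic joint update with invented priors and likelihoods; it is not the Bovens and Hartmann or Olsson model.

```python
# Minimal sketch of jointly updating a hypothesis H and source reliability R
# after a positive report; priors and report likelihoods are illustrative assumptions.
p_h, p_r = 0.5, 0.7   # prior on the hypothesis and on the source being reliable
p_report = {           # P(positive report | H, R)
    (True, True): 0.9,   # reliable source, H true
    (True, False): 0.5,  # unreliable source reports like a coin flip
    (False, True): 0.1,  # reliable source, H false
    (False, False): 0.5,
}

joint = {(h, r): (p_h if h else 1 - p_h) * (p_r if r else 1 - p_r) * p_report[(h, r)]
         for h in (True, False) for r in (True, False)}
total = sum(joint.values())
post_h = sum(v for (h, r), v in joint.items() if h) / total
post_r = sum(v for (h, r), v in joint.items() if r) / total
print(f"P(H | report) = {post_h:.3f}, P(reliable | report) = {post_r:.3f}")
```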
Bayesian Updating When What You Learn Might Be False (Forthcoming in Erkenntnis)
Rescorla (Erkenntnis, 2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever I become certain of something, it is true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla’s new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla’s version follows as a special case of that. Second, I want to show how to generalise R. A. Briggs and Richard Pettigrew’s Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs and Pettigrew in Noûs, 2018). In both cases, these arguments proceed by first establishing a very general reflection principle.
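For readers unfamiliar with the rule at issue: Bayesian Conditionalization says that, after learning E (and nothing stronger), your new credence in H should equal your old conditional credence P(H | E). The numbers in the sketch below are illustrative; the paper's contribution concerns what justifies this rule when the learned proposition might be false, not the rule itself.

```python
# Bayesian Conditionalization: the post-learning credence in H equals the
# prior conditional credence P(H | E) = P(H & E) / P(E). Numbers are illustrative.
def conditionalize(p_h_and_e: float, p_e: float) -> float:
    """Return the new credence in H after learning E."""
    return p_h_and_e / p_e

print(conditionalize(p_h_and_e=0.2, p_e=0.5))   # new credence in H: 0.4
```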
A Review on Data-Driven Quality Prediction in the Production Process with Machine Learning for Industry 4.0
The quality-control process in manufacturing must ensure the product is free of defects and performs according to the customer’s expectations. Maintaining the quality of a firm’s products at the highest level is very important for keeping an edge over the competition. To maintain and enhance the quality of their products, manufacturers invest a lot of resources in quality control and quality assurance. On the assembly line, parts arrive at constant intervals for assembly. Quality criteria must first be met before parts are sent to the assembly line, where parts and subparts are assembled into the final product. Once the product has been assembled, it is again inspected and tested before it is delivered to the customer. Because manufacturers rely mostly on visual quality inspection, bottlenecks can arise before and after assembly. The manufacturer may suffer a loss if the assembly line is slowed down by these bottlenecks. To improve quality, state-of-the-art sensors are being used to replace visual inspections, and machine learning is used to help determine which parts will fail. This paper presents a review of quality assessment in various production processes using machine learning techniques, along with a summary of the four industrial revolutions in manufacturing, highlighting the need to detect anomalies on assembly lines, the need to detect assembly-line features, the use of machine learning algorithms in manufacturing, the research challenges, the computing paradigms, and the use of state-of-the-art sensors in Industry 4.0.
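As a generic illustration of the data-driven approach the review surveys (not a system described in it), a standard classifier can be trained on sensor features to flag parts likely to fail inspection. The data below is synthetic and the model choice is an assumption.

```python
# Sketch: predict pass/fail quality from sensor features with a standard classifier.
# The data is synthetic; a real deployment would use assembly-line sensor readings
# and labelled inspection outcomes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                   # 5 sensor features per part
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)  # 1 = defect

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```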
Northern bobwhite select for shrubby thickets interspersed in grasslands during fall and winter
Resource selection is a key component in understanding the ecological processes underlying population dynamics, particularly for species such as northern bobwhite (Colinus virginianus), which are declining across their range in North America. There is a growing body of literature quantifying breeding season resource selection in bobwhite; however, winter information is particularly sparse despite it being a season of substantial mortality. Information regarding winter resource selection is necessary to quantify the extent to which resource requirements are driving population change. We modeled bobwhite fall and winter resource selection as a function of vegetation structure, composition, and management from traditionally (intensively) managed sites and remnant (extensively managed) grassland sites in southwest Missouri using multinomial logit discrete choice models in a Bayesian framework. We captured 158 bobwhite from 67 unique coveys and attached transmitters to 119 individuals. We created 671 choice sets comprised of 1 used location and 3 available locations. Bobwhite selected for locations which were closer to trees during the winter; the relative probability of selection decreased from 0.45 (85% Credible Interval [CRI]: 0.17–0.74) to 0.00 (85% CRI: 0.00–0.002) as distance to trees ranged from 0 to 313 m. The relative probability of selection increased from near 0 (85% CRI: 0.00–0.01) to 0.33 (85% CRI: 0.09–0.56) and from near 0 (85% CRI: 0.00–0.00) to 0.51 (85% CRI: 0.36–0.71) as visual obstruction increased from 0 to 100% during fall and winter, respectively. Bobwhite also selected locations with more woody stems; the relative probability of selection increased from near 0.00 (85% CRI: 0.00–0.002) to 0.30 (85% CRI: 0.17–0.46) and near 0.00 (85% CRI: 0.00–0.001) to 0.35 (85% CRI: 0.22–0.55) as stem count ranged from 0 to 1000 stems in fall and winter, respectively. The relative probability of selection also decreased from 0.35 (85% CRI: 0.20–0.54) to nearly 0 (85% CRI: 0.00–0.001) as percent grass varied from 0 to 100% in fall. We suggest that dense shrub cover in close proximity to native grasslands is an important component of fall and winter cover given bobwhite selection of shrub cover and previously reported survival benefits in fall and winter.
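The discrete choice structure reported here pairs each used location with its available alternatives, and a conditional (multinomial) logit assigns each location in a choice set a relative selection probability from its covariates. The coefficients and covariate values below are invented for illustration; this is not the authors' fitted model.

```python
# Conditional-logit probabilities for one choice set: 1 used location plus
# 3 available locations, each described by covariates (distance to trees,
# visual obstruction, stem count). All values are illustrative.
import numpy as np

beta = np.array([-0.01, 2.0, 0.001])      # hypothetical selection coefficients
X = np.array([                            # rows: used location + 3 available locations
    [ 40, 0.8, 600],                      # used location
    [250, 0.2,  50],
    [120, 0.4, 100],
    [300, 0.1,  20],
])
utilities = X @ beta
probs = np.exp(utilities - utilities.max())
probs /= probs.sum()
print("relative probability of selecting the used location:", probs[0])
# In the paper these probabilities are estimated in a Bayesian framework,
# which is what yields the credible intervals reported above.
```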
Bayesian defeat of certainties
When P(E) > 0, conditional probabilities P(H | E) are given by the ratio formula. An agent engages in ratio conditionalization when she updates her credences using conditional probabilities dictated by the ratio formula. Ratio conditionalization cannot eradicate certainties, including certainties gained through prior exercises of ratio conditionalization. An agent who updates her credences only through ratio conditionalization risks permanent certainty in propositions against which she has overwhelming evidence. To avoid this undesirable consequence, I argue that we should supplement ratio conditionalization with Kolmogorov conditionalization, a strategy for updating credences based on propositions E such that P(E) = 0. Kolmogorov conditionalization can eradicate certainties, including certainties gained through prior exercises of conditionalization. Adducing general theorems and detailed examples, I show that Kolmogorov conditionalization helps us model epistemic defeat across a wide range of circumstances.
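For reference, the ratio formula the abstract refers to is the standard definition of conditional probability:

```latex
P(H \mid E) \;=\; \frac{P(H \cap E)}{P(E)}, \qquad \text{defined only when } P(E) > 0 .
```

When P(E) = 0 the ratio is undefined, which is why the abstract turns to Kolmogorov conditionalization, roughly, conditioning via Kolmogorov's theory of conditional expectation relative to a sub-σ-algebra, to update on probability-zero propositions.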
Bayesian demography 250 years after Bayes
Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow additional (prior) information to be included alongside the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms.
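A minimal example of the "prior information alongside the data sample" point (ours, not the paper's): a conjugate Beta-Binomial update of an uncertain proportion, where the posterior blends the prior with a small new sample.

```python
# Toy Beta-Binomial update: combine prior information about a proportion
# (e.g., a rate suggested by earlier data) with a small new sample.
# Prior and data values are invented for illustration.
prior_a, prior_b = 8, 92          # prior roughly centered on 8%
successes, n = 3, 20              # small new data sample (15% observed)

post_a, post_b = prior_a + successes, prior_b + (n - successes)
post_mean = post_a / (post_a + post_b)
print(f"posterior mean: {post_mean:.3f}")   # ~0.092, pulled between prior and data
```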
Polarization in groups of Bayesian agents
In this paper we present the results of a simulation study of credence developments in groups of communicating Bayesian agents, as they update their beliefs about a given proposition p. Based on the empirical literature, one would assume that these groups of rational agents would converge on a view over time, or at least that they would not polarize. This paper presents and discusses surprising evidence that this is not true. Our simulation study shows that these groups of Bayesian agents exhibit group polarization behavior under a broad range of circumstances. This is, we think, an unexpected result that raises deeper questions about whether the kind of polarization in question is irrational. If one accepts Bayesian agency as the hallmark of epistemic rationality, then one should infer that the polarization we find is also rational. On the other hand, if we are inclined to think that there is something epistemically irrational about group polarization, then something must be off in the model employed in our simulation study. We discuss several possible interfering factors, including how epistemic trust is defined in the model. Ultimately, we propose that the notion of Bayesian agency is missing something in general, namely the ability to respond to higher-order evidence.
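A stripped-down sketch of the kind of simulation described, using our own simplified trust rule and parameters rather than the paper's model: agents Bayes-update on noisy private signals about p and then partially adopt the credences of peers they trust (here, peers whose credence is close to their own).

```python
# Sketch of communicating Bayesian-style agents updating a credence in p.
# The trust rule and parameters are our own simplifications, not the paper's model.
import random

random.seed(1)
N_AGENTS, ROUNDS = 10, 50
TRUE_P = True
RELIABILITY = 0.7           # chance a private signal matches the truth

credences = [random.uniform(0.3, 0.7) for _ in range(N_AGENTS)]

def update_on_signal(c, signal):
    """Bayes-update a credence in p on a binary signal of known reliability."""
    like_p = RELIABILITY if signal else 1 - RELIABILITY
    like_not_p = 1 - like_p
    return c * like_p / (c * like_p + (1 - c) * like_not_p)

for _ in range(ROUNDS):
    # each agent receives a noisy private signal about p and updates on it
    for i in range(N_AGENTS):
        signal = (random.random() < RELIABILITY) == TRUE_P
        credences[i] = update_on_signal(credences[i], signal)
    # each agent then partially adopts the reports of agents it "trusts",
    # i.e., those whose credence is close to its own (a crude trust rule)
    new = credences[:]
    for i in range(N_AGENTS):
        peers = [c for c in credences if abs(c - credences[i]) < 0.2]
        new[i] = 0.5 * credences[i] + 0.5 * (sum(peers) / len(peers))
    credences = new

print([round(c, 2) for c in credences])
# With these parameters the group tends to converge; the paper reports that,
# under its trust dynamics, such groups can instead polarize.
```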
Child Abuse, Misdiagnosed by an Expertise Center—Part II—Misuse of Bayes’ Theorem
A newborn girl had, from two weeks on, small bruises on varying body locations, but not on her chest. Her Armenian grandmother bruised easily, too. Her mother was diagnosed with hypermobility-type Ehlers-Danlos Syndrome (hEDS), an autosomal dominant connective tissue disorder with a 50% inheritance probability. Referral to the “Dutch Expertise Center for Child Abuse”, located at a University Medical Center, resulted (prior to consultation) in a suspicion of physical abuse. Protocol-based skeletal X-rays showed three healed, asymptomatic rib fractures. A protocol-based Bayesian likelihood ratio guesstimate of 10–100 was erroneously used to suggest that a non-accidental cause was 10–100 times likelier than an accidental one. Foster care placement followed, even in a secret home, where she also bruised, suggesting hEDS inheritance. The correct non-accidental/accidental Bayes’ probability of symptoms is (likelihood ratio) × (physical abuse incidence). From the literature, we derived an infant abuse incidence between ≈0.0009 and ≈0.0026 and a likelihood ratio of <5 for bruises. For rib fractures, we used a likelihood ratio of zero, arguing their cause was birth trauma from the extra delivery pressure on the chest, combined with fragile bones as the daughter of an hEDS mother. We thus derived a negligible abuse/accidental probability between <5 × 0.0009 (< 0.005) and <5 × 0.0026 (< 0.013). The small abuse incidence implies that correctly using Bayes’ theorem will also miss true infant physical abuse cases. Curiously, because likelihood ratios assess how much more often symptoms develop if abuse did occur versus non-abuse, Bayes’ theorem then implies a 100% infant abuse incidence (unwittingly) used by LECK. In conclusion, probabilities should never replace differential diagnostic procedures, the accepted medical method of care. It is well known from the literature, and supported by the present case, that (child abuse pediatrics) physicians, child protection workers, and judges are unlikely to understand Bayesian statistics. Its use without statistics consultation should therefore not have occurred. Thus, Bayesian statistics, and certainly (misused) likelihood ratios, should never be applied in cases of suspected physical child abuse. Finally, parental innocence follows from clarifying what could have caused the girl’s bruises (inherited hEDS) and rib fractures (birth trauma from fragile bones).
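The arithmetic behind the abstract's bound can be checked directly: when the incidence is small, posterior odds ≈ likelihood ratio × incidence, so an LR below 5 combined with an incidence between ≈0.0009 and ≈0.0026 keeps the abuse probability small. The exact odds form below reproduces the abstract's < 0.005 and < 0.013 figures.

```python
# Check of the abstract's bound: with a likelihood ratio below 5 for bruising
# and an infant physical-abuse incidence between about 0.0009 and 0.0026,
# the implied probability of abuse stays small.
likelihood_ratio = 5            # upper bound quoted for bruises
for incidence in (0.0009, 0.0026):
    prior_odds = incidence / (1 - incidence)
    posterior_odds = likelihood_ratio * prior_odds
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"incidence {incidence}: posterior < {posterior:.4f}")
# roughly < 0.0045 and < 0.013, matching the abstract's < 0.005 and < 0.013
```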
On the Bayesian Estimation of Synthesized Randomized Response Techniques for Obtaining Sensitive Information
The reduction of response bias in survey research is crucial for ensuring that the collected data accurately represent the target population. In this study, Bayesian Estimation of the Synthesized Randomized Response Technique (BESRRT) estimators are proposed as an effective method for minimizing response bias. The BESRRT estimators are expressed using different priors, such as the Kumaraswamy, Generalized Beta, and Beta‐Nakagami distributions. The study employs numerical data investigation and preanalyzed data to compare the performance of the proposed estimators with other conventional models and to assess the efficiency of the proposed technique. The results indicate that the BESRRT estimators, particularly those based on the Beta‐Nakagami distribution prior, outperform other estimators and can potentially improve the accuracy of survey data for informed decision‐making. Consequently, the study concludes that the proposed method is more effective in reducing response bias in surveys involving sensitive information.
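As a generic sketch of what Bayesian estimation of a randomized response design looks like (Warner's classic scheme with a flat Beta prior and a grid posterior; this is not the paper's synthesized technique nor its Kumaraswamy, Generalized Beta, or Beta-Nakagami priors):

```python
# Generic sketch: Bayesian estimation of a sensitive proportion pi under
# Warner's randomized response design, with a Beta prior and a grid posterior.
# Illustrative only; this is not the paper's BESRRT estimator.
import numpy as np
from scipy.stats import beta, binom

P_TRUTH = 0.7          # probability the randomizer asks the sensitive question
yes, n = 130, 400      # observed "yes" answers out of n respondents (synthetic)

grid = np.linspace(0.001, 0.999, 999)
# probability of a "yes" answer given pi, under Warner's design
p_yes = P_TRUTH * grid + (1 - P_TRUTH) * (1 - grid)
posterior = beta.pdf(grid, 1, 1) * binom.pmf(yes, n, p_yes)   # flat Beta(1,1) prior
posterior /= posterior.sum()
print("posterior mean of pi:", float(np.sum(grid * posterior)))
```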