Catalogue Search | MBRL
Explore the vast range of titles available.
30,901 result(s) for "bayes"
The Theory That Would Not Die
by Sharon Bertsch McGrayne
in Bayesian statistical decision theory, Bayesian statistical decision theory -- History, History
2011
Bayes' rule appears to be a straightforward, one-line theorem: by updating our initial beliefs with objective new information, we get a new and improved belief. To its adherents, it is an elegant statement about learning from experience. To its opponents, it is subjectivity run amok.
In the first-ever account of Bayes' rule for general readers, Sharon Bertsch McGrayne explores this controversial theorem and the human obsessions surrounding it. She traces its discovery by an amateur mathematician in the 1740s through its development into roughly its modern form by French scientist Pierre Simon Laplace. She reveals why respected statisticians rendered it professionally taboo for 150 years -- at the same time that practitioners relied on it to solve crises involving great uncertainty and scanty information (Alan Turing's role in breaking Germany's Enigma code during World War II), and explains how the advent of off-the-shelf computer technology in the 1980s proved to be a game-changer. Today, Bayes' rule is used everywhere from DNA decoding to Homeland Security.
Drawing on primary source material and interviews with statisticians and other scientists, The Theory That Would Not Die is the riveting account of how a seemingly simple theorem ignited one of the greatest controversies of all time.
The theory that would not die : how Bayes' rule cracked the enigma code, hunted down Russian submarines, & emerged triumphant from two centuries of controversy
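The "one-line theorem" this blurb describes can be spelled out directly. A minimal Python sketch of updating a belief with new evidence via Bayes' rule (the prior and likelihood numbers below are purely illustrative, not from the book):

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H | E) from the prior P(H) and the likelihoods
    P(E | H) and P(E | not H), via Bayes' rule."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# "Updating our initial beliefs with objective new information":
# start from a 50% prior and fold in two pieces of evidence, each
# three times more likely under H than under not-H.
belief = 0.5
for _ in range(2):
    belief = bayes_update(belief, likelihood_h=0.6, likelihood_not_h=0.2)
# belief is now 0.9
```

Each observation moves the belief from 0.5 to 0.75 and then to 0.9, which is the "learning from experience" reading the adherents favour.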
Bayesian population analysis using WinBUGS : a hierarchical perspective
by Schaub, Michael, Beissinger, Steven R., Kéry, Marc
in Data processing, Population biology, Population biology -- Data processing
2012, 2011
Bayesian statistics has exploded into biology and its sub-disciplines, such as ecology, over the past decade. The free software program WinBUGS, and its open-source sister OpenBUGS, is currently the only flexible and general-purpose program available with which the average ecologist can conduct standard and non-standard Bayesian statistics. Comprehensive and richly commented examples illustrate a wide range of models that are most relevant to the research of a modern population ecologist. All WinBUGS/OpenBUGS analyses are completely integrated in the software R, and the book includes complete documentation of all R and WinBUGS code required to conduct the analyses, showing all the necessary steps from having the data in a text file out of Excel to interpreting and processing the output from WinBUGS in R.
Everything is predictable : how Bayes' remarkable theorem explains the world
by Chivers, Tom (Science writer), author
in Bayes, Thomas, -1761., Bayesian statistical decision theory., Forecasting -- Statistical methods.
2024
Thomas Bayes was an eighteenth-century Presbyterian minister and amateur mathematician whose obscure life belied the profound impact of his work. Like most research into probability at the time, his theorem was mainly seen as relevant to games of chance, like dice and cards. But its implications soon became clear, affecting fields as diverse as medicine, law and artificial intelligence. Bayes' theorem helps explain why highly accurate screening tests can lead to false positives, causing unnecessary anxiety for patients. A failure to account for it in court has put innocent people in jail. But its influence goes far beyond practical applications. Fusing biography, razor-sharp science communication and intellectual history, 'Everything Is Predictable' is a captivating tour of Bayes' theorem and its impact on modern life.
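The screening-test point in this blurb is a direct application of Bayes' theorem: when a condition is rare, even a very accurate test yields mostly false positives. A minimal sketch with made-up (but typical) numbers:

```python
# Why a highly accurate test can still mislead: the base rate dominates.
# All three numbers below are illustrative.
prevalence = 0.001   # 1 in 1,000 people has the condition
sensitivity = 0.99   # P(test positive | condition)
specificity = 0.99   # P(test negative | no condition)

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)

# Bayes' theorem: P(condition | positive test)
p_condition_given_pos = true_pos / (true_pos + false_pos)
# roughly 0.09: a positive result still means only about a 9% chance
```

A 99%-accurate test on a 1-in-1,000 condition leaves a positive patient with under a 10% chance of actually having it, which is exactly the "unnecessary anxiety" scenario the blurb mentions.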
Bayesian inference for psychology. Part II: Example applications with JASP
by Derks, Koen, Matzke, Dora, Marsman, Maarten
in Bayes Theorem, Bayesian analysis, Behavioral Science and Psychology
2018
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
Journal Article
Bayesian model selection for group studies
by Penny, Will D., Moran, Rosalyn J., Daunizeau, Jean
in Algorithms, Approximation, Bayes factor
2009
Bayesian model selection (BMS) is a powerful method for determining the most likely among a set of competing hypotheses about the mechanisms that generated observed data. BMS has recently found widespread application in neuroimaging, particularly in the context of dynamic causal modelling (DCM). However, so far, combining BMS results from several subjects has relied on simple (fixed effects) metrics, e.g. the group Bayes factor (GBF), that do not account for group heterogeneity or outliers. In this paper, we compare the GBF with two random effects methods for BMS at the between-subject or group level. These methods provide inference on model-space using a classical and Bayesian perspective respectively. First, a classical (frequentist) approach uses the log model evidence as a subject-specific summary statistic. This enables one to use analysis of variance to test for differences in log-evidences over models, relative to inter-subject differences. We then consider the same problem in Bayesian terms and describe a novel hierarchical model, which is optimised to furnish a probability density on the models themselves. This new variational Bayes method rests on treating the model as a random variable and estimating the parameters of a Dirichlet distribution which describes the probabilities for all models considered. These probabilities then define a multinomial distribution over model space, allowing one to compute how likely it is that a specific model generated the data of a randomly chosen subject as well as the exceedance probability of one model being more likely than any other model. Using empirical and synthetic data, we show that optimising a conditional density of the model probabilities, given the log-evidences for each model over subjects, is more informative and appropriate than both the GBF and frequentist tests of the log-evidences. 
In particular, we found that the hierarchical Bayesian approach is considerably more robust than either of the other approaches in the presence of outliers. We expect that this new random effects method will prove useful for a wide range of group studies, not only in the context of DCM, but also for other modelling endeavours, e.g. comparing different source reconstruction methods for EEG/MEG or selecting among competing computational models of learning and decision-making.
Journal Article
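The Dirichlet machinery this abstract describes is easy to sketch numerically: given fitted Dirichlet parameters over competing models (the alpha values below are invented for illustration, not taken from the paper), the expected model probabilities follow in closed form and the exceedance probabilities by Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Dirichlet parameters for three competing models, standing in
# for the output of the variational scheme the paper describes.
alpha = np.array([8.0, 2.0, 2.0])

# Expected probability that a randomly chosen subject's data were
# generated by each model (mean of the Dirichlet).
expected_p = alpha / alpha.sum()

# Exceedance probability: P(model k is more likely than any other),
# estimated by sampling model-probability vectors from the Dirichlet
# and counting how often each model comes out on top.
samples = rng.dirichlet(alpha, size=100_000)
exceedance = np.bincount(samples.argmax(axis=1), minlength=len(alpha)) / len(samples)
```

With these (made-up) parameters, model 1 has an expected probability of 2/3 but an exceedance probability well above 0.9, illustrating how the two summaries answer different questions about model space.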
Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications
by Verhagen, Josine, Morey, Richard D., Matzke, Dora
in Bayes Theorem, Bayesian analysis, Behavioral Science and Psychology
2018
Bayesian parameter estimation and Bayesian hypothesis testing present attractive alternatives to classical inference using confidence intervals and p values. In part I of this series we outline ten prominent advantages of the Bayesian approach. Many of these advantages translate to concrete opportunities for pragmatic researchers. For instance, Bayesian hypothesis testing allows researchers to quantify evidence and monitor its progression as data come in, without needing to know the intention with which the data were collected. We end by countering several objections to Bayesian hypothesis testing. Part II of this series discusses JASP, a free and open source software program that makes it easy to conduct Bayesian estimation and testing for a range of popular statistical scenarios (Wagenmakers et al. this issue).
Journal Article
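The "monitor evidence as data come in" advantage this abstract highlights can be illustrated with a Savage-Dickey Bayes factor for a binomial rate (a standard textbook device, not this paper's own example): under a uniform Beta(1, 1) prior, BF01 is simply the posterior density at the null value divided by the prior density there, and it can be recomputed after every observation without any correction for optional stopping:

```python
from math import comb

def bf01_binomial(successes, n, theta0=0.5):
    """Savage-Dickey Bayes factor for H0: rate == theta0 versus
    H1: rate ~ uniform Beta(1, 1), after `successes` in `n` trials.
    BF01 > 1 favours the null; BF01 < 1 favours the alternative."""
    k = successes
    # Posterior is Beta(k + 1, n - k + 1); its density at theta0, divided
    # by the prior density there (1 for a uniform prior), is BF01.
    return (n + 1) * comb(n, k) * theta0**k * (1 - theta0)**(n - k)

# Evidence can be inspected as data accumulate: as a 50/50 split of
# successes persists, evidence for the null rate of 0.5 keeps growing.
running_bf = [bf01_binomial(k, n) for n, k in [(10, 5), (40, 20), (100, 50)]]
# roughly 2.7, 5.1, 8.0
```

This is the Bayesian counterpart to the point about intentions: the Bayes factor depends only on the observed data, so peeking at it after each trial is legitimate.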
Using Bayes factor hypothesis testing in neuroscience to establish evidence of absence
by Gazzola, Valeria, Keysers, Christian, Wagenmakers, Eric-Jan
in Bayesian analysis, Hypotheses, Hypothesis testing
2020
Most neuroscientists would agree that for brain research to progress, we have to know which experimental manipulations have no effect as much as we must identify those that do have an effect. The dominant statistical approaches used in neuroscience rely on P values and can establish the latter but not the former. This makes non-significant findings difficult to interpret: do they support the null hypothesis or are they simply not informative? Here we show how Bayesian hypothesis testing can be used in neuroscience studies to establish both whether there is evidence of absence and whether there is absence of evidence. Through simple tutorial-style examples of Bayesian t-tests and ANOVA using the open-source project JASP, this article aims to empower neuroscientists to use this approach to provide compelling and rigorous evidence for the absence of an effect.

Keysers et al. show why P values do not differentiate inconclusive null findings from those that provide important evidence for the absence of an effect. They provide a tutorial on how to use Bayesian hypothesis testing to overcome this issue.
Journal Article
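One simple way to turn a familiar test statistic into a Bayes factor (not the JASP procedure the abstract uses, but the BIC approximation of Wagenmakers, 2007) is, for a one-sample t-test with n observations, BF01 = sqrt(n) * (1 + t^2/(n - 1))^(-n/2), where BF01 above roughly 3 is conventionally read as evidence of absence:

```python
from math import sqrt

def bic_bf01(t, n):
    """BIC-approximation Bayes factor for a one-sample t-test
    (Wagenmakers, 2007). BF01 > 1 is evidence for the null,
    BF01 < 1 is evidence against it."""
    nu = n - 1
    return sqrt(n) * (1 + t * t / nu) ** (-n / 2)

# A non-significant t can actively support the null ...
bf_small_t = bic_bf01(t=0.2, n=50)   # about 6.9: moderate evidence for H0
# ... while a large t counts against it (illustrative numbers).
bf_large_t = bic_bf01(t=4.0, n=50)
```

This distinguishes "evidence of absence" (BF01 clearly above 1) from "absence of evidence" (BF01 near 1), which is exactly the interpretive gap the article says P values cannot close.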
Four reasons to prefer Bayesian analyses over significance testing
by McLatchie, Neil, Dienes, Zoltan
in Bayes Theorem, Bayesian analysis, Behavioral Science and Psychology
2018
Inference using significance testing and Bayes factors is compared and contrasted in five case studies based on real research. The first study illustrates that the methods will often agree, both in motivating researchers to conclude that H1 is supported better than H0, and the other way round, that H0 is better supported than H1. The next four, however, show that the methods will also often disagree. In these cases, the aim of the paper will be to motivate the sensible evidential conclusion, and then see which approach matches those intuitions. Specifically, it is shown that a high-powered non-significant result is consistent with no evidence for H0 over H1 worth mentioning, which a Bayes factor can show, and, conversely, that a low-powered non-significant result is consistent with substantial evidence for H0 over H1, again indicated by Bayesian analyses. The fourth study illustrates that a high-powered significant result may not amount to any evidence for H1 over H0, matching the Bayesian conclusion. Finally, the fifth study illustrates that different theories can be evidentially supported to different degrees by the same data, a fact that P-values cannot reflect but Bayes factors can. It is argued that appropriate conclusions match the Bayesian inferences, but not those based on significance testing, where they disagree.
Journal Article