Catalogue Search | MBRL
101,878 result(s) for "Inference"
The book of why : the new science of cause and effect
"Everyone has heard the claim, 'Correlation does not imply causation.' What might sound like a reasonable dictum metastasized in the twentieth century into one of science's biggest obstacles, as a legion of researchers became unwilling to make the claim that one thing could cause another. Even two decades ago, asking a statistician a question like 'Was it the aspirin that stopped my headache?' would have been like asking if he believed in voodoo, or at best a topic for conversation at a cocktail party rather than a legitimate target of scientific inquiry. Scientists were allowed to posit only that one thing was associated with another. This all changed with Judea Pearl." -- Provided by publisher.
Sensitivity analysis of individual treatment effects
2023
We propose a model-free framework for sensitivity analysis of individual treatment effects (ITEs), building upon ideas from conformal inference. For any unit, our procedure reports the Γ-value, a number which quantifies the minimum strength of confounding needed to explain away the evidence for ITE. Our approach rests on the reliable predictive inference of counterfactuals and ITEs in situations where the training data are confounded. Under the marginal sensitivity model of [Z. Tan, J. Am. Stat. Assoc. 101, 1619-1637 (2006)], we characterize the shift between the distribution of the observations and that of the counterfactuals. We first develop a general method for predictive inference of test samples from a shifted distribution; we then leverage this to construct covariate-dependent prediction sets for counterfactuals. No matter the value of the shift, these prediction sets achieve marginal coverage if the propensity score is known exactly, and approximate marginal coverage if it is estimated. We also describe a distinct procedure that attains coverage conditional on the training data. In the latter case, we prove a sharpness result showing that for certain classes of prediction problems, the prediction intervals cannot possibly be tightened. We verify the validity and performance of the methods via simulation studies and apply them to analyze real datasets.
Journal Article
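The abstract above builds on conformal inference. As a rough, self-contained illustration of the underlying split-conformal idea (not the authors' method or code; the data, model, and numbers here are invented for the sketch):

```python
import numpy as np

# Illustrative split-conformal prediction interval (assumed toy setup).
rng = np.random.default_rng(0)

# Simulated data: y = 2x + noise.
x = rng.uniform(0, 1, 400)
y = 2 * x + rng.normal(0, 0.3, 400)

# Split into a fitting half and a calibration half.
x_fit, y_fit = x[:200], y[:200]
x_cal, y_cal = x[200:], y[200:]

# Fit a least-squares line on the fitting half only.
slope, intercept = np.polyfit(x_fit, y_fit, 1)

# Absolute residuals on the calibration half set the interval width.
resid = np.abs(y_cal - (slope * x_cal + intercept))
alpha = 0.1
level = np.ceil((1 - alpha) * (len(resid) + 1)) / len(resid)
q = np.quantile(resid, level)

# Roughly 90% prediction interval for a new point x0 = 0.5.
x0 = 0.5
lo, hi = slope * x0 + intercept - q, slope * x0 + intercept + q
```

The calibration residuals are exchangeable with a new point's residual, which is what delivers the marginal coverage guarantee the abstract refers to.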
The frontier of simulation-based inference
by Cranmer, Kyle; Louppe, Gilles; Brehmer, Johann
in Approximate Bayesian Computation; COLLOQUIUM PAPERS; Computer science
2020
Many domains of science have developed complex simulations to describe phenomena of interest. While these simulations provide high-fidelity models, they are poorly suited for inference and lead to challenging inverse problems. We review the rapidly developing field of simulation-based inference and identify the forces giving additional momentum to the field. Finally, we describe how the frontier is expanding so that a broad audience can appreciate the profound influence these developments may have on science.
Journal Article
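One classical form of simulation-based inference, and one of this record's subject headings, is Approximate Bayesian Computation. A minimal rejection-ABC sketch (purely illustrative names and numbers; not the authors' code):

```python
import numpy as np

# Rejection ABC: keep prior draws whose simulated summary statistic
# lands close to the observed one. Toy Gaussian-mean problem.
rng = np.random.default_rng(1)

# "Observed" data from a Gaussian with unknown mean (truth: 3.0).
observed = rng.normal(3.0, 1.0, 100)
obs_mean = observed.mean()

prior_draws = rng.uniform(0, 6, 20000)  # uniform prior on the mean
accepted = []
for theta in prior_draws:
    sim = rng.normal(theta, 1.0, 100)   # run the simulator
    if abs(sim.mean() - obs_mean) < 0.05:
        accepted.append(theta)          # approximate posterior sample

posterior_mean = np.mean(accepted)
```

The accepted draws approximate the posterior; shrinking the tolerance tightens the approximation at the cost of more simulator calls, which is the inefficiency the reviewed neural methods aim to overcome.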
Determinantal point process models and statistical inference
by Rubak, Ege; Møller, Jesper; Lavancier, Frédéric
in Analysis; Approximation; computer software
2015
Statistical models and methods for determinantal point processes (DPPs) seem largely unexplored. We demonstrate that DPPs provide useful models for the description of spatial point pattern data sets where nearby points repel each other. Such data are usually modelled by Gibbs point processes, where the likelihood and moment expressions are intractable and simulations are time consuming. We exploit the appealing probabilistic properties of DPPs to develop parametric models, where the likelihood and moment expressions can be easily evaluated and realizations can be quickly simulated. We discuss how statistical inference is conducted by using the likelihood or moment properties of DPP models, and we provide freely available software for simulation and statistical inference.
Journal Article
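The tractable likelihood the abstract highlights can be seen on a tiny example. For an L-ensemble DPP, the probability of realising exactly the subset S is det(L_S)/det(L + I); a small sketch (illustrative kernel and points, not the paper's software):

```python
import numpy as np
from itertools import combinations

# Three points on a line; a Gaussian similarity kernel makes nearby
# points similar, so the DPP penalises their co-occurrence.
pts = np.array([0.0, 0.5, 1.0])
L = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / 0.1)

norm = np.linalg.det(L + np.eye(3))

def prob(S):
    """Probability the DPP realises exactly the index subset S."""
    idx = np.array(S, dtype=int)
    if len(idx) == 0:
        return 1.0 / norm
    return np.linalg.det(L[np.ix_(idx, idx)]) / norm

# The identity sum_S det(L_S) = det(L + I) makes these sum to one.
total = sum(prob(S) for r in range(4) for S in combinations(range(3), r))
```

Repulsion shows directly: the well-separated pair {0, 2} is more probable than the nearby pair {0, 1}, since the off-diagonal similarity shrinks the determinant.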
Observation and experiment : an introduction to causal inference
We hear that a glass of red wine prolongs life, that alcohol is a carcinogen, that pregnant women should drink not a drop of alcohol. Major medical journals first claimed that hormone replacement therapy reduces the risk of heart disease, then reversed themselves and said it increases the risk of heart disease. What are the effects caused by consuming alcohol or by receiving hormone replacement therapy? These are causal questions, questions about the effects caused by treatments, policies or preventable exposures. Some causal questions can be studied in randomized trials, in which a coin is flipped to decide the treatment for the next experimental subject. Because randomized trials are not always practical, nor always ethical, many causal questions are investigated in non-randomized observational studies. The reversal of opinion about hormone replacement therapy occurred when a randomized clinical trial contradicted a series of earlier observational studies. Using minimal mathematics -- high school algebra and coin flips -- and numerous examples, Observation and Experiment explains the key concepts and methods of causal inference. Examples of randomized experiments and observational studies are drawn from clinical medicine, economics, public health and epidemiology, clinical psychology and psychiatry. -- Provided by publisher
What Is Meant by "Missing at Random"?
by Carlin, John; Seaman, Shaun; Galati, John
in Bayesian inference; Conditional probabilities; direct-likelihood inference
2013
The concept of missing at random is central in the literature on statistical analysis with missing data. In general, inference using incomplete data should be based not only on observed data values but should also take account of the pattern of missing values. However, it is often said that if data are missing at random, valid inference using likelihood approaches (including Bayesian) can be obtained ignoring the missingness mechanism. Unfortunately, the term "missing at random" has been used inconsistently and not always clearly; there has also been a lack of clarity around the meaning of "valid inference using likelihood". These issues have created potential for confusion about the exact conditions under which the missingness mechanism can be ignored, and perhaps fed confusion around the meaning of "analysis ignoring the missingness mechanism". Here we provide standardised precise definitions of "missing at random" and "missing completely at random", in order to promote unification of the theory. Using these definitions we clarify the conditions that suffice for "valid inference" to be obtained under a variety of inferential paradigms.
Journal Article
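The practical stakes of the MCAR/MAR distinction the abstract discusses can be shown in a toy simulation (all names and numbers here are invented for illustration): a complete-case mean stays unbiased when missingness is independent of everything (MCAR), but is biased when missingness depends on an observed covariate (MAR).

```python
import numpy as np

# Toy contrast of MCAR vs MAR missingness.
rng = np.random.default_rng(2)
n = 50000
x = rng.normal(0, 1, n)          # covariate, always observed
y = x + rng.normal(0, 1, n)      # outcome, subject to missingness

# MCAR: every unit missing with the same probability.
miss_mcar = rng.uniform(size=n) < 0.3

# MAR: missingness depends only on the observed covariate x.
miss_mar = rng.uniform(size=n) < np.where(x > 0, 0.5, 0.1)

# Complete-case means over the observed (non-missing) units.
mean_full = y.mean()
mean_cc_mcar = y[~miss_mcar].mean()
mean_cc_mar = y[~miss_mar].mean()
```

Under MAR the observed units over-represent x < 0, dragging the complete-case mean of y downward; likelihood-based analyses that condition on x, however, can still ignore the mechanism, which is the point the paper makes precise.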
Machine learning and deep learning—A review for ecologists
2023
The popularity of machine learning (ML), deep learning (DL) and artificial intelligence (AI) has risen sharply in recent years. Despite this spike in popularity, the inner workings of ML and DL algorithms are often perceived as opaque, and their relationship to classical data analysis tools remains debated. Although it is often assumed that ML and DL excel primarily at making predictions, ML and DL can also be used for analytical tasks traditionally addressed with statistical models. Moreover, most recent discussions and reviews on ML focus mainly on DL, failing to synthesise the wealth of ML algorithms with different advantages and general principles. Here, we provide a comprehensive overview of the field of ML and DL, starting by summarizing its historical developments, existing algorithm families, differences to traditional statistical tools, and universal ML principles. We then discuss why and when ML and DL models excel at prediction tasks and where they could offer alternatives to traditional statistical methods for inference, highlighting current and emerging applications for ecological problems. Finally, we summarize emerging trends such as scientific and causal ML, explainable AI, and responsible AI that may significantly impact ecological data analysis in the future. We conclude that ML and DL are powerful new tools for predictive modelling and data analysis. The superior performance of ML and DL algorithms compared to statistical models can be explained by their higher flexibility and automatic data‐dependent complexity optimization. However, their use for causal inference is still disputed as the focus of ML and DL methods on predictions creates challenges for the interpretation of these models. Nevertheless, we expect ML and DL to become an indispensable tool in ecology and evolution, comparable to other traditional statistical tools.
Journal Article