Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
307,835 result(s) for "treatment effect"
The DOSE effect : optimize your brain and body by boosting your dopamine, oxytocin, serotonin, and endorphins
by Power, Tj, author; Robinson, Chris, illustrator; Power, Thomas Jefferson, author
in Brain chemistry Popular works.; Neurotransmitters Popular works.; Dopamine Physiological effect.
2025
"A neuroscientist's powerful framework for enhancing quality of life through the regulation of four key hormones: Dopamine, Oxytocin, Serotonin, and Endorphins (DOSE). You have everything you need to optimize your brain chemistry--this groundbreaking book shows you how" -- Publisher description.
Decomposing Treatment Effect Variation
2019
Understanding and characterizing treatment effect variation in randomized experiments has become essential for going beyond the "black box" of the average treatment effect. Nonetheless, traditional statistical approaches often ignore or assume away such variation. In the context of randomized experiments, this article proposes a framework for decomposing overall treatment effect variation into a systematic component explained by observed covariates and a remaining idiosyncratic component. Our framework is fully randomization-based, with estimates of treatment effect variation that are entirely justified by the randomization itself. Our framework can also account for noncompliance, which is an important practical complication. We make several contributions. First, we show that randomization-based estimates of systematic variation are very similar in form to estimates from fully interacted linear regression and two-stage least squares. Second, we use these estimators to develop an omnibus test for systematic treatment effect variation, both with and without noncompliance. Third, we propose an R²-like measure of treatment effect variation explained by covariates and, when applicable, noncompliance. Finally, we assess these methods via simulation studies and apply them to the Head Start Impact Study, a large-scale randomized experiment. Supplementary materials for this article are available online.
Journal Article
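The R²-like measure described in this abstract can be illustrated with a simplified sketch. This is not the authors' randomization-based estimator; it uses an ordinary fully interacted regression on simulated data (all values below are assumptions for illustration) to show the idea of splitting effect variation into a covariate-explained share and an idiosyncratic remainder:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Simulated randomized experiment: one covariate x, random assignment z.
x = rng.normal(size=n)
z = rng.integers(0, 2, size=n)

# Unit-level treatment effect: systematic part (2*x) plus idiosyncratic noise.
tau = 2.0 * x + rng.normal(scale=1.0, size=n)
y = x + z * tau + rng.normal(scale=1.0, size=n)

# Fully interacted linear regression: y ~ 1 + z + x + z*x.
X = np.column_stack([np.ones(n), z, x, z * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Systematic effect predicted by the covariate: beta_z + beta_zx * x.
tau_hat = beta[1] + beta[3] * x

# R²-like share of treatment effect variation explained by x.
r2_like = np.var(tau_hat) / np.var(tau)
print(round(r2_like, 2))
```

With these simulated parameters roughly 80% of the effect variation is systematic; in real data the true unit-level effects `tau` are of course unobserved, which is what makes the authors' randomization-based machinery necessary.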
USING INSTRUMENTAL VARIABLES FOR INFERENCE ABOUT POLICY RELEVANT TREATMENT PARAMETERS
2018
We propose a method for using instrumental variables (IV) to draw inference about causal effects for individuals other than those affected by the instrument at hand. Policy relevance and external validity turn on the ability to do this reliably. Our method exploits the insight that both the IV estimand and many treatment parameters can be expressed as weighted averages of the same underlying marginal treatment effects. Since the weights are identified, knowledge of the IV estimand generally places some restrictions on the unknown marginal treatment effects, and hence on the values of the treatment parameters of interest. We show how to extract information about the treatment parameter of interest from the IV estimand and, more generally, from a class of IV-like estimands that includes the two-stage least squares and ordinary least squares estimands, among others. Our method has several applications. First, it can be used to construct nonparametric bounds on the average causal effect of a hypothetical policy change. Second, our method allows the researcher to flexibly incorporate shape restrictions and parametric assumptions, thereby enabling extrapolation of the average effects for compliers to the average effects for different or larger populations. Third, our method can be used to test model specification and hypotheses about behavior, such as no selection bias and/or no selection on gain.
Journal Article
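The key insight above, that different treatment parameters are just differently weighted averages of the same marginal treatment effect (MTE) curve, can be shown numerically. The MTE curve and cutoffs below are hypothetical, chosen only to illustrate the weighting:

```python
import numpy as np

# Grid over the latent "resistance to treatment" u in (0, 1);
# different treatment parameters weight the same MTE curve differently.
u = np.linspace(0.005, 0.995, 100)

# Hypothetical MTE curve: effects decline in resistance to treatment.
mte = (1.0 - u) ** 2

# ATE: uniform weights over all of (0, 1).
ate = mte.mean()

# LATE: uniform weights over compliers only, i.e. u between the two
# propensity scores induced by the instrument (assumed 0.3 and 0.7 here).
compliers = (u > 0.3) & (u < 0.7)
late = mte[compliers].mean()

print(round(ate, 3), round(late, 3))
```

Because the compliers sit in the middle of the resistance distribution, the LATE here differs from the ATE, which is exactly why extrapolating from one parameter to another requires the restrictions the paper develops.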
Metalearners for estimating heterogeneous treatment effects using machine learning
by Bickel, Peter J.; Künzel, Sören R.; Yu, Bin
in Algorithms; Artificial intelligence; Bayesian analysis
2019
There is growing interest in estimating and analyzing heterogeneous treatment effects in experimental and observational studies. We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms—such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks—to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the metalearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms. A software package is provided that implements our methods.
Journal Article
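The X-learner's two-stage logic described in this abstract can be sketched in a few lines. This is a minimal illustration on simulated data, not the authors' implementation: ordinary least squares stands in for the RF/BART base learners, and the propensity score is fixed at 0.5 as in a balanced randomized experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4_000
x = rng.normal(size=(n, 1))
w = rng.integers(0, 2, size=n)  # randomized treatment indicator

# Simulated outcomes; the true CATE is 1 + 2*x.
y = 0.5 * x[:, 0] + w * (1.0 + 2.0 * x[:, 0]) + rng.normal(scale=0.5, size=n)

def fit_ols(X, y):
    """Fit OLS with intercept; return a prediction function."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return lambda Xn: np.column_stack([np.ones(len(Xn)), Xn]) @ beta

# Stage 1: separate outcome models for control and treated units.
mu0 = fit_ols(x[w == 0], y[w == 0])
mu1 = fit_ols(x[w == 1], y[w == 1])

# Stage 2: impute individual effects, then model them in each arm.
d1 = y[w == 1] - mu0(x[w == 1])   # treated: observed minus predicted control
d0 = mu1(x[w == 0]) - y[w == 0]   # control: predicted treated minus observed
tau1 = fit_ols(x[w == 1], d1)
tau0 = fit_ols(x[w == 0], d0)

# Stage 3: combine the two CATE estimates with the propensity score.
g = 0.5
cate = g * tau0(x) + (1 - g) * tau1(x)
print(round(float(cate.mean()), 2))
```

The efficiency gain the paper proves shows up when one arm is much smaller than the other: the small arm's imputed effects lean on the outcome model fitted in the large arm.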
Quantile Regression: 40 Years On
2017
Since Quetelet's work in the nineteenth century, social science has iconified the average man, that hypothetical man without qualities who is comfortable with his head in the oven and his feet in a bucket of ice. Conventional statistical methods since Quetelet have sought to estimate the effects of policy treatments for this average man. However, such effects are often quite heterogeneous: Medical treatments may improve life expectancy but also impose serious short-term risks; reducing class sizes may improve the performance of good students but not help weaker ones, or vice versa. Quantile regression methods can help to explore these heterogeneous effects. Some recent developments in quantile regression methods are surveyed in this review.
Journal Article
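The heterogeneity the review describes, effects that differ across the outcome distribution even when the average is fixed, can be seen in a small simulation. The two-sample quantile comparison below is a simplification (quantile treatment effects under random assignment), not full quantile regression, and the distributions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Control outcomes, and a treatment that shifts everyone up by 1
# while also widening the distribution (heterogeneous gains).
y0 = rng.normal(size=n)
y1 = rng.normal(size=n) * 1.5 + 1.0

# Quantile treatment effects at the tails and the median.
for q in (0.1, 0.5, 0.9):
    qte = np.quantile(y1, q) - np.quantile(y0, q)
    print(f"QTE at q={q}: {qte:.2f}")
```

The mean effect is 1 throughout, yet the lower-tail quantile effect is well below 1 and the upper-tail effect well above it: precisely the pattern that mean-based methods for Quetelet's "average man" cannot see.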
PROGRAM EVALUATION AND CAUSAL INFERENCE WITH HIGH-DIMENSIONAL DATA
2017
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions.
We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
Journal Article
Econometric Methods for Program Evaluation
2018
Program evaluation methods are widely applied in economics to assess the effects of policy interventions and other treatments of interest. In this article, we describe the main methodological frameworks of the econometrics of program evaluation. In the process, we delineate some of the directions along which this literature is expanding, discuss recent developments, and highlight specific areas where new research may be particularly fruitful.
Journal Article
An Introductory Guide to Event Study Models
2023
The event study model is a powerful econometric tool used for the purpose of estimating dynamic treatment effects. One of its most appealing features is that it provides a built-in graphical summary of results, which can reveal rich patterns of behavior. Another value of the picture is the estimated pre-event pseudo-"effects", which provide a type of placebo test. In this essay I aim to provide a framework for a shared understanding of these models. There are several (sometimes subtle) decisions and choices faced by users of these models, and I offer guidance for these decisions.
Journal Article
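A minimal event study of the kind this guide discusses can be sketched on simulated panel data. This is one common specification (treated and control groups, event-time dummies with the pre-event period as the omitted baseline), not the essay's own example; all numbers below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, n_periods, event = 200, 10, 5

unit = np.repeat(np.arange(n_units), n_periods)
t = np.tile(np.arange(n_periods), n_units)
treated = (unit < n_units // 2).astype(float)  # first half treated at t=5

# Dynamic effect: zero before the event, then growing by 1 each period.
effect = np.where((treated == 1) & (t >= event), t - event + 1.0, 0.0)
y = 0.3 * treated + 0.1 * t + effect + rng.normal(scale=0.5, size=len(t))

# Design: intercept, group dummy, period dummies (omit t=0), and
# treated-by-period dummies omitting the pre-event baseline t=4.
cols = [np.ones_like(y), treated]
cols += [(t == k).astype(float) for k in range(1, n_periods)]
ev_cols = [k for k in range(n_periods) if k != event - 1]
cols += [treated * (t == k) for k in ev_cols]
X = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Event-study coefficients, keyed by period: the pre-event ones
# act as the placebo "pseudo-effects" the abstract mentions.
coefs = dict(zip(ev_cols, beta[-len(ev_cols):]))
print({k: round(float(v), 2) for k, v in coefs.items()})
```

Plotting these coefficients against event time gives the built-in graphical summary the abstract highlights: flat near zero before the event, then tracing out the dynamic effect afterwards.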
Machine learning approaches to evaluate heterogeneous treatment effects in randomized controlled trials: a scoping review
2024
Estimating heterogeneous treatment effects (HTEs) in randomized controlled trials (RCTs) has received substantial attention recently. This has led to the development of several statistical and machine learning (ML) algorithms to assess HTEs through identifying individualized treatment effects. However, a comprehensive review of these algorithms is lacking. We thus aimed to catalog and outline currently available statistical and ML methods for identifying HTEs via effect modeling using clinical RCT data and summarize how they have been applied in practice.
We performed a scoping review using prespecified search terms in MEDLINE and Embase, aiming to identify studies that assessed HTEs using advanced statistical and ML methods in RCT data published from 2010 to 2022.
Among a total of 32 studies identified in the review, 17 studies applied existing algorithms to RCT data, and 15 extended existing algorithms or proposed new algorithms. Applied algorithms included penalized regression, causal forest, Bayesian causal forest, and other metalearner frameworks. Of these methods, causal forest was the most frequently used (7 studies) followed by Bayesian causal forest (4 studies). Most applications were in cardiology (6 studies), followed by psychiatry (4 studies). We provide example R codes in simulated data to illustrate how to implement these algorithms.
This review identified and outlined various algorithms currently used to identify HTEs and individualized treatment effects in RCT data. Given the increasing availability of new algorithms, analysts should carefully select them after examining model performance and considering how the models will be used in practice.
• Methods to assess heterogeneous treatment effects (HTEs) are rapidly developing.
• This scoping review identified 32 studies applying such methods to RCTs through 2022.
• Cardiology was the most popular field of application.
• The causal forest was the most frequently applied model in the healthcare literature.
• This review will help researchers apply appropriate algorithms to assess HTEs.
Journal Article
WHY YOU SHOULD NEVER USE THE HODRICK-PRESCOTT FILTER
2018
Here’s why. (a) The Hodrick-Prescott (HP) filter introduces spurious dynamic relations that have no basis in the underlying data-generating process. (b) Filtered values at the end of the sample are very different from those in the middle and are also characterized by spurious dynamics. (c) A statistical formalization of the problem typically produces values for the smoothing parameter vastly at odds with common practice. (d) There is a better alternative. A regression of the variable at date t on the four most recent values as of date t − h achieves all the objectives sought by users of the HP filter with none of its drawbacks.
Journal Article
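The alternative proposed in point (d), regressing the variable at date t on the four most recent values as of date t − h, is simple enough to sketch directly. The series below is simulated, and h = 8 follows the common quarterly-data convention (an assumption; the abstract does not fix h):

```python
import numpy as np

rng = np.random.default_rng(4)
T, h = 300, 8

# A random walk with drift, standing in for a macro aggregate.
y = np.cumsum(0.05 + rng.normal(scale=1.0, size=T))

# Regress y[t] on a constant and y[t-h], y[t-h-1], y[t-h-2], y[t-h-3].
rows = range(h + 3, T)
X = np.column_stack([np.ones(len(rows))] +
                    [[y[t - h - lag] for t in rows] for lag in range(4)])
target = y[h + 3:]
beta, *_ = np.linalg.lstsq(X, target, rcond=None)

# Fitted values play the role of "trend"; residuals are the cycle.
trend = X @ beta
cycle = target - trend
print(len(cycle), round(float(cycle.std()), 2))
```

Unlike the HP filter, this regression uses only information available as of date t − h, so the end-of-sample values are constructed the same way as those in the middle.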