11,461 result(s) for "Estimation Effect"
Matching methods to quantify wildfire effects on forest carbon mass in the U.S. Pacific Northwest
Forest wildfires consume and redistribute carbon within forest carbon pools. Because the incidence of wildfires is unpredictable, quantifying wildfire effects is challenging due to the lack of prefire data or experimental controls over a large landscape. We explored a quasi-experimental method, propensity score matching, to estimate wildfire effects on aboveground forest woody carbon mass in Washington and Oregon, United States. Observational data, including national forest inventory plot measurements and satellite imagery metrics, were utilized to obtain a control set of unburned plots that are comparable to burned plots in terms of environmental conditions as well as spatial locations. Three matching methods were implemented: propensity score matching (PSM), spatial matching (SM), and distance-adjusted propensity score matching (DAPSM). We investigated whether propensity score matching with and without spatial adjustment led to different outcomes in terms of (1) balance in covariate distributions between burned and control plots, (2) mean carbon mass obtained from the selected control plots compared to burned and all unburned plots, and (3) estimates of wildfire effects by burn severity. We found that PSM and SM, which use only the environmental covariate set or the spatial distance for estimating propensity scores, respectively, did not produce a comparable set of control plots in terms of the estimated propensity scores and the outcomes of mean carbon mass. DAPSM was the preferred method both in balancing the observed covariates and in dealing with unobservable confounding variables through spatial adjustment. The average wildfire effects estimated by DAPSM showed clear evidence of redistribution of carbon among aboveground woody pools, from live to dead trees, but the consumption of total woody carbon by wildfire was not substantial. Only moderate burn severity led to a significant reduction of total woody carbon mass across Washington and Oregon forests (64% of control plots remained on average). This study provides an applied example of a quasi-experimental approach to quantifying the effects of a natural disturbance for which experimental settings are unavailable. The results suggest that incorporating spatial information in addition to environmental covariates yields a comparable set of control plots for wildfire effects quantification.
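The matching pipeline this abstract describes (fit a propensity model, pair each burned plot with its closest unburned control, average the matched differences) can be sketched on synthetic data. Everything below — the single covariate, the effect sizes, the sample size — is hypothetical, and the propensity model is a minimal hand-rolled logistic fit, not the paper's DAPSM method.

```python
import math
import random

random.seed(0)

# Hypothetical plots: one environmental covariate; burn probability rises
# with it, so burned and unburned plots are not directly comparable.
n = 200
data = []
for _ in range(n):
    x = random.gauss(0.0, 1.0)                  # environmental covariate
    p_burn = 1 / (1 + math.exp(-(x - 0.5)))     # burning likelier at high x
    burned = random.random() < p_burn
    # Carbon mass depends on the covariate; fire removes 3 units (true effect).
    carbon = 10.0 - 2.0 * x - (3.0 if burned else 0.0) + random.gauss(0.0, 0.5)
    data.append((x, burned, carbon))

# Step 1: fit a logistic propensity model P(burned | x) by gradient ascent.
w0 = w1 = 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for x, burned, _ in data:
        p = 1 / (1 + math.exp(-(w0 + w1 * x)))
        g0 += burned - p
        g1 += (burned - p) * x
    w0 += 0.01 * g0 / n
    w1 += 0.01 * g1 / n

def propensity(x):
    return 1 / (1 + math.exp(-(w0 + w1 * x)))

# Step 2: match each burned plot to the unburned plot with the nearest
# propensity score (with replacement), then average the outcome differences.
burned_plots = [d for d in data if d[1]]
controls = [d for d in data if not d[1]]
diffs = []
for x, _, carbon in burned_plots:
    match = min(controls, key=lambda c: abs(propensity(c[0]) - propensity(x)))
    diffs.append(carbon - match[2])
att = sum(diffs) / len(diffs)   # estimated effect of fire on burned plots
```

With the true effect set to −3, the matched estimate lands near it, whereas a raw burned-minus-unburned mean difference would be contaminated by the covariate that drives both burning and carbon.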
Causal inference for time series analysis: problems, methods and evaluation
Time series data are collections of chronological observations generated in many domains, such as medicine and finance. Over the years, different tasks such as classification, forecasting, and clustering have been proposed to analyze this type of data. Time series data have also been used to study the effect of interventions over time. Moreover, in many fields of science, learning the causal structure of dynamic systems and time series data is considered an interesting task which plays an important role in scientific discoveries. Estimating the effect of an intervention and identifying the causal relations from the data can be performed via causal inference. Existing surveys on time series discuss traditional tasks such as classification and forecasting or explain the details of the approaches proposed to solve a specific task. In this paper, we focus on two causal inference tasks, i.e., treatment effect estimation and causal discovery for time series data, and provide a comprehensive review of the approaches in each task. Furthermore, we curate a list of commonly used evaluation metrics and datasets for each task and provide in-depth insight. These metrics and datasets can serve as benchmarks for research in the field.
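As a toy instance of causal discovery for time series, a Granger-style test asks whether past values of one series improve prediction of another. The bivariate series and coefficients below are invented for illustration, and the regressions use closed-form ordinary least squares on mean-zero series.

```python
import random

random.seed(1)

# Synthetic pair: x drives y with a one-step lag; y does not drive x.
T = 500
x = [random.gauss(0, 1)]
y = [random.gauss(0, 1)]
for t in range(1, T):
    x.append(0.5 * x[t - 1] + random.gauss(0, 1))
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 1))

def ols2(z1, z2, target):
    # Closed-form OLS for target ~ z1 + z2 (no intercept; series are mean zero).
    s11 = sum(a * a for a in z1)
    s22 = sum(b * b for b in z2)
    s12 = sum(a * b for a, b in zip(z1, z2))
    s1y = sum(a * c for a, c in zip(z1, target))
    s2y = sum(b * c for b, c in zip(z2, target))
    det = s11 * s22 - s12 * s12
    return (s1y * s22 - s2y * s12) / det, (s11 * s2y - s12 * s1y) / det

ylag, xlag, ycur = y[:-1], x[:-1], y[1:]

# Restricted model: y_t ~ y_{t-1} only.
a_r = sum(p * q for p, q in zip(ylag, ycur)) / sum(p * p for p in ylag)
ssr_r = sum((q - a_r * p) ** 2 for p, q in zip(ylag, ycur))

# Unrestricted model: y_t ~ y_{t-1} + x_{t-1}.
a_u, b_u = ols2(ylag, xlag, ycur)
ssr_u = sum((c - a_u * p - b_u * q) ** 2
            for p, q, c in zip(ylag, xlag, ycur))

# A large F statistic means lagged x helps predict y: x "Granger-causes" y.
f_stat = (ssr_r - ssr_u) / (ssr_u / (len(ycur) - 2))
```

Granger causality is predictive rather than interventional, which is precisely why surveys like this one treat causal discovery and treatment effect estimation as distinct tasks.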
Mendelian randomization with a binary exposure variable: interpretation and presentation of causal estimates
Mendelian randomization uses genetic variants to make causal inferences about a modifiable exposure. Subject to a genetic variant satisfying the instrumental variable assumptions, an association between the variant and outcome implies a causal effect of the exposure on the outcome. Complications arise with a binary exposure that is a dichotomization of a continuous risk factor (for example, hypertension is a dichotomization of blood pressure). This can lead to violation of the exclusion restriction assumption: the genetic variant can influence the outcome via the continuous risk factor even if the binary exposure does not change. Provided the instrumental variable assumptions are satisfied for the underlying continuous risk factor, causal inferences for the binary exposure are valid for the continuous risk factor. Causal estimates for the binary exposure assume the causal effect is a stepwise function at the point of dichotomization. Even then, estimation requires further parametric assumptions. Under monotonicity, the causal estimate represents the average causal effect in 'compliers', individuals for whom the binary exposure would be present if they have the genetic variant and absent otherwise. Unlike in randomized trials, genetic compliers are unlikely to be a large or representative subgroup of the population. Under homogeneity, the causal effect of the exposure on the outcome is assumed constant in all individuals; rarely a plausible assumption. We here provide methods for causal estimation with a binary exposure (although subject to all the above caveats). Mendelian randomization investigations with a dichotomized binary exposure should be conceptualized in terms of an underlying continuous variable.
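The basic instrumental-variable calculation behind Mendelian randomization is the ratio (Wald) estimator: the variant–outcome association divided by the variant–exposure association. The sketch below applies it to a simulated continuous risk factor with an unmeasured confounder; all coefficients are hypothetical, and it does not implement the binary-exposure adjustments this abstract discusses.

```python
import random

random.seed(2)

n = 5000
g_all, x_all, y_all = [], [], []
for _ in range(n):
    g = random.choice([0, 0, 1, 1, 2])       # variant dosage (hypothetical)
    u = random.gauss(0, 1)                   # unmeasured confounder
    xc = 0.5 * g + u + random.gauss(0, 1)    # continuous risk factor
    y = 0.4 * xc + u + random.gauss(0, 1)    # outcome; true causal effect 0.4
    g_all.append(g)
    x_all.append(xc)
    y_all.append(y)

def slope(z, w):
    # Simple-regression slope of w on z.
    mz, mw = sum(z) / len(z), sum(w) / len(w)
    return (sum((a - mz) * (b - mw) for a, b in zip(z, w))
            / sum((a - mz) ** 2 for a in z))

naive = slope(x_all, y_all)       # observational estimate, confounded by u
beta_gx = slope(g_all, x_all)     # variant -> risk factor association
beta_gy = slope(g_all, y_all)     # variant -> outcome association
wald = beta_gy / beta_gx          # instrumental-variable (Wald) estimate
```

The naive slope is pulled well above 0.4 by the confounder, while the ratio estimate recovers it — provided the variant actually satisfies the instrumental variable assumptions, which is exactly what dichotomizing the exposure can undermine.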
Do School Food Programs Improve Child Dietary Quality?
This paper estimates the impact of U.S. school food programs on the distribution of child dietary quality during 2005-10. The distributional approach allows one to better understand how school food impacts children prone to low-quality diets separately from those prone to higher-quality diets. Using a fixed-effects quantile estimator, I find notable heterogeneity in the general population — school food has positive impacts below the median of the dietary-quality distribution, and negative but insignificant impacts at upper quantiles. Children demonstrating substantial nutritional needs (i.e., food insecure or receiving free/reduced price meals) exhibit positive impacts at all levels of diet quality with especially high benefits at low quantiles. Although school food programs may not benefit the "above-average" child, they do improve the diets of the most nutritionally disadvantaged.
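The building block of the quantile estimator used above is the check (pinball) loss, whose minimizer over a constant is the empirical quantile. A minimal illustration on made-up "dietary quality" scores — the paper's fixed-effects panel machinery is omitted:

```python
import random

random.seed(3)

# Check (pinball) loss for quantile level tau: rho_tau(u) = u * (tau - 1[u<0]).
def pinball(tau, resids):
    return sum(u * (tau - (1 if u < 0 else 0)) for u in resids)

# Made-up "dietary quality" scores.
scores = [random.gauss(50, 10) for _ in range(2000)]

# Minimizing pinball loss over a constant recovers the empirical quantile;
# quantile regression swaps the constant for a covariate model.
def fit_constant_quantile(tau, ys, grid):
    return min(grid, key=lambda c: pinball(tau, [y - c for y in ys]))

grid = [i * 0.1 for i in range(200, 801)]   # candidate constants 20.0 .. 80.0
q25 = fit_constant_quantile(0.25, scores, grid)
q75 = fit_constant_quantile(0.75, scores, grid)
```

In the paper's setting the constant is replaced by a model with school-food indicators and fixed effects, so each quantile of the dietary-quality distribution gets its own program coefficient — which is what lets the impacts differ between low and high quantiles.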
Remittances and economic growth in developing countries
This paper examines the effect of workers' remittances on economic growth in a sample of 39 developing countries using panel data from 1980–2004 resulting in 195 observations. A standard growth model is estimated using both fixed-effects and random-effects approaches. The empirical results show a significant overall fit based on the fixed-effects method as the random-effects model is rejected in statistical tests. Remittances have a positive impact on growth. Since official estimates of remittances used in our analysis tend to understate actual numbers considerably, more accurate data on remittances is likely to reveal an even more pronounced effect of remittances on growth.
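The fixed-effects estimation this abstract relies on can be sketched with the within transformation: demean each country's series so that time-invariant country effects drop out. All numbers below are synthetic, and the country effect is deliberately correlated with remittances so that pooled OLS is biased while the within estimator is not.

```python
import random
from collections import defaultdict

random.seed(4)

# Synthetic panel: 39 countries, 5 years each (195 observations, as in the
# paper's dimensions; all values here are invented).
n_units, n_years = 39, 5
rows = []
for i in range(n_units):
    alpha = random.gauss(0, 2)                          # country fixed effect
    for t in range(n_years):
        remit = alpha + random.gauss(0, 1)              # correlated regressor
        growth = 0.5 * remit + alpha + random.gauss(0, 1)   # true slope 0.5
        rows.append((i, remit, growth))

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Pooled OLS ignores the country effects and overstates the slope.
pooled = slope([r[1] for r in rows], [r[2] for r in rows])

# Within (fixed-effects) estimator: demean within each country, then OLS.
by_unit = defaultdict(list)
for i, remit, growth in rows:
    by_unit[i].append((remit, growth))
xd, yd = [], []
for obs in by_unit.values():
    mx = sum(o[0] for o in obs) / len(obs)
    my = sum(o[1] for o in obs) / len(obs)
    for remit, growth in obs:
        xd.append(remit - mx)
        yd.append(growth - my)
fe = slope(xd, yd)
```

The gap between `pooled` and `fe` is the kind of discrepancy a Hausman-style test detects; rejecting the random-effects model, as the paper does, favors the within estimator.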
A survey of deep causal models and their industrial applications
The notion of causality assumes a paramount position within the realm of human cognition. Over the past few decades, there has been significant advancement in the domain of causal effect estimation across various disciplines, including but not limited to computer science, medicine, economics, and industrial applications. Given the continuous advancements in deep learning methodologies, there has been a notable surge in its utilization for the estimation of causal effects using counterfactual data. Typically, deep causal models map the characteristics of covariates to a representation space and then design various objective functions to estimate counterfactual outcomes without bias. Different from the existing surveys on causal models in machine learning, this review mainly focuses on deep causal models based on neural networks, and its core contributions are as follows: (1) we provide a comprehensive overview of deep causal models from both the timeline-of-development and method-classification perspectives; (2) we outline some typical applications of causal effect estimation to industry; (3) we also present a detailed categorization and analysis of relevant datasets, source code, and experiments.
Meta-learning for heterogeneous treatment effect estimation with closed-form solvers
This article proposes a meta-learning method for estimating the conditional average treatment effect (CATE) from a small amount of observational data. The proposed method learns how to estimate CATEs from multiple tasks and uses that knowledge for unseen tasks. Based on the meta-learner framework, we decompose the CATE estimation problem into sub-problems. For each sub-problem, we formulate our estimation models using neural networks with task-shared and task-specific parameters. With our formulation, we can obtain optimal task-specific parameters in a closed form that are differentiable with respect to the task-shared parameters, making effective meta-learning possible. The task-shared parameters are trained such that the expected CATE estimation performance in few-shot settings improves, by minimizing the difference between a CATE estimated with a large amount of data and one estimated with only a few samples. Our experimental results demonstrate that our method outperforms existing meta-learning approaches and CATE estimation methods.
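The "closed form" idea can be illustrated with the simplest CATE recipe: fit outcome models on the treated and control groups separately using ridge regression, which has an exact closed-form solution, and take the difference of their predictions. This is a plain two-model (T-learner) sketch on an invented task, not the paper's meta-learned task-shared/task-specific decomposition.

```python
import random

random.seed(5)

def ridge_fit(xs, ys, lam=0.1):
    # Closed-form ridge for y = w0 + w1*x: solve (X'X + lam*I) w = X'y by hand.
    n = len(xs)
    s1, s2 = sum(xs), sum(v * v for v in xs)
    sy = sum(ys)
    sxy = sum(a * b for a, b in zip(xs, ys))
    a11, a12, a22 = n + lam, s1, s2 + lam
    det = a11 * a22 - a12 * a12
    return (sy * a22 - sxy * a12) / det, (a11 * sxy - a12 * sy) / det

# Invented task: true CATE is tau(x) = 2x.
treated, control = [], []
for _ in range(1000):
    x = random.uniform(-1, 1)
    t = random.random() < 0.5
    y = 1.0 + x + (2.0 * x if t else 0.0) + random.gauss(0, 0.3)
    (treated if t else control).append((x, y))

w0t, w1t = ridge_fit([p[0] for p in treated], [p[1] for p in treated])
w0c, w1c = ridge_fit([p[0] for p in control], [p[1] for p in control])

def cate(x):
    # Difference of the two closed-form fits approximates tau(x).
    return (w0t + w1t * x) - (w0c + w1c * x)
```

Because the ridge solution is an explicit function of the data, it is differentiable with respect to any shared parameters feeding into it, which is the property the paper exploits for meta-learning.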
Unemployment and Crime: Is There a Connection?
A panel of Swedish counties over the years 1988-1999 is used to study the effects of unemployment on property crime rates. The period under study is characterized by turbulence in the labor market: the variation in unemployment rates was unprecedented in the latter part of the century. Hence, the data provide a unique opportunity to examine unemployment effects. According to the economic theory of crime, increased unemployment rates lead to higher property crime rates. A fixed-effects model is estimated to investigate this hypothesis. The model includes time- and county-specific effects and a number of economic and socio-demographic variables to control for unobservables and covariates. The results show that unemployment had a positive and significant effect on some property crimes (burglary, car theft, and bike theft).
Another Look at the EWMA Control Chart with Estimated Parameters
When in-control process parameters are estimated, Phase II control chart performance will vary among practitioners due to the use of different Phase I data sets. The typical measure of Phase II control chart performance, the average run length (ARL), becomes a random variable due to the selection of a Phase I data set for estimation. Aspects of the ARL distribution, such as the standard deviation of the average run length (SDARL), can be used to quantify the between-practitioner variability in control chart performance. In this article, we assess the in-control performance of the exponentially weighted moving average (EWMA) control chart in terms of the SDARL and percentiles of the ARL distribution when the process parameters are estimated. Our results show that the EWMA chart requires a much larger amount of Phase I data than previously recommended in the literature in order to sufficiently reduce the variation in the chart performance. We show that larger values of the EWMA smoothing constant result in higher levels of variability in the in-control ARL distribution; thus, more Phase I data are required for charts with larger smoothing constants. Because it could be extremely difficult to lower the variation in the in-control ARL values sufficiently due to practical limitations on the amount of the Phase I data, we recommend an alternative design criterion and a procedure based on the bootstrap approach.
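A minimal sketch of the setting studied here: in-control parameters are estimated from Phase I data, then the EWMA statistic is monitored against limits built from those estimates. The smoothing constant λ = 0.2 and limit width L = 2.962 are common textbook choices, and all data are simulated; the article's point is that a different Phase I sample would give different `mu_hat` and `sd_hat`, and hence a different realized run length.

```python
import math
import random

random.seed(6)

lam, L = 0.2, 2.962   # smoothing constant and limit width (common choices)

# Phase I: estimate in-control parameters from m historical observations.
m = 200
phase1 = [random.gauss(10.0, 2.0) for _ in range(m)]
mu_hat = sum(phase1) / m
sd_hat = math.sqrt(sum((v - mu_hat) ** 2 for v in phase1) / (m - 1))

# Phase II: monitor a process whose mean has shifted up by one sigma.
phase2 = [random.gauss(12.0, 2.0) for _ in range(100)]
z = mu_hat                     # EWMA statistic starts at the estimated mean
signal_at = None
for i, obs in enumerate(phase2, start=1):
    z = lam * obs + (1 - lam) * z
    # Time-varying limits: mu_hat +/- L * sd_hat * sqrt(lam/(2-lam)*(1-(1-lam)^(2i)))
    half_width = (L * sd_hat
                  * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))))
    if abs(z - mu_hat) > half_width:
        signal_at = i          # chart signals a shift at this observation
        break
```

Because `mu_hat` and `sd_hat` are themselves random, rerunning Phase I changes the limits and therefore the run length; the SDARL discussed above quantifies exactly that between-practitioner spread.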
A multistate transition model for survival estimation in randomized trials with treatment switching and a cured subgroup
Although many methods have been proposed for overall survival estimation in randomized trials that permit treatment switching after progressive disease (PD), the cured subgroup of patients within these trials has not been fully considered. These cured patients never experience PD and the subsequent risk of treatment switching, yet they may face a death hazard similar to that of people without the disease. Because the cured subgroup is mixed in, existing methods may yield biased effect estimates for the uncured patients between treatment groups. To address this limitation, we propose a multistate transition model that integrates the states of cure, PD, treatment switching, and death during trials. In this model, the cure probability for all patients and the death hazard of the cured subgroup are modeled separately. Meanwhile, a semi-competing risks model is used to evaluate the treatment effect on uncured patients through transition hazards between the states of PD, treatment switching, and death. The particle swarm optimization algorithm is employed to estimate the model parameters. Extensive simulation studies have been conducted to assess the performance of the proposed multistate model in comparison with existing treatment-switching adjustment methods. The results show that the treatment effect estimates of our proposed model are more accurate across all scenarios. Moreover, an illustration based on a simulated diffuse large B-cell lymphoma trial demonstrates the applicability and advantages of the proposed model. The robustness of the proposed multistate transition model enables it to accurately estimate the treatment effect in trials that involve a cured subgroup and treatment switching after PD.
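The state structure the model integrates — cure decided at baseline, then stable → progression → possible switch → death for uncured patients — can be sketched as a discrete-time simulation. Every transition probability below is invented for illustration; the paper's contribution is estimating such quantities from trial data (via semi-competing risks modeling and particle swarm optimization), not simulating from known ones.

```python
import random

random.seed(7)

# Monthly transition probabilities (all hypothetical).
CURE_PROB = 0.3          # fraction cured at baseline
P_DEATH_CURED = 0.005    # background death hazard for cured patients
P_DEATH_PRE = 0.01       # death before progression (uncured)
P_PD = 0.08              # progression hazard (uncured, pre-progression)
P_DEATH_POST = 0.05      # death hazard after progression
P_SWITCH = 0.30          # treatment-switch hazard after progression

def simulate_patient(horizon=120):
    """Return the death month, or None if still alive at the horizon."""
    cured = random.random() < CURE_PROB
    state = "stable"
    for month in range(1, horizon + 1):
        r = random.random()
        if cured:
            if r < P_DEATH_CURED:        # cured: only background hazard
                return month
        elif state == "stable":
            if r < P_DEATH_PRE:
                return month
            elif r < P_DEATH_PRE + P_PD:
                state = "progressed"
        elif state == "progressed":
            if r < P_DEATH_POST:
                return month
            elif r < P_DEATH_POST + P_SWITCH:
                state = "switched"       # switching only possible after PD
        else:  # switched
            if r < P_DEATH_POST:
                return month
    return None

times = [simulate_patient() for _ in range(5000)]
surv_10y = sum(t is None for t in times) / len(times)
```

Long-run survival in this sketch is dominated by the cured fraction, which is why mixing cured and uncured patients without modeling the cure state, as the abstract argues, distorts the between-group effect for the uncured.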