3,198 results for "type-I error"
The blame game
The blame game, with its finger-pointing and mutual buck-passing, is a familiar feature of politics and organizational life, and blame avoidance pervades government and public organizations at every level. Political and bureaucratic blame games and blame avoidance are more often condemned than analyzed. In The Blame Game, Christopher Hood takes a different approach by showing how blame avoidance shapes the workings of government and public services. Arguing that the blaming phenomenon is not all bad, Hood demonstrates that it can actually help to pin down responsibility, and he examines different kinds of blame avoidance, both positive and negative. Hood traces how the main forms of blame avoidance manifest themselves in presentational and "spin" activity, the architecture of organizations, and the shaping of standard operating routines. He analyzes the scope and limits of blame avoidance, and he considers how it plays out in old and new areas, such as those offered by the digital age of websites and e-mail. Hood assesses the effects of this behavior, from high-level problems of democratic accountability trails going cold to the frustrations of dealing with organizations whose procedures seem to ensure that no one is responsible for anything. Delving into the inner workings of complex institutions, The Blame Game proves how a better understanding of blame avoidance can improve the quality of modern governance, management, and organizational design.
Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors
Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that it places a narrow emphasis on statistical significance as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information.
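As a rough illustration of the design calculation described in this abstract, the sketch below simulates the sampling distribution of an estimate for a plausible true effect and reports power, the Type S error rate, and the exaggeration ratio among significant results. The function name and the numeric inputs are assumptions for illustration, not taken from the paper.
```python
import numpy as np
from scipy import stats

def design_calculation(true_effect, se, alpha=0.05, n_sims=100_000, seed=0):
    """Monte Carlo design calculation in the spirit of Type S / Type M errors:
    given a plausible true effect and the standard error of its estimate,
    report power, the probability that a significant estimate has the wrong
    sign (Type S), and the average exaggeration of significant estimates
    (Type M, the exaggeration ratio)."""
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    estimates = rng.normal(true_effect, se, n_sims)   # sampling distribution of the estimate
    significant = np.abs(estimates) > z_crit * se     # estimates reaching p < alpha
    power = significant.mean()
    type_s = (np.sign(estimates[significant]) != np.sign(true_effect)).mean()
    type_m = np.abs(estimates[significant]).mean() / abs(true_effect)
    return power, type_s, type_m

# Example with arbitrary illustrative values: a small true effect measured noisily.
print(design_calculation(true_effect=0.1, se=0.37))
```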
The two types of error probabilities in sampling inspection of surveying and mapping geoinformation products
To meet the requirements of sampling inspection for surveying and mapping geoinformation products, this paper presents a method for calculating the two types of error probabilities and, by comparing the error probabilities of various sampling plans, analyzes the characteristics of sampling inspection under the current national standard GB/T 24356—2009. For whole-lot inspection of large batches versus inspection in sub-lots, worked examples show that the two are equivalent with respect to the overall acceptance probability and that inspection in sub-lots is therefore unreasonable. Based on an analysis of the consumer's risk (the probability of accepting a nonconforming lot), the paper argues that when the underlying pass rate is low, the high consumer's risk inherent in sampling inspection renders the inspection ineffective, and it computes, through practical examples, the quality boundary beyond which inspection conclusions become distorted. These results are of practical significance for quality risk control of surveying and mapping geoinformation products.
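The two error probabilities discussed here can be illustrated with a generic single-sampling plan; the sketch below is an illustration under assumed plan parameters and is not the GB/T 24356—2009 procedure itself. Producer's risk is the probability of rejecting a lot at the acceptable quality level, and consumer's risk is the probability of accepting a lot at the limiting quality level.
```python
from scipy import stats

def acceptance_probability(n, c, p):
    """Probability that a lot with defect rate p is accepted by a single
    sampling plan: inspect n items, accept if at most c are defective."""
    return stats.binom.cdf(c, n, p)

def two_error_probabilities(n, c, aql, ltpd):
    """Producer's risk (type I: rejecting a lot at the acceptable quality level)
    and consumer's risk (type II: accepting a lot at the limiting quality level)."""
    producer_risk = 1 - acceptance_probability(n, c, aql)
    consumer_risk = acceptance_probability(n, c, ltpd)
    return producer_risk, consumer_risk

# Example plan: inspect 50 items, accept the lot if at most 2 are defective.
print(two_error_probabilities(n=50, c=2, aql=0.01, ltpd=0.10))
```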
Multiple secondary outcome analyses: precise interpretation is important
Analysis of multiple secondary outcomes in a clinical trial leads to an increased probability of at least one false significant result among all secondary outcomes studied. In this paper, we question the notion that if no multiplicity adjustment has been applied to multiple secondary outcome analyses in a clinical trial, then they must necessarily be regarded as exploratory. Instead, we argue that if individual secondary outcome results are interpreted carefully and precisely, there is no need to downgrade our interpretation to exploratory. This is because the probability of a false significant result for each comparison, the per-comparison-wise error rate, does not increase with multiple testing. Strong effects on secondary outcomes should always be taken seriously and must not be dismissed purely on the basis of multiplicity concerns.
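The distinction between the per-comparison-wise and family-wise error rates is easy to check by simulation; the sketch below (with hypothetical parameter choices) draws null p-values for several secondary outcomes and shows that the per-comparison rate stays near alpha while the chance of at least one false positive per trial grows with the number of outcomes.
```python
import numpy as np

def error_rates(n_outcomes, alpha=0.05, n_trials=20_000, seed=0):
    """Simulate trials whose secondary outcomes truly have no effect.
    The per-comparison error rate stays at alpha regardless of how many
    outcomes are tested, while the family-wise error rate (at least one
    false positive per trial) grows with the number of outcomes."""
    rng = np.random.default_rng(seed)
    pvals = rng.uniform(size=(n_trials, n_outcomes))  # p-values are uniform under the null
    significant = pvals < alpha
    per_comparison = significant.mean()             # fraction of individual tests that are false positives
    family_wise = significant.any(axis=1).mean()    # fraction of trials with >= 1 false positive
    return per_comparison, family_wise

for k in (1, 5, 10):
    print(k, error_rates(k))
```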
Source localization in resource-constrained sensor networks based on deep learning
Source localization with a network of low-cost motes with limited processing, memory, and energy resources is considered in this paper. The state-of-the-art methods are mostly based on complicated signal processing approaches in which motes send their (processed) data to a fusion center (FC), where the source is localized. These methods are resource-demanding and mostly do not respect the limitations of the motes and the network. In this paper, we consider distributed detection in which each mote performs a binary hypothesis test to locally detect the presence of a desired source and sends its (potentially erroneous) decision to the FC using just one bit (1 indicating source presence, 0 otherwise). Hence, both processing and bandwidth constraints are met. We propose to use an artificial neural network (ANN) to correct erroneous local decisions. After error correction, the region affected by the source is delineated by the nodes with decision 1. Moreover, we propose to localize the source by deep learning in the FC, converting the network of 1 and 0 decisions to a black-and-white image with white pixels at the locations of motes with decision 1. The proposed schemes of error correction by ANN (ECANN) and source localization with deep learning (SoLDeL) were evaluated in a fire detection application. We show that SoLDeL performs well and scales to large networks, and we illustrate the applicability of ECANN to the delineation of farm management zones.
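A minimal sketch of the image-forming step described here, under assumed preprocessing choices (the function decisions_to_image, the grid size, and the field extent are illustrative; neither ECANN nor SoLDeL is implemented): the motes' binary decisions are rasterized into a black-and-white image that an image-based network could take as input.
```python
import numpy as np

def decisions_to_image(positions, decisions, grid_size=64, extent=100.0):
    """Rasterize a sensor network's binary decisions into a black-and-white image:
    pixels at the locations of motes that reported 1 are set to white (1.0),
    everything else stays black (0.0)."""
    image = np.zeros((grid_size, grid_size), dtype=np.float32)
    for (x, y), d in zip(positions, decisions):
        if d == 1:
            col = min(int(x / extent * grid_size), grid_size - 1)
            row = min(int(y / extent * grid_size), grid_size - 1)
            image[row, col] = 1.0
    return image

# Example: 200 motes over a 100 m x 100 m field; motes near (30, 70) detect the source.
rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, size=(200, 2))
dec = (np.hypot(pos[:, 0] - 30, pos[:, 1] - 70) < 15).astype(int)
img = decisions_to_image(pos, dec)
print(img.sum(), "white pixels")
```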
Optimal Bandwidth Selection in Heteroskedasticity-Autocorrelation Robust Testing
This paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M = bT for some constant b ∊ (0, 1] and sample size T. It is shown that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug-in procedure works well in finite samples.
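For context, the long-run variance estimate underlying such robust standard errors can be written with a Bartlett kernel and truncation lag M = bT; the sketch below is a generic HAC illustration under that convention and implements neither the paper's fixed-b tests nor its plug-in bandwidth rule.
```python
import numpy as np

def bartlett_lrv(u, b=0.1):
    """Long-run variance of a (mean-zero) residual series using the Bartlett
    kernel with truncation lag M = b*T, the kind of HAC estimate that
    underlies heteroskedasticity-autocorrelation robust standard errors."""
    u = np.asarray(u, dtype=float)
    T = u.size
    M = max(int(b * T), 1)
    lrv = np.dot(u, u) / T                    # gamma_0
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1)                 # Bartlett weight
        gamma_j = np.dot(u[j:], u[:-j]) / T   # sample autocovariance at lag j
        lrv += 2.0 * w * gamma_j
    return lrv

# Example: AR(1) errors; a larger b averages over more autocovariances.
rng = np.random.default_rng(0)
e = rng.normal(size=2000)
u = np.empty_like(e)
u[0] = e[0]
for t in range(1, e.size):
    u[t] = 0.5 * u[t - 1] + e[t]
print(bartlett_lrv(u, b=0.02), bartlett_lrv(u, b=0.2))
```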
Living systematic reviews: 3. Statistical methods for updating meta-analyses
A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods (the law of the iterated logarithm and the Shuster method) control primarily for inflation of type I error, and two others (trial sequential analysis and sequential meta-analysis) control for both type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs.
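The type I error inflation that motivates these methods is easy to demonstrate by simulation; the sketch below (with arbitrary update sizes and counts) repeatedly applies a conventional two-sided test to accumulating null data and reports how often at least one look appears significant. It does not implement any of the four adjustment methods themselves.
```python
import numpy as np
from scipy import stats

def cumulative_type1_error(n_updates, n_per_update=50, alpha=0.05,
                           n_sims=5000, seed=0):
    """Simulate a living analysis of a null effect: after each update a
    conventional z-test is run on all data accumulated so far, and we record
    whether any look so far was 'significant'."""
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    any_sig = np.zeros(n_sims, dtype=bool)
    sums = np.zeros(n_sims)
    n = 0
    for _ in range(n_updates):
        sums += rng.normal(0.0, 1.0, size=(n_per_update, n_sims)).sum(axis=0)
        n += n_per_update
        z = sums / np.sqrt(n)        # z-statistic for the cumulative data (true mean 0, sd 1)
        any_sig |= np.abs(z) > z_crit
    return any_sig.mean()

for updates in (1, 5, 20):
    print(updates, cumulative_type1_error(updates))
```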
Moment reconstruction and moment-adjusted imputation when exposure is generated by a complex, nonlinear random effects modeling process
For the classical, homoscedastic measurement error model, moment reconstruction (Freedman et al., 2004, 2008) and moment-adjusted imputation (Thomas et al., 2011) are appealing, computationally simple imputation-like methods for general model fitting. Like classical regression calibration, the idea is to replace the unobserved variable subject to measurement error with a proxy that can be used in a variety of analyses. Moment reconstruction and moment-adjusted imputation differ from regression calibration in that they attempt to match multiple features of the latent variable, and also to match some of the latent variable's relationships with the response and additional covariates. In this note, we consider a problem where true exposure is generated by a complex, nonlinear random effects modeling process, and develop analogues of moment reconstruction and moment-adjusted imputation for this case. This general model includes classical measurement errors, Berkson measurement errors, mixtures of Berkson and classical errors, and problems that are not measurement error problems, as well as cases where the data-generating process for true exposure is a complex, nonlinear random effects modeling process. The methods are illustrated using the National Institutes of Health-AARP Diet and Health Study, where the latent variable is a dietary pattern score called the Healthy Eating Index-2005. We also show how our general model includes methods used in radiation epidemiology as a special case. Simulations are used to illustrate the methods.
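A minimal sketch of the moment reconstruction idea in its simplest setting only, assuming a classical error model with known error variance and a discrete outcome Y (the paper's extension to complex, nonlinear random effects exposure models is not reproduced here): within each level of Y, the mismeasured W is shrunk toward its conditional mean so that the proxy matches the first two conditional moments of the unobserved exposure.
```python
import numpy as np

def moment_reconstruction(w, y, sigma_u2):
    """Moment-reconstruction-style proxy for the classical error model
    W = X + U with Var(U) = sigma_u2 and a discrete outcome Y: shrink W
    toward its conditional mean within each level of Y so that the proxy
    matches the first two conditional moments of X given Y."""
    w = np.asarray(w, float)
    x_mr = np.empty_like(w)
    for level in np.unique(y):
        idx = (y == level)
        mean_w = w[idx].mean()
        var_w = w[idx].var(ddof=1)
        var_x = max(var_w - sigma_u2, 1e-12)   # Var(X|Y) = Var(W|Y) - Var(U) under classical error
        g = np.sqrt(var_x / var_w)
        x_mr[idx] = mean_w + (w[idx] - mean_w) * g
    return x_mr

# Toy example: true X depends on a binary Y; W adds classical error with variance 1.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 2000)
x = rng.normal(1.0 + 0.5 * y, 1.0)
w = x + rng.normal(0.0, 1.0, x.size)
x_mr = moment_reconstruction(w, y, sigma_u2=1.0)
print(np.var(w), np.var(x_mr), np.var(x))   # the proxy's variance is pulled back toward Var(X)
```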
Underappreciated problems of low replication in ecological field studies
The cost and difficulty of manipulative field studies make low statistical power a pervasive issue throughout most ecological subdisciplines. Ecologists are already aware that small sample sizes increase the probability of committing Type II errors. In this article, we address a relatively unknown problem with low power: underpowered studies must overestimate small effect sizes in order to achieve statistical significance. First, we describe how low replication coupled with weak effect sizes leads to Type M errors, or exaggerated effect sizes. We then conduct a meta-analysis to determine the average statistical power and Type M error rate for manipulative field experiments that address important questions related to global change: global warming, biodiversity loss, and drought. Finally, we provide recommendations for avoiding Type M errors and constraining estimates of effect size from underpowered studies.
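The Type M problem described here can be reproduced with a toy simulation; the sketch below (an assumed two-group design with arbitrary effect size and sample size, not a reanalysis of any study in the meta-analysis) shows that when an underpowered experiment does reach significance, the estimated effect is a large multiple of the true one.
```python
import numpy as np
from scipy import stats

def exaggeration_when_significant(true_effect=0.2, n_per_group=10,
                                  n_sims=20_000, alpha=0.05, seed=0):
    """Simulate an underpowered two-group field experiment with a small true
    effect and report power and the average exaggeration (Type M error) among
    the replicates that happen to reach p < alpha."""
    rng = np.random.default_rng(seed)
    control = rng.normal(0.0, 1.0, (n_sims, n_per_group))
    treated = rng.normal(true_effect, 1.0, (n_sims, n_per_group))
    t, p = stats.ttest_ind(treated, control, axis=1)
    estimates = treated.mean(axis=1) - control.mean(axis=1)
    sig = p < alpha
    return sig.mean(), np.abs(estimates[sig]).mean() / true_effect

power, exaggeration = exaggeration_when_significant()
print(f"power ~ {power:.2f}; significant estimates exaggerate the effect ~{exaggeration:.1f}x")
```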
Trial sequential analysis may establish when firm evidence is reached in cumulative meta-analysis
Cumulative meta-analyses are prone to produce spurious P < 0.05 values because of repeated testing of significance as trial data accumulate. Information size in a meta-analysis should at least equal the sample size of an adequately powered trial. Trial sequential analysis (TSA) corresponds to group sequential analysis of a single trial and may be applied to meta-analysis to evaluate the evidence. Six randomly selected neonatal meta-analyses with at least five trials reporting a binary outcome were examined. Low-bias heterogeneity-adjusted information size and information size determined from an assumed intervention effect of 15% were calculated. These were used for constructing trial sequential monitoring boundaries. We assessed the cumulative z-curves' crossing of P = 0.05 and the boundaries. Five meta-analyses showed early potentially spurious P < 0.05 values. In three significant meta-analyses the cumulative z-curves crossed both boundaries, establishing firm evidence of an intervention effect. In two nonsignificant meta-analyses the cumulative z-curves crossed P = 0.05, but never the boundaries, demonstrating early potentially spurious P < 0.05 values. In one nonsignificant meta-analysis the cumulative z-curve never crossed P = 0.05 or the boundaries. TSAs may establish when firm evidence is reached in meta-analysis.
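Trial sequential monitoring boundaries are commonly derived from an alpha-spending approach; as a hedged illustration, the sketch below evaluates an O'Brien-Fleming-type spending function, showing how little of the overall alpha is available at early looks. It does not compute the exact TSA boundaries, which also depend on the number and spacing of looks and on the information size calculation.
```python
from scipy import stats

def obf_spending(t, alpha=0.05):
    """O'Brien-Fleming-type alpha-spending function: the cumulative type I
    error allowed to be 'spent' once a fraction t of the required information
    size has accrued. Very little alpha is spent at early looks, which is why
    early crossings of P = 0.05 can be spurious."""
    z = stats.norm.ppf(1 - alpha / 2)
    return 2 * (1 - stats.norm.cdf(z / max(t, 1e-12) ** 0.5))

for t in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"information fraction {t:.2f}: cumulative alpha spent {obf_spending(t):.4f}")
```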