Search Results

46,464 results for "Statistical power analysis."
Statistical power analysis and sample size planning for moderated mediation models
Conditional process models, including moderated mediation models and mediated moderation models, are widely used in behavioral science research. However, few studies have examined how to conduct statistical power analysis for such models, and few software packages provide power analysis functionality for them. In this paper, we introduce new simulation-based methods for power analysis of conditional process models, with a focus on moderated mediation models. These methods provide intuitive ways to plan sample sizes based on the regression coefficients of a moderated mediation model as well as selected variance and covariance components. We demonstrate how the methods apply to five commonly used moderated mediation models in a simulation study and assess their performance across those models. We implement our approaches in the WebPower R package and in Web apps to ease their application.
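The simulation-based idea the abstract describes can be sketched in miniature: repeatedly generate data from an assumed model, run the planned test, and take the rejection rate as the power estimate. The sketch below uses a plain (unmoderated) mediation model with a Sobel z-test and hypothetical coefficient values, not the paper's WebPower implementation, which handles moderated mediation and more refined tests.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(n, a=0.3, b=0.3, n_sims=2000, alpha_z=1.96):
    """Monte-Carlo power for the indirect effect a*b in a simple
    mediation model X -> M -> Y, tested with a Sobel z-test.
    All parameter values here are illustrative assumptions."""
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)          # mediator equation
        y = b * m + rng.normal(size=n)          # outcome equation
        # OLS slope estimates and their standard errors (zero-mean data)
        a_hat = np.cov(x, m)[0, 1] / np.var(x)
        b_hat = np.cov(m, y)[0, 1] / np.var(m)
        se_a = np.sqrt(np.var(m - a_hat * x) / (n * np.var(x)))
        se_b = np.sqrt(np.var(y - b_hat * m) / (n * np.var(m)))
        # Sobel standard error for the product a_hat * b_hat
        se_ab = np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
        if abs(a_hat * b_hat) / se_ab > alpha_z:
            hits += 1
    return hits / n_sims

power = simulate_power(n=200)
print(f"Estimated power at n=200: {power:.2f}")
```

Sample-size planning then amounts to increasing `n` until the simulated rejection rate reaches the target (e.g., 0.80).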
A Meta-Analysis of the Role of Environment-Based Voluntariness in Information Technology Acceptance
The technology acceptance model (TAM) asserts that ease of use and usefulness are two primary determinants of behavioral intention and usage. A parallel research stream emphasizes voluntariness, a key social influence and contextual variable, as a critical factor in information technology (IT) adoption, but pays little attention to its role in TAM. This paper addresses this particular absence by investigating the impact of environment-based voluntariness on the relationships among the four primary TAM constructs. A meta-analysis of 71 empirical studies provides strong support for the hypotheses that environment-based voluntariness moderates the effects of ease of use and usefulness on behavioral intention, but not the effect of ease of use on usefulness. Moreover, inconsistent with our expectations, environment-based voluntariness does not moderate the effects of ease of use and usefulness on usage. By further analyzing the data set, we suggest this may be because of the relatively small sample size, the presence of other factors, or the inappropriate measurement of usage in previous studies. The current study contributes not only to the distinction between user-based and environment-based voluntariness but also to a more complete understanding of user acceptance of IT across system-use environments.
Statistical power analysis for the social and behavioral sciences : basic and advanced techniques
"This will be the first book to demonstrate the application of power analysis to the newer, more advanced techniques such as hierarchical linear modeling, meta-analysis, and structural equation modeling that are increasingly popular in behavioral and social science research"-- Provided by publisher.
Non-response bias assessment in logistics survey research: use fewer tests?
Purpose – This paper considers the concepts of individual and complete statistical power used in multiple testing and shows their relevance for determining the number of statistical tests to perform when assessing non-response bias.
Design/methodology/approach – A statistical power analysis was conducted of 55 survey-based research papers published over the last decade in three prestigious logistics journals (International Journal of Physical Distribution and Logistics Management, Journal of Business Logistics, Transportation Journal).
Findings – Results show that some of the low complete power levels encountered could have been avoided if fewer tests had been used in the assessment of non-response bias.
Originality/value – The research offers important recommendations to scholars engaged in survey research as they assess the effects of non-respondents on research findings. By following the recommended strategies for testing non-response bias, researchers can improve the statistical power of their findings.
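The distinction between individual and complete power can be made concrete with a small calculation. Under the simplifying assumption that the tests are independent, complete power (the probability that every one of k tests detects its true effect) is the product of the individual powers, so it erodes quickly as k grows:

```python
def complete_power(individual_power: float, k: int) -> float:
    """Probability that ALL of k tests correctly reject, assuming
    independent tests each with the same individual power.
    (A simplifying assumption for illustration; correlated tests
    behave less extremely.)"""
    return individual_power ** k

# Even a respectable 0.80 individual power collapses over many tests:
for k in (1, 5, 10, 20):
    print(f"k={k:2d}  complete power = {complete_power(0.80, k):.3f}")
```

This is the arithmetic behind the paper's recommendation: running fewer non-response-bias tests keeps complete power at a usable level.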
Development of a General Statistical Analytical System Using Nationally Standardized Medical Information
In Japan, since the Next Generation Medical Infrastructure Act, which governs anonymized medical data for R&D, came into force in 2018, the use of medical data for research and development has been expected to grow. Under the act, the Millennial Medical Record Project has collected a large amount of standardized medical data from a number of hospitals into a database. Because non-anonymized medical data are managed under strict security, however, users face difficulty accessing the data for exploratory, trial-and-error analyses. To solve this access problem, we developed a general statistical analytical system that executes a variety of statistical significance tests with statistical power analysis, allowing users to analyze the data in a trial-and-error manner without programming. In this system, the front end is a Microsoft Excel registration form for input and analysis results for output, and the back end is built on Python, R, and SQL. Although the fixed registration form supports only a limited range of analyses, results computed from the stored Millennial Medical Record data are returned quickly, without first collecting the data needed for each analysis, so the system could help medical experts and researchers exploit medical data widely and rapidly through trial and error. The system could be applied to drafting protocols for clinical research and clinical trials, increasing the potential to discover real-world evidence.
Quantifying the sampling error on burn counts in Monte-Carlo wildfire simulations using Poisson and Gamma distributions
This article provides a precise, quantitative description of the sampling error on burn counts in Monte-Carlo wildfire simulations - that is, the prediction variability introduced by the fact that the set of simulated fires is random and finite. We show that the marginal burn counts are (very nearly) Poisson-distributed in typical settings and infer through Bayesian updating that Gamma distributions are suitable summaries of the remaining uncertainty. In particular, the coefficient of variation of the burn count is equal to the inverse square root of its expected value, and this expected value is proportional to the number of simulated fires multiplied by the asymptotic burn probability. From these results, we derive practical guidelines for choosing the number of simulated fires and estimating the sampling error. Notably, the required number of simulated years is expressed as a power law. Such findings promise to relieve fire modelers of resource-consuming iterative experiments for sizing simulations and assessing their convergence: statistical theory provides better answers, faster.
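The abstract's central result is easy to verify numerically: when each of n simulated seasons burns a given pixel independently with small probability p, the burn count is approximately Poisson(n·p), so its coefficient of variation equals the inverse square root of its expected value. The values of n and p below are hypothetical, chosen only to illustrate the relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation size and per-season burn probability.
n_sim, p = 10_000, 0.002
expected_count = n_sim * p  # asymptotic expectation of the burn count

# Replicate the whole Monte-Carlo experiment many times and observe
# the spread of the resulting burn counts.
counts = rng.binomial(n_sim, p, size=100_000)
cv_empirical = counts.std() / counts.mean()
cv_theory = 1 / np.sqrt(expected_count)  # the paper's Poisson result

print(f"empirical CV {cv_empirical:.3f} vs theoretical {cv_theory:.3f}")
```

Inverting the relation gives the sizing guideline: halving the relative sampling error requires roughly four times as many simulated fire seasons.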
Power Analysis for Two-Level Multisite Randomized Cost-Effectiveness Trials
Cost-effectiveness analysis is a widely used educational evaluation tool. The randomized controlled trials that aim to evaluate the cost-effectiveness of the treatment are commonly referred to as randomized cost-effectiveness trials (RCETs). This study provides methods of power analysis for two-level multisite RCETs. Power computations take account of sample sizes, the effect size, covariate effects, nesting effects for both cost and effectiveness measures, the ratio of the total variance of the cost measure to the total variance of the effectiveness measure, and correlations between cost and effectiveness measures at each level. Illustrative examples that show how power is influenced by the sample sizes, nesting effects, covariate effects, and correlations between cost and effectiveness measures are presented. We also demonstrate how the calculations can be applied in the design phase of two-level multisite RCETs using the software PowerUp!-CEA (Version 1.0).
Design and analysis of cluster randomized trials
Cluster randomized trials (CRTs) are commonly used to evaluate the causal effects of educational interventions, in which entire clusters (e.g., schools) are randomly assigned to treatment or control conditions. This study introduces statistical methods for designing and analyzing two-level (e.g., students nested within schools) and three-level (e.g., students nested within classrooms nested within schools) CRTs. Specifically, we utilize hierarchical linear models (HLMs) to account for the dependency of the intervention participants within the same clusters, estimating the average treatment effects (ATEs) of educational interventions and other effects of interest (e.g., moderator and mediator effects). We demonstrate methods and tools for sample size planning and statistical power analysis. Additionally, we discuss common challenges and potential solutions in the design and analysis phases, including the effects of omitting one level of clustering, non-compliance, heterogeneous variance, blocking, threats to external validity, and cost-effectiveness of the intervention. We conclude with some practical suggestions for CRT design and analysis, along with recommendations for further readings.
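The nesting effects that several of these abstracts emphasize can be sketched with the standard design-effect correction for a two-level CRT: clustering inflates the variance of the treatment-effect estimate by DEFF = 1 + (m − 1)·ICC, where m is the cluster size and ICC the intraclass correlation. The normal-approximation formula and parameter values below are illustrative assumptions, not the authors' method; dedicated tools such as PowerUp! give more precise answers.

```python
from math import sqrt
from statistics import NormalDist

def crt_power(effect_size, n_clusters, cluster_size, icc, alpha=0.05):
    """Approximate power of a balanced two-arm, two-level CRT for a
    standardized effect size, via the design-effect correction
    DEFF = 1 + (m - 1) * ICC and a normal approximation
    (a rough sketch; it ignores the loss of df from few clusters)."""
    n_total = n_clusters * cluster_size
    deff = 1 + (cluster_size - 1) * icc
    se = sqrt(4 * deff / n_total)  # SE of the standardized mean difference
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_alpha - effect_size / se)

# Hypothetical design: 40 schools of 30 students, ICC = 0.10.
print(round(crt_power(0.25, n_clusters=40, cluster_size=30, icc=0.10), 2))
```

The same call with `icc=0` shows how much power the clustering costs, which is why omitting a level of nesting in the design phase leads to overoptimistic sample sizes.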