505 results for "Multiple comparison methods"
Methods to adjust for multiple comparisons in the analysis and sample size calculation of randomised controlled trials with multiple primary outcomes
Background Multiple primary outcomes may be specified in randomised controlled trials (RCTs). When analysing multiple outcomes it is important to control the family-wise error rate (FWER). A popular approach is to adjust the p-values corresponding to each statistical test used to investigate the intervention effects by using the Bonferroni correction. It is also important to consider the power of the trial to detect true intervention effects. In the context of multiple outcomes, depending on the clinical objective, the power can be defined as 'disjunctive power', the probability of detecting at least one true intervention effect across all the outcomes, or 'marginal power', the probability of finding a true intervention effect on a nominated outcome. We provide practical recommendations on which method may be used to adjust for multiple comparisons in the sample size calculation and the analysis of RCTs with multiple primary outcomes. We also discuss the implications for the sample size required to obtain 90% disjunctive power and 90% marginal power. Methods We use simulation studies to investigate the disjunctive power, marginal power and FWER obtained after applying Bonferroni, Holm, Hochberg, Dubey/Armitage-Parmar and Stepdown-minP adjustment methods. Different simulation scenarios were constructed by varying the number of outcomes, the degree of correlation between the outcomes, the intervention effect sizes and the proportion of missing data. Results The Bonferroni and Holm methods provide the same disjunctive power. The Hochberg and Hommel methods provide power gains for the analysis, albeit small, in comparison to the Bonferroni method. The Stepdown-minP procedure performs well for complete data. However, it removes participants with missing values prior to the analysis, resulting in a loss of power when there are missing data.
The sample size requirement to achieve the desired disjunctive power may be smaller than that required to achieve the desired marginal power. The choice between specifying a disjunctive or marginal power should depend on the clinical objective.
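The Bonferroni and Holm adjustments discussed in this abstract are simple enough to sketch directly. The following is a minimal pure-Python illustration (not the authors' code) of both procedures; Holm's step-down method is uniformly at least as powerful as Bonferroni while still controlling the FWER.

```python
def bonferroni(pvals):
    """Bonferroni-adjusted p-values: multiply each raw p by the number of tests."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm step-down adjusted p-values: the i-th smallest raw p is scaled
    by (m - i), with a running maximum to keep the adjusted values monotone."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

raw = [0.011, 0.02, 0.035, 0.045]
print([round(p, 3) for p in bonferroni(raw)])  # [0.044, 0.08, 0.14, 0.18]
print([round(p, 3) for p in holm(raw)])        # [0.044, 0.06, 0.07, 0.07]
```

At a 5% FWER, Bonferroni rejects only the first hypothesis here, while Holm retains the same FWER guarantee but yields smaller adjusted p-values for the remaining tests.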
Scapular motion during shoulder joint extension movement
Few reports exist on scapular motion during shoulder joint extension. Understanding the normal motion of shoulder joint extension may be useful in evaluating and treating patients with diminished or minimal shoulder joint extension. Therefore, this study aimed to identify scapular motion during shoulder joint extension movement in a sitting position. Shoulder joint extension movement in the sitting position was measured in 22 healthy adults (age, 25.8 ± 2.7 years). Shoulder joint extension, scapular upward rotation, anterior tilt, external rotation angles, and the acromion position were investigated using a three-dimensional motion analyzer. The differences between the values at 10° to 50° of shoulder joint extension and the value at 0° of shoulder joint extension were examined. The results were compared using a multiple comparison method. In most participants, the scapula tilted posteriorly up to 30° of shoulder joint extension and anteriorly after 30°. Scapular upward and external rotation continued to increase with shoulder extension. Furthermore, the acromion was displaced upward and backward. Thus, scapular posterior tilt is necessary for shoulder joint extension during the initial movement, followed by anterior tilt. The acromion may have been displaced posteriorly because of clavicular retraction, causing the scapula to tilt posteriorly. After 30° of shoulder joint extension, the scapular anterior tilt may have prevailed over the scapular posterior tilt.
ANOVA under Unequal Error Variances
By taking a generalized approach to finding p-values, the classical F-test of the one-way ANOVA is extended to the case of unequal error variances. The relationship of this result to other solutions in the literature is discussed. An exact test for comparing variances of a number of populations is also developed. Scheffé's procedure of multiple comparison is extended to the case of unequal variances. The possibility and the approach that one can take to extend the results to simple designs involving more than one factor are briefly discussed.
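A standard alternative to the classical F-test when error variances are unequal is Welch's heteroscedastic one-way ANOVA. The sketch below (an illustration of the general idea, not the article's own generalized-p-value method) computes the Welch F statistic and its degrees of freedom in pure Python; obtaining a p-value would additionally require the F distribution tail (e.g. `scipy.stats.f.sf`).

```python
from statistics import mean, variance

def welch_anova(groups):
    """Welch's one-way ANOVA for unequal error variances.
    Returns (F statistic, numerator df, approximate denominator df)."""
    k = len(groups)
    w = [len(g) / variance(g) for g in groups]          # weights n_i / s_i^2
    W = sum(w)
    grand = sum(wi * mean(g) for wi, g in zip(w, groups)) / W
    num = sum(wi * (mean(g) - grand) ** 2
              for wi, g in zip(w, groups)) / (k - 1)
    lam = sum((1 - wi / W) ** 2 / (len(g) - 1)
              for wi, g in zip(w, groups))
    den = 1 + 2 * (k - 2) * lam / (k * k - 1)           # Welch's correction
    df2 = (k * k - 1) / (3 * lam)                       # Satterthwaite-style df
    return num / den, k - 1, df2
```

Because each group is weighted by its own variance estimate, the statistic does not assume a common error variance, unlike the classical F-test.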
Analysis of Cutting Properties with Reference to Amount of Coolant used in an Environment-Conscious Turning Process
In recent years, environmentally conscious design and manufacturing technologies have attracted considerable attention. The coolants, lubricants, solvents, metallic chips and discarded tools from manufacturing operations harm our environment and the earth's ecosystem. In the present work, the Tukey method of multiple comparisons is used to select the minimum level of coolant required in a turning process. The amount of coolant is varied in 270 designed experiments, and the parameters cutting temperature, surface roughness, and specific cutting energy are carefully evaluated. The effects of the coolant mix ratio as well as the amount of coolant on the turning process are studied. The cutting temperature and surface roughness for different quantities of coolant are investigated by an analysis of variance (ANOVA) test and a multiple comparison method. The ANOVA results signify that the average tool temperature and surface roughness depend on the amount of coolant. Based on Tukey's Honestly Significant Difference (HSD) method, one of the multiple comparison methods, the minimum level of coolant is 1.0 L/min with a 2% mix ratio with respect to controlling tool temperature. The F-test concludes that the amount of coolant used does not have any significant effect on specific cutting energy. Finally, the Tukey method ascertains that 0.5 L/min with a 6% mix ratio is the minimum level of coolant required in the turning process without any serious degradation of the surface finish. Considering all aspects of cutting, the minimum coolant required is 1.0 L/min with a 6% mix ratio. This is merely half the coolant currently used, i.e. 2.0 L/min with a 10% mix ratio. Minimal use of coolant is not only economically desirable for reducing manufacturing cost but also imparts fewer hazards to human health. Also, sparing use of coolant will eventually transform the turning process into a more environment-conscious manufacturing process.
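Tukey's HSD test, as used in this study, compares all pairs of group means against a single honestly-significant-difference threshold. The sketch below illustrates the mechanics for a balanced one-way layout; note that `q_crit`, the upper critical value of the studentized range distribution, is assumed to come from a table (it is an input here, not computed), and the data are hypothetical, not the study's measurements.

```python
from itertools import combinations
from math import sqrt
from statistics import mean, variance

def tukey_hsd(groups, q_crit):
    """Tukey HSD for a balanced one-way design.
    Flags pairs whose mean difference exceeds HSD = q * sqrt(MSE / n).
    q_crit: studentized-range critical value for (k groups, error df),
    taken from a table -- an assumption, not computed here."""
    n = len(groups[0])                        # common group size (balanced)
    mse = mean(variance(g) for g in groups)   # pooled within-group variance
    hsd = q_crit * sqrt(mse / n)
    results = []
    for i, j in combinations(range(len(groups)), 2):
        diff = mean(groups[i]) - mean(groups[j])
        results.append((i, j, diff, abs(diff) > hsd))
    return results
```

Unlike running separate t-tests for each pair, the single HSD cutoff controls the family-wise error rate across all pairwise comparisons, which is why it suits "pick the minimum acceptable level" questions like the coolant selection here.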
Tukey's Method of Multiple Comparison in the Randomized Blocks Model
In Kempthorne's randomization model for the randomized blocks design, the errors are correlated and are not assumed to have common variance. Under simple conditions, it is shown in this article that the exact level of confidence of Tukey's method of multiple comparison approaches the nominal level of confidence as the number of blocks increases. Monte Carlo results that illustrate this convergence are also presented.
Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers
Background The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. Results We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs inferiorly to exact computation overall, others, particularly the normal, perform well, except for the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. Conclusions We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
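The exact distribution the abstract refers to can be illustrated by direct convolution. Under the null, within each block the ranks of a fixed pair of treatments form a uniform ordered pair of distinct values from 1..k, so the per-block rank difference d has probability (k - |d|) / (k(k - 1)); convolving over n blocks gives the exact distribution of the rank sum difference. This brute-force sketch conveys the combinatorial idea but is not the paper's optimized algorithm (their R implementation is linked above).

```python
from fractions import Fraction

def exact_pvalue(d_obs, n, k):
    """Exact two-sided p-value for the difference of two Friedman rank sums
    over n blocks with k treatments, by n-fold convolution of the
    per-block rank-difference distribution."""
    denom = k * (k - 1)
    # per-block: P(diff = d) = (k - |d|) / (k(k-1)) for d != 0
    block = {d: Fraction(k - abs(d), denom)
             for d in range(-(k - 1), k) if d != 0}
    dist = {0: Fraction(1)}
    for _ in range(n):                       # convolve one block at a time
        new = {}
        for total, p in dist.items():
            for d, q in block.items():
                new[total + d] = new.get(total + d, Fraction(0)) + p * q
        dist = new
    return float(sum(p for d, p in dist.items() if abs(d) >= abs(d_obs)))
```

Using exact rational arithmetic (`fractions.Fraction`) avoids the floating-point underflow that makes tail probabilities, precisely the region that matters after multiple testing correction, unreliable in naive implementations.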
Analytical hierarchy process: revolution and evolution
The Analytical Hierarchy Process (AHP) is a reliable, rigorous, and robust method for eliciting and quantifying subjective judgments in multi-criteria decision-making (MCDM). Despite the many benefits, the complications of the pairwise comparison process and the limitations of consistency in AHP are challenges that have been the subject of extensive research. AHP revolutionized how we resolve complex decision problems and has evolved substantially over three decades. We recap this evolution by introducing five new hybrid methods that combine AHP with popular weighting methods in MCDM. The proposed methods are described and evaluated systematically by implementing a widely used example in the AHP literature. We show that (i) the hybrid methods proposed in this study require fewer expert judgments than AHP but deliver the same ranking, (ii) a higher degree of involvement in the hybrid voting AHP methods leads to higher acceptability of the results when experts are also the decision-makers, and (iii) experts are more motivated and attentive in methods requiring fewer pairwise comparisons and less interaction, resulting in a more efficient process and higher acceptability.
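The pairwise comparison step at the heart of AHP can be made concrete with a small sketch. The example below uses the common geometric-mean (row) approximation to the principal eigenvector and Saaty's consistency ratio; the 3x3 matrix is an invented illustration, not one of the methods or examples from this article.

```python
from math import prod

def ahp_weights(M):
    """Priority weights from an AHP pairwise comparison matrix,
    via the geometric-mean approximation to the principal eigenvector."""
    n = len(M)
    gm = [prod(row) ** (1.0 / n) for row in M]   # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M):
    """Saaty consistency ratio CR = CI / RI; judgments with CR < 0.1
    are conventionally considered acceptably consistent."""
    n = len(M)
    w = ahp_weights(M)
    # estimate lambda_max from (M w)_i / w_i averaged over rows
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # standard random index
    return ci / ri

# hypothetical judgments: criterion 0 moderately/strongly preferred
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
```

The number of judgments an expert must supply grows as n(n-1)/2, which is exactly the burden the hybrid methods surveyed in this article aim to reduce.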
Cluster-extent based thresholding in fMRI analyses: Pitfalls and recommendations
Cluster-extent based thresholding is currently the most popular method for multiple comparisons correction of statistical maps in neuroimaging studies, due to its high sensitivity to weak and diffuse signals. However, cluster-extent based thresholding provides low spatial specificity; researchers can only infer that there is signal somewhere within a significant cluster and cannot make inferences about the statistical significance of specific locations within the cluster. This poses a particular problem when one uses a liberal cluster-defining primary threshold (i.e., higher p-values), which often produces large clusters spanning multiple anatomical regions. In such cases, it is impossible to reliably infer which anatomical regions show true effects. From a survey of 814 functional magnetic resonance imaging (fMRI) studies published in 2010 and 2011, we show that the use of liberal primary thresholds (e.g., p<.01) is endemic, and that the largest determinant of the primary threshold level is the default option in the software used. We illustrate the problems with liberal primary thresholds using an fMRI dataset from our laboratory (N=33), and present simulations demonstrating the detrimental effects of liberal primary thresholds on false positives, localization, and interpretation of fMRI findings. To avoid these pitfalls, we recommend several analysis and reporting procedures, including 1) setting primary p<.001 as a default lower limit; 2) using more stringent primary thresholds or voxel-wise correction methods for highly powered studies; and 3) adopting reporting practices that make the level of spatial precision transparent to readers. We also suggest alternative and supplementary analysis methods. 
• Cluster-extent based thresholding is popular because of its high sensitivity.
• However, cluster-extent based thresholding has several important problems.
• One pitfall is low spatial specificity when significant clusters are large.
• Another pitfall is increased false positives when a liberal primary threshold is used.
• We recommend using stringent primary thresholds and augmented reporting procedures.
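The two-stage logic of cluster-extent thresholding (a voxelwise primary threshold, then a cluster-size cutoff) can be shown on a toy 1-D "map". This sketch uses a fixed extent cutoff purely for illustration; real pipelines derive the extent threshold from random field theory or permutation, and operate on 3-D volumes.

```python
def cluster_extent_threshold(pvals, primary_p=0.001, extent_k=3):
    """Toy 1-D cluster-extent thresholding.
    Keep only contiguous runs of voxels with p < primary_p whose length
    is at least extent_k.  Returns inclusive (start, end) voxel ranges."""
    supra = [p < primary_p for p in pvals]
    clusters, start = [], None
    for i, on in enumerate(supra + [False]):   # sentinel closes the last run
        if on and start is None:
            start = i                          # run begins
        elif not on and start is not None:
            if i - start >= extent_k:          # survives the extent cutoff
                clusters.append((start, i - 1))
            start = None
    return clusters
```

Note what the output does and does not license: a surviving cluster means there is signal somewhere within that range, not that any particular voxel inside it is significant, which is exactly the spatial-specificity pitfall the article describes.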