Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
192,359 result(s) for "Analysis of Variance"
Intraclass correlation – A discussion and demonstration of basic features
by Liljequist, David; Skavberg Roaldsen, Kirsti; Elfving, Britt
in Analysis of variance; Bias; Computer simulation
2019
A re-analysis of intraclass correlation (ICC) theory is presented together with Monte Carlo simulations of ICC probability distributions. A partly revised and simplified theory of the single-score ICC is obtained, together with an alternative and simple recipe for its use in reliability studies. Our main, practical conclusion is that in the analysis of a reliability study it is neither necessary nor convenient to start from an initial choice of a specified statistical model. Rather, one may impartially use all three single-score ICC formulas. A near equality of the three ICC values indicates the absence of bias (systematic error), in which case the classical (one-way random) ICC may be used. A consistency ICC larger than the absolute agreement ICC indicates the presence of non-negligible bias; if so, the classical ICC is invalid and misleading. An F-test may be used to confirm whether biases are present. From the resulting model (with or without bias), variances and confidence intervals may then be calculated. In the presence of bias, both the absolute agreement ICC and the consistency ICC should be reported, since they give different and complementary information about the reliability of the method. A clinical example with data from the literature is given.
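The three single-score ICC formulas the abstract refers to can be sketched from the standard mean-squares decomposition (a minimal illustration following the usual Shrout–Fleiss/McGraw–Wong conventions; this is not the authors' code, and the data layout is assumed):

```python
import numpy as np

def single_score_iccs(data):
    """Compute the three single-score ICCs from an (n subjects x k raters) array.

    Returns (ICC(1) one-way random, ICC(C,1) consistency, ICC(A,1) absolute agreement),
    using the standard mean-squares decomposition.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-rater means

    # Sums of squares
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters (systematic bias)
    ss_err = ss_total - ss_rows - ss_cols            # two-way residual
    ss_within = ss_total - ss_rows                   # one-way within-subject

    # Mean squares
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    msw = ss_within / (n * (k - 1))

    icc1 = (msr - msw) / (msr + (k - 1) * msw)                         # classical one-way
    icc_c = (msr - mse) / (msr + (k - 1) * mse)                        # consistency
    icc_a = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)  # absolute agreement
    return icc1, icc_c, icc_a
```

With a constant rater bias (e.g. one rater always scoring 1 unit higher), the consistency ICC stays at 1 while the absolute agreement and classical ICCs drop — exactly the diagnostic pattern the abstract describes.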
Journal Article
Application of the Analysis of Variance (ANOVA) in the Interpretation of Power Transformer Faults
by Thango, Bonginkosi A.
in Analysis of variance; analysis of variance (ANOVA); descriptive statistics
2022
Electrical power transformers are the most expensive and strategically prominent components of the South African electrical power grid. At the same time, they are burdened by internal winding faults, predominantly on account of insulation system failure. It is essential that these faults be swiftly and precisely uncovered and that suitable measures be adopted to separate the faulty unit from the rest of the system. Frequency response analysis (FRA) is a technique for tracking a transformer's mechanical integrity. Nevertheless, classifying the category of a fault and its severity by benchmarking measured FRA responses remains laborious and, for the most part, anchored in personnel proficiency. This work aims to standardize the FRA interpretation procedure by suggesting interpretation code criteria based on an empirical survey of transformers ranging from 315 kVA to 40 MVA. The study then proposes an analysis of variance (ANOVA)-based interpretation tool for diagnosing the statistical significance of differences between FRA fingerprint and measured profiles, which cannot reliably be judged by expert inspection alone. Additionally, descriptive statistics of FRA frequency sub-region data are proposed to evaluate shifts in both the magnitude and measuring-frequency characteristics and so formulate the recommended interpretation code criteria. To corroborate the code criteria incorporating ANOVA and descriptive statistics, the study presents various case studies with unknown FRA profiles for fault diagnosis. The results constitute proof of the reliability of the proposed code criteria and of the proposed hybrid of ANOVA and descriptive statistics.
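As a rough illustration of the kind of comparison the abstract describes, a one-way ANOVA can test whether a measured FRA trace differs significantly from a healthy fingerprint over a frequency sub-region. The traces below are synthetic assumptions, not the paper's data or its actual interpretation code:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical FRA magnitude traces (dB) over one frequency sub-region:
# a healthy "fingerprint" and a newly measured profile with a level shift
# of the sort a winding deformation might produce.
fingerprint = -40 + 5 * np.sin(np.linspace(0, 3, 200))
measured = fingerprint + rng.normal(0, 0.1, 200) + 1.5

# One-way ANOVA on the two traces: is the mean level shift significant?
f_stat, p_value = f_oneway(fingerprint, measured)
if p_value < 0.05:
    print(f"Statistically significant shift (F={f_stat:.1f}, p={p_value:.3g})")
else:
    print("No significant difference from fingerprint")
```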
Journal Article
Non-normal Data in Repeated Measures ANOVA: Impact on Type I Error and Power
by Alarcón, Rafael; Blanca, María; García-Castro, F.
in Analysis of Variance; Between-subjects design; Computer Simulation
2023
Repeated measures designs are commonly used in health and social sciences research. Although there are other, more advanced, statistical analyses, the F-statistic of repeated measures analysis of variance (RM-ANOVA) remains the most widely used procedure for analyzing differences in means. The impact of the violation of normality has been extensively studied for between-subjects ANOVA, but this is not the case for RM-ANOVA. Therefore, studies that extensively and systematically analyze the robustness of RM-ANOVA under the violation of normality are needed. This paper reports the results of two simulation studies aimed at analyzing the Type I error and power of RM-ANOVA when the normality assumption is violated but sphericity is fulfilled.
Study 1 considered 20 distributions, both known and unknown, and we manipulated the number of repeated measures (3, 4, 6, and 8) and sample size (from 10 to 300). Study 2 involved unequal distributions in each repeated measure. The distributions analyzed represent slight, moderate, and severe deviation from normality.
Overall, the results show that the Type I error and power of the F-statistic are not altered by the violation of normality.
RM-ANOVA is generally robust to non-normality when the sphericity assumption is met.
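The kind of simulation the study describes can be sketched in a few lines: generate null data from a severely non-normal distribution, run the RM-ANOVA F-test many times, and count rejections. Sample size, number of measures, and the exponential distribution below are assumed for illustration, not the authors' actual design:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(42)

def rm_anova_p(data):
    """One-way repeated-measures ANOVA p-value for an (n subjects x k measures) array."""
    n, k = data.shape
    grand = data.mean()
    subj = data.mean(axis=1, keepdims=True)    # subject effects
    treat = data.mean(axis=0, keepdims=True)   # repeated-measure effects
    ss_treat = n * ((treat - grand) ** 2).sum()
    ss_error = ((data - subj - treat + grand) ** 2).sum()
    f_stat = (ss_treat / (k - 1)) / (ss_error / ((n - 1) * (k - 1)))
    return f_dist.sf(f_stat, k - 1, (n - 1) * (k - 1))

# Null is true (every repeated measure has the same distribution), but the
# distribution is severely non-normal (exponential); independence across
# measures means sphericity holds trivially.
n_sims, rejections = 2000, 0
for _ in range(n_sims):
    sample = rng.exponential(scale=1.0, size=(30, 4))
    rejections += rm_anova_p(sample) < 0.05

print(f"Empirical Type I error: {rejections / n_sims:.3f}")
```

Per the study's conclusion, the empirical rejection rate stays close to the nominal .05 despite the skewed data.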
Journal Article
Application of student's t-test, analysis of variance, and covariance
by Pandey, Gaurav; Mishra, Priyadarshni; Mishra, Prabhaker
in Analysis of covariance; Analysis of variance; Body mass index
2019
Student's t test (t test), analysis of variance (ANOVA), and analysis of covariance (ANCOVA) are statistical methods used in hypothesis testing for the comparison of means between groups. The Student's t test is used to compare the means of two groups, whereas ANOVA is used to compare the means of three or more groups. ANOVA first yields a single overall P value; a significant overall P value indicates that the mean difference is statistically significant for at least one pair of groups. To identify the significant pair(s), multiple-comparison (post hoc) tests are used. ANOVA with one categorical independent variable is called one-way ANOVA, whereas with two categorical independent variables it is called two-way ANOVA. When at least one covariate is used to adjust the dependent variable, ANOVA becomes ANCOVA. When the sample size is small, the mean is strongly affected by outliers, so a sufficient sample size should be maintained when using these methods.
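The division of labor described above is easy to demonstrate with SciPy. The group data below are invented for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway

rng = np.random.default_rng(1)

# Hypothetical outcome measured in three treatment groups.
group_a = rng.normal(10.0, 2.0, 40)
group_b = rng.normal(10.5, 2.0, 40)
group_c = rng.normal(13.0, 2.0, 40)

# Two groups: Student's t test compares the two means.
t_stat, p_two = ttest_ind(group_a, group_b)

# Three or more groups: one-way ANOVA gives a single overall P value;
# a significant result would then be followed by post hoc comparisons.
f_stat, p_overall = f_oneway(group_a, group_b, group_c)
print(f"t-test p = {p_two:.3f}, ANOVA p = {p_overall:.3g}")
```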
Journal Article
Ecologists should not use statistical significance tests to interpret simulation model results
by Stier, Adrian C.; Rassweiler, Andrew; White, Crow
in analysis of variance; Ecologists; Ecology
2014
Simulation models are widely used to represent the dynamics of ecological systems. A common question with such models is how changes to a parameter value or functional form in the model alter the results. Some authors have chosen to answer that question using frequentist statistical hypothesis tests (e.g. ANOVA). This is inappropriate for two reasons. First, p‐values are determined by statistical power (i.e. replication), which can be arbitrarily high in a simulation context, producing minuscule p‐values regardless of the effect size. Second, the null hypothesis of no difference between treatments (e.g. parameter values) is known a priori to be false, invalidating the premise of the test. Use of p‐values is troublesome (rather than simply irrelevant) because small p‐values lend a false sense of importance to observed differences. We argue that modelers should abandon this practice and focus on evaluating the magnitude of differences between simulations.
Synthesis: Researchers analyzing field or lab data often test ecological hypotheses using frequentist statistics (t‐tests, ANOVA, etc.) that focus on p‐values. Field and lab data usually have limited sample sizes, and p‐values are valuable for quantifying the probability of making incorrect inferences in that situation. However, modern ecologists increasingly rely on simulation models to address complex questions, and those who were trained in frequentist statistics often apply the hypothesis‐testing approach inappropriately to their simulation results. Our paper explains why p‐values are not informative for interpreting simulation models, and suggests better ways to evaluate the ecological significance of model results.
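The first objection — that replication is arbitrary in a simulation, so p‐values shrink toward zero regardless of effect size — can be demonstrated directly. The effect size and sample sizes below are assumptions chosen to make the point:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

# Two simulation "treatments" whose true means differ by an ecologically
# trivial amount (0.01 units against a standard deviation of 1).
effect = 0.01
results = {}
for n in (100, 1_000_000):  # replication is arbitrary in a simulation
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    t_stat, p_value = ttest_ind(a, b)
    results[n] = p_value
    print(f"n={n:>9,}: p = {p_value:.2e}")
```

At n = 100 the trivial difference is invisible; at n = 1,000,000 the same trivial difference yields a minuscule p‐value — which is why effect magnitude, not significance, is the meaningful summary for simulation output.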
Journal Article
The vaccine hesitancy scale: Psychometric properties and validation
2018
The SAGE Working Group on Vaccine Hesitancy developed a vaccine hesitancy measure, the Vaccine Hesitancy Scale (VHS). This scale has the potential to aid in the advancement of research and immunization policy but has not yet been psychometrically evaluated.
Using a cross-sectional design, we collected self-reported survey data from a large national sample of Canadian parents from August to September 2016. An online questionnaire was completed in English or French. We used exploratory and confirmatory factor analysis to identify latent constructs underlying parents' responses to 10 VHS items (response scale 1–5, with higher scores indicating greater hesitancy). In addition to the VHS, measures included socio-demographic items, vaccine attitudes, parents’ human papillomavirus (HPV) vaccine decision-making stage, and vaccine refusal.
A total of 3779 Canadian parents completed the survey in English (74.1%) or French (25.9%). Exploratory and confirmatory factor analysis revealed a two-factor structure best explained the data, consisting of ‘lack of confidence’ (M = 1.98, SD = 0.72) and ‘risks’ (M = 3.07, SD = 0.95). Significant Pearson correlations were found between the scales and related vaccine attitudes. ANOVA analyses found significant differences in the VHS sub-scales by parents’ vaccine decision-making stages (p < .001). Independent samples t-tests found that the VHS sub-scales were associated with HPV vaccine refusal and refusing another vaccine (p < .001). Socio-demographic differences in the VHS were found; however, effect sizes were small (η2 < 0.02).
The VHS was found to have two factors that have construct and criterion validity in identifying vaccine hesitant parents. A limitation of the VHS was few items that loaded on the ‘risks’ component and a lack of positively and negatively worded items for both components. Based on these results, we suggest modifying the wording of some items and adding items on risk perceptions.
Journal Article
Hidden multiplicity in exploratory multiway ANOVA: Prevalence and remedies
by van Ravenzwaaij, Don; Matzke, Dora; Wetzels, Ruud
in Analysis of Variance; Behavioral Science and Psychology; Biomedical Research - standards
2016
Many psychologists do not realize that exploratory use of the popular multiway analysis of variance harbors a multiple-comparison problem. In the case of two factors, three separate null hypotheses are subject to test (i.e., two main effects and one interaction). Consequently, if the three tests are independent, the probability of at least one Type I error (when all null hypotheses are true) is 1 − 0.95³ ≈ 14% rather than 5%. We explain the multiple-comparison problem and demonstrate that researchers almost never correct for it. To mitigate the problem, we describe four remedies: the omnibus F test, control of the familywise error rate, control of the false discovery rate, and preregistration of the hypotheses.
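The 14% figure, and the logic of controlling the familywise error rate, follow from simple arithmetic:

```python
# Three independent tests at alpha = .05: the chance of at least one
# false positive when all null hypotheses are true.
alpha, m = 0.05, 3
familywise = 1 - (1 - alpha) ** m
print(f"P(at least one Type I error) = {familywise:.4f}")  # 0.1426

# Bonferroni-style remedy: test each hypothesis at alpha / m instead,
# pulling the familywise rate back under the nominal .05.
bonferroni = 1 - (1 - alpha / m) ** m
print(f"With Bonferroni correction:   {bonferroni:.4f}")  # 0.0492
```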
Journal Article