4,890 result(s) for "Cronbach"
Part II: On the Use, the Misuse, and the Very Limited Usefulness of Cronbach’s Alpha: Discussing Lower Bounds and Correlated Errors
Before discussing and challenging two criticisms of coefficient α, the well-known lower bound to test-score reliability, we discuss classical test theory and the theory of coefficient α. The first criticism expressed in the psychometrics literature is that coefficient α is useful only when the model of essential τ-equivalence is consistent with the item-score data. Because this model is highly restrictive, coefficient α is smaller than the test-score reliability and, the critics conclude, should not be used. We argue that lower bounds are useful when they assess product quality features, such as a test score's reliability. The second criticism is that coefficient α incorrectly ignores correlated errors. If correlated errors entered the computation of coefficient α, theoretical values of coefficient α could exceed the test-score reliability. Because quality measures that are systematically too high are undesirable, critics dismiss coefficient α. We argue that introducing correlated errors is inconsistent with the derivation of the lower bound theorem and that the properties of coefficient α remain intact when the data contain correlated errors.
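For reference, the coefficient discussed in this abstract has the standard classical-test-theory form for a test score X that sums k item scores X_i (notation is mine, not taken from the abstract):

```latex
\alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k}\sigma^{2}_{X_i}}{\sigma^{2}_{X}}\right),
\qquad \alpha \;\le\; \rho_{XX'} ,
```

with equality to the reliability ρ_XX′ holding when the items are essentially τ-equivalent and errors are uncorrelated. This inequality is the lower-bound property the abstract defends.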
A Review on Sample Size Determination for Cronbach’s Alpha Test: A Simple Guide for Researchers
Reliability studies are commonly used in questionnaire development and validation studies. This study reviews sample size guidelines for Cronbach's alpha test. Sample sizes were calculated manually using Microsoft Excel, and sample size tables were tabulated for a single coefficient alpha and for the comparison of two alpha coefficients. For a single coefficient alpha test, setting the Cronbach's alpha coefficient equal to zero in the null hypothesis yields a sample size of less than 30 to achieve a minimum desired effect size of 0.7. However, setting the coefficient larger than zero in the null hypothesis may be necessary, and this yields a larger sample size. For the comparison of two Cronbach's alpha coefficients, a larger sample size is needed when testing for smaller effect sizes. For assessing the internal consistency of an instrument, the present study proposes setting the Cronbach's alpha coefficient at 0.5 in the null hypothesis, which requires a larger sample size. For the comparison of two Cronbach's alpha coefficients, justification is needed as to whether testing for extremely small or extremely large effect sizes is scientifically necessary.
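As a rough companion to the tables this abstract describes, the required n can be approximated from Bonett's (2002) normal approximation to ln(1 − α̂); this is a sketch under that assumption, not the paper's Excel procedure, and the function name is illustrative:

```python
from math import ceil, log
from statistics import NormalDist

def alpha_sample_size(k, alpha0, alpha1, sig=0.05, power=0.80):
    """Approximate n for a two-sided test of H0: alpha = alpha0 against
    a true alpha of alpha1, for a k-item scale, using the approximation
    var(ln(1 - alpha_hat)) ~ 2k / ((k - 1)(n - 2))."""
    z = NormalDist().inv_cdf(1 - sig / 2) + NormalDist().inv_cdf(power)
    delta = log((1 - alpha0) / (1 - alpha1))  # effect on the log scale
    n = (2 * k / (k - 1)) * (z / delta) ** 2 + 2
    return ceil(n)
```

Consistent with the abstract, a 10-item scale with H0: α = 0 and a target of 0.7 needs fewer than 30 respondents under this approximation, while moving the null value up to 0.5 increases the requirement substantially.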
Beyond playing 20 questions with nature: Integrative experiment design in the social and behavioral sciences
The dominant paradigm of experiments in the social and behavioral sciences views an experiment as a test of a theory, where the theory is assumed to generalize beyond the experiment's specific conditions. According to this view, which Alan Newell once characterized as “playing twenty questions with nature,” theory is advanced one experiment at a time, and the integration of disparate findings is assumed to happen via the scientific publishing process. In this article, we argue that the process of integration is at best inefficient, and at worst it does not, in fact, occur. We further show that the challenge of integration cannot be adequately addressed by recently proposed reforms that focus on the reliability and replicability of individual findings, nor simply by conducting more or larger experiments. Rather, the problem arises from the imprecise nature of social and behavioral theories and, consequently, a lack of commensurability across experiments conducted under different conditions. Therefore, researchers must fundamentally rethink how they design experiments and how the experiments relate to theory. We specifically describe an alternative framework, integrative experiment design, which intrinsically promotes commensurability and continuous integration of knowledge. In this paradigm, researchers explicitly map the design space of possible experiments associated with a given research question, embracing many potentially relevant theories rather than focusing on just one. Researchers then iteratively generate theories and test them with experiments explicitly sampled from the design space, allowing results to be integrated across experiments. Given recent methodological and technological developments, we conclude that this approach is feasible and would generate more-reliable, more-cumulative empirical and theoretical knowledge than the current paradigm – and with far greater efficiency.
Revisiting the theoretical and methodological foundations of depression measurement
Depressive disorders are among the leading causes of global disease burden, but there has been limited progress in understanding the causes and treatments for these disorders. In this Perspective, we suggest that such progress crucially depends on our ability to measure depression. We review the many problems with depression measurement, including limited evidence of validity and reliability. These issues raise grave concerns about common uses of depression measures, such as diagnosis or tracking treatment progress. We argue that shortcomings arise because depression measurement rests on shaky methodological and theoretical foundations. Moving forward, we need to break with the field's tradition that has, for decades, divorced theories about depression from how we measure it. Instead, we suggest that epistemic iteration, an iterative exchange between theory and measurement, provides a crucial avenue for depression measurement to progress.
A journey around alpha and omega to estimate internal consistency reliability
Based on recent psychometric developments, this paper presents a conceptual and practical guide for estimating the internal consistency reliability of measures obtained as item sums or means. The internal consistency reliability coefficient is presented as a by-product of the measurement model underlying the item responses. A three-step procedure is proposed for its estimation: descriptive data analysis, testing of relevant measurement models, and computation of the internal consistency coefficient and its confidence interval. The formulas provided include: (a) Cronbach's alpha and omega coefficients for unidimensional measures with quantitative item response scales; (b) ordinal omega, ordinal alpha and nonlinear reliability coefficients for unidimensional measures with dichotomous and ordinal items; (c) omega and hierarchical omega coefficients for essentially unidimensional scales presenting method effects. The procedure is generalized to weighted sum measures, multidimensional scales, complex designs with multilevel and/or missing data, and scale development. Four illustrative numerical examples are fully explained, and the data and R syntax are provided.
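The sum-score alpha this abstract starts from can be computed directly from an item-by-respondent score matrix; a minimal pure-Python sketch (illustrative names, not the paper's R syntax):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per item, each of length n (respondents).
    Returns coefficient alpha for the sum score."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # sum score per respondent
    item_var = sum(variance(col) for col in items)    # sum of item variances
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Three identical items give α = 1; any disagreement between items pulls α below 1.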
Neither Cronbach’s Alpha nor McDonald’s Omega: A Commentary on Sijtsma and Pfadt
Sijtsma and Pfadt (2021) published a thought-provoking article on coefficient alpha. I make the following arguments against their work. 1) Kuder and Richardson (1937) deserve more credit for coefficient alpha than Cronbach (1951). 2) We should distinguish between the definition of reliability and its meaning. 3) We should be wary of overfitting in the use of FA reliability. 4) Our primary concern is to obtain accurate reliability estimates rather than conservative estimates. 5) Several reliability estimators, such as λ2, μ2, congeneric reliability and the Gilmer-Feldt coefficient, are more accurate than coefficient alpha. 6) The name omega should not be used to refer to a specific reliability estimator.
Coefficient Alpha: The Resistance of a Classic
During the 20th century the alpha coefficient (α) was widely used to estimate the internal consistency reliability of test scores. After misuses were identified in the early 21st century, alternatives became widespread, especially the omega coefficient (ω). Nowadays, α is re-emerging as an acceptable option for reliability estimation. A review of recent academic contributions, journal publication habits and recommendations from normative texts was carried out to identify good practices in the estimation of internal consistency reliability. To guide the analysis, we propose a three-phase decision diagram comprising item description, fit of the measurement model for the test, and choice of the reliability coefficient for the test score(s). We also provide recommendations on the use of R, Jamovi, JASP, Mplus, SPSS and Stata software to perform the analysis. Both α and ω are suitable for items with approximately normal distributions and approximately unidimensional, congeneric measures without extreme factor loadings. When items show non-normal distributions, strong specific components, or correlated errors, variants of ω are more appropriate; some require specific data-gathering designs. On a practical level, we recommend a critical approach when using the software.
Design and validity of an instrument: run cross hopping throw and catch (RCHT&C) test to assess motor competence in athletics (Diseño y validez de un instrumento: prueba de lanzamiento y recepción con salto cruzado (RCHT&C) para evaluar la competencia motriz en atletismo)
The aim of this study was to design and validate a tool for assessing motor competence (MC) and detecting talent in athletics among children aged 6 to 10 years. Ten experts were carefully selected to collaborate in the validation. Cronbach's alpha and the intraclass correlation coefficient (ICC) were used to assess construct validity and reliability, respectively. Lin's concordance correlation coefficient (CCC) was calculated as a complementary test, and Aiken's V was used to validate the tool. The ICC (0.855) and Cronbach's alpha (0.922) showed acceptable reliability and consistency, respectively, and Lin's CCC (0.786) indicated excellent reproducibility, demonstrating stability and consistency over a two-week period. Aiken's V was 0.92, confirming the validity of the test. By parameter, univocity had an Aiken's V of 0.92, relevance of 0.91, and importance of 0.91. We therefore conclude that this tool can be a valid test for assessing MC in athletics.
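Lin's CCC reported in this abstract has a simple closed form; a sketch in the population-moment form of Lin's (1989) coefficient (illustrative names, not the study's data):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two rating
    series x and y: 2*s_xy / (s_xx + s_yy + (mean_x - mean_y)^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x) / n
    syy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sxx + syy + (mx - my) ** 2)
```

Unlike the Pearson correlation, the CCC penalizes a constant offset between raters: two series that track each other perfectly but are shifted apart score below 1.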
Validity and Reliability of the Indonesian Version of Nordic Occupational Skin Questionnaire 2002 (NOSQ-2002)
Background: Epidemiological data on occupational hand eczema in Indonesia are still limited, partly because there is no valid and reliable survey instrument in the Indonesian language. This study aims to translate the Nordic Occupational Skin Questionnaire 2002 (NOSQ-2002) into Indonesian and assess the validity and reliability of the Indonesian NOSQ-2002 as an instrument for epidemiological surveys and screening of occupational hand eczema. Methods: The original English version of NOSQ-2002 was translated into Indonesian following the standard translation procedure. The collectively approved Indonesian NOSQ-2002 was then completed by a group of 194 textile employees of PT. Panca Persada Mulia-PANDATEX in Magelang, Central Java, Indonesia. Validity was assessed using the Pearson correlation of each question with the total score. Reliability was evaluated using Cronbach's alpha. The sensitivity and specificity of Indonesian NOSQ-2002 screening were determined by comparison with examination by a physician as the gold standard. Results: Pearson correlation values for each question ranged from 0.252 to 0.905, all surpassing the critical r value, indicating that the NOSQ-2002 questions are valid. The reliability of NOSQ-2002 was rated good, with a Cronbach's alpha of 0.933. The Indonesian NOSQ-2002 demonstrated a sensitivity of 93.3% and a specificity of 98.8% for screening occupational hand eczema. Conclusion: The Indonesian version of NOSQ-2002 is a valid and reliable instrument for epidemiological surveys and screening of occupational hand eczema.
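The sensitivity and specificity figures in this abstract come from a 2×2 comparison of screening results against the physician's gold standard; a minimal sketch (illustrative labels, not the study's data):

```python
def sens_spec(pred, truth):
    """pred, truth: lists of 0/1 labels (1 = condition present).
    Returns (sensitivity, specificity) of pred against truth."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)  # true-positive, true-negative rates
```

Sensitivity is the share of gold-standard cases the questionnaire flags; specificity is the share of non-cases it correctly clears.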
Development and Validation of Knowledge, Attitude, and Practice Questionnaire: Toward Safe Working in Confined Spaces
Confined space workers perform a wide range of tasks, many of which carry a significant risk of hazardous exposure. Hence, a reliable and valid questionnaire is important for assessing the knowledge, attitude, and practice (KAP) of workers in this field. The present study was conducted to develop and validate a questionnaire assessing KAP for safe working in confined spaces. The questionnaire went through a development and validation process. The development stage consisted of a literature review, expert opinion, and evaluation by experts in the field via cognitive debriefing. The validation stage encompassed exploratory and confirmatory parts to investigate the psychometric properties of the questionnaire. A total of 350 participants were recruited from among confined space workers at two oil and gas companies in Malaysia. Two-parameter logistic item response theory (2-PL IRT) analysis was used for the knowledge section, and exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were used for the attitude and practice sections. The development stage resulted in 30 items across the knowledge, attitude, and practice sections. Items in the knowledge section showed acceptable difficulty and discrimination in the 2-PL IRT analysis. The EFA yielded a one-factor model for each of the attitude and practice sections, containing 18 items with factor loadings > 0.4. Cronbach's alpha was 0.804 and 0.917 for the attitude and practice sections, respectively. The CFA for the attitude and practice sections indicated good model fit (Raykov's rho = 0.814 and 0.912, respectively). All items indicated good reliability and valid psychometrics for determining KAP on safe working in confined spaces.
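The 2-PL IRT model used for the knowledge items assigns each item a difficulty b and a discrimination a; a minimal sketch of its item characteristic curve (illustrative names, not the study's estimates):

```python
from math import exp

def p_correct(theta, a, b):
    """Two-parameter logistic (2-PL) IRT model: probability that a
    respondent with ability theta answers correctly an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))
```

At θ = b the probability is exactly 0.5, which is what makes b interpretable as the item's difficulty; larger a makes the curve steeper around that point, i.e. the item discriminates more sharply between abilities just below and just above b.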