Search Results
11,703 results for "Test validity and reliability"
Experienced incivility in the workplace: A meta-analytical review of its construct validity and nomological network
Although workplace incivility has received increasing attention in organizational research over the past two decades, there have been recurring questions about its construct validity, especially vis-à-vis other forms of workplace mistreatment. Also, the antecedents of experienced incivility remain understudied, leaving an incomplete understanding of its nomological network. In this meta-analysis using Schmidt and Hunter's [Methods of meta-analysis: Correcting error and bias in research findings (3rd ed.), Sage] random-effects meta-analytic methods, we validate the construct of incivility by testing its reliability, convergent and discriminant validity, as well as its incremental predictive validity over other forms of mistreatment. We also extend its nomological network by drawing on the perpetrator predation framework to systematically study the antecedents of experienced incivility. Based on 105 independent samples and 51,008 participants, we find extensive support for incivility's construct validity. In addition, we demonstrate that demographic characteristics (gender, race, rank, and tenure), personality traits (agreeableness, conscientiousness, neuroticism, negative affectivity, and self-esteem), and contextual factors (perceived uncivil climate and socially supportive climate) are important antecedents of experienced incivility, with contextual factors displaying a stronger association with incivility. In a supplementary primary study with 457 participants, we find further support for the construct validity of incivility. We discuss the theoretical and practical implications of this study.
Heuristics versus statistics in discriminant validity testing: a comparison of four procedures
Purpose: The purpose of this paper is to review and extend recent simulation studies on discriminant validity measures, contrasting the use of cutoff values (i.e. heuristics) with inferential tests.
Design/methodology/approach: Based on a simulation study, which considers different construct correlations, sample sizes, numbers of indicators and loading patterns, the authors assess each criterion’s sensitivity to type I and type II errors.
Findings: The findings of the simulation study provide further evidence for the robustness of the heterotrait–monotrait (HTMT) ratio of correlations criterion as an estimator of disattenuated (perfectly reliable) correlations between constructs, whose performance parallels that of the standard constrained PHI approach. Furthermore, the authors identify situations in which both methods fail and suggest an alternative criterion.
Originality/value: Addressing the limitations of prior simulation studies, the authors use both directional comparisons (i.e. heuristics) and inferential tests to facilitate the comparison of the HTMT and PHI methods. Furthermore, the simulation considers criteria that have not been assessed in prior research.
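As an illustrative sketch (not code from the paper), the HTMT criterion divides the average between-construct (heterotrait) item correlation by the geometric mean of the average within-construct (monotrait) item correlations; the correlation matrix below is hypothetical:

```python
from itertools import combinations
from math import sqrt
from statistics import mean

def htmt(corr, idx_a, idx_b):
    """HTMT ratio: mean between-construct item correlation divided by the
    geometric mean of the mean within-construct item correlations."""
    hetero = mean(corr[i][j] for i in idx_a for j in idx_b)
    mono_a = mean(corr[i][j] for i, j in combinations(idx_a, 2))
    mono_b = mean(corr[i][j] for i, j in combinations(idx_b, 2))
    return hetero / sqrt(mono_a * mono_b)

# Hypothetical item correlation matrix: items 0-1 measure construct A,
# items 2-3 measure construct B.
corr = [
    [1.0, 0.8, 0.4, 0.4],
    [0.8, 1.0, 0.4, 0.4],
    [0.4, 0.4, 1.0, 0.8],
    [0.4, 0.4, 0.8, 1.0],
]
ratio = htmt(corr, [0, 1], [2, 3])
```

In heuristic use, the resulting ratio would be compared against a cutoff such as 0.85; the inferential alternative the paper contrasts tests whether the ratio differs from 1.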
Development and Validation of a Brief Version of the Difficulties in Emotion Regulation Scale: The DERS-16
The Difficulties in Emotion Regulation Scale (DERS) is a widely used, theoretically driven, and psychometrically sound self-report measure of emotion regulation difficulties. However, at 36 items, the DERS may be challenging to administer in some situations or settings (e.g., in the course of patient care or large-scale epidemiological studies). Consequently, there is a need for a briefer version of the DERS. The goal of the present studies was to develop and evaluate a 16-item version of the DERS – the DERS-16. The reliability and validity of the DERS-16 were examined in a clinical sample (N = 96) and two large community samples (Ns = 102 and 482). The validity of the DERS-16 was evaluated by comparing the relative strength of the association of the two versions of the DERS with measures of emotion regulation and related constructs, psychopathology, and clinically relevant behaviors theorized to stem from emotion regulation deficits. Results demonstrate that the DERS-16 retained excellent internal consistency, good test-retest reliability, and good convergent and discriminant validity. Further, the DERS-16 showed minimal differences in its convergent and discriminant validity with relevant measures when compared to the original DERS. In conclusion, the DERS-16 offers a valid and brief method for the assessment of overall emotion regulation difficulties.
Cronbach's alpha reliability: Interval estimation, hypothesis testing, and sample size planning
Cronbach’s alpha is one of the most widely used measures of reliability in the social and organizational sciences. Current practice is to report the sample value of Cronbach’s alpha reliability, but a confidence interval for the population reliability value also should be reported. The traditional confidence interval for the population value of Cronbach’s alpha makes an unnecessarily restrictive assumption that the multiple measurements have equal variances and equal covariances. We propose a confidence interval that does not require equal variances or equal covariances. The results of a simulation study demonstrated that the proposed method performed better than alternative methods. We also present some sample size formulas that approximate the sample size requirements for desired power or desired confidence interval precision. R functions are provided that can be used to implement the proposed confidence interval and sample size methods.
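As a minimal illustration (not the authors' R functions), the sample value of Cronbach's alpha can be computed from a respondents-by-items score matrix; the scores below are invented:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for rows of respondent scores (one score per item)."""
    k = len(rows[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*rows)]  # per-item sample variances
    total_var = variance([sum(row) for row in rows])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: 5 respondents rating 4 items
scores = [
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
]
alpha = cronbach_alpha(scores)
```

This is the point estimate only; the paper's contribution is an interval around this value that does not assume equal item variances and covariances.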
Development and Validation of the Camouflaging Autistic Traits Questionnaire (CAT-Q)
There currently exist no self-report measures of social camouflaging behaviours (strategies used to compensate for or mask autistic characteristics during social interactions). The Camouflaging Autistic Traits Questionnaire (CAT-Q) was developed from autistic adults’ experiences of camouflaging, and was administered online to 354 autistic and 478 non-autistic adults. Exploratory factor analysis suggested three factors, comprising of 25 items in total. Good model fit was demonstrated through confirmatory factor analysis, with measurement invariance analyses demonstrating equivalent factor structures across gender and diagnostic group. Internal consistency (α = 0.94) and preliminary test–retest reliability (r = 0.77) were acceptable. Convergent validity was demonstrated through comparison with measures of autistic traits, wellbeing, anxiety, and depression. The present study provides robust psychometric support for the CAT-Q.
An Equivalence Approach to Balance and Placebo Tests
Recent emphasis on credible causal designs has led to the expectation that scholars justify their research designs by testing the plausibility of their causal identification assumptions, often through balance and placebo tests. Yet current practice is to use statistical tests with an inappropriate null hypothesis of no difference, which can result in equating nonsignificant differences with significant homogeneity. Instead, we argue that researchers should begin with the initial hypothesis that the data are inconsistent with a valid research design, and provide sufficient statistical evidence in favor of a valid design. When tests are correctly specified so that difference is the null and equivalence is the alternative, the problems afflicting traditional tests are alleviated. We argue that equivalence tests are better able to incorporate substantive considerations about what constitutes good balance on covariates and placebo outcomes than traditional tests. We demonstrate these advantages with applications to natural experiments.
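A common way to implement such an equivalence test is the two one-sided tests (TOST) procedure; the sketch below uses a normal approximation and invented covariate data, and is not the authors' exact specification:

```python
from math import sqrt
from statistics import NormalDist, mean, variance

def tost_p(x, y, bound):
    """Equivalence p-value via two one-sided tests (normal approximation).
    The null hypothesis is |mean(x) - mean(y)| >= bound (i.e. NOT balanced);
    a small p-value is evidence of balance within the equivalence range."""
    diff = mean(x) - mean(y)
    se = sqrt(variance(x) / len(x) + variance(y) / len(y))
    nd = NormalDist()
    p_lower = 1 - nd.cdf((diff + bound) / se)  # tests against diff <= -bound
    p_upper = nd.cdf((diff - bound) / se)      # tests against diff >= +bound
    return max(p_lower, p_upper)

# Hypothetical covariate values in treatment and control groups
treated = [i % 10 for i in range(100)]
control = [i % 10 for i in range(100)]
p = tost_p(treated, control, bound=1.0)
```

Note the reversal the abstract argues for: difference is the null and equivalence the alternative, so a nonsignificant result no longer gets mistaken for evidence of good balance.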
Validating the Interpretations and Uses of Test Scores
To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based interpretations and uses. Validation then can be thought of as an evaluation of the coherence and completeness of this interpretation/use argument and of the plausibility of its inferences and assumptions. In outlining the argument-based approach to validation, this paper makes eight general points. First, it is the proposed score interpretations and uses that are validated and not the test or the test scores. Second, the validity of a proposed interpretation or use depends on how well the evidence supports the claims being made. Third, more-ambitious claims require more support than less-ambitious claims. Fourth, more-ambitious claims (e.g., construct interpretations) tend to be more useful than less-ambitious claims, but they are also harder to validate. Fifth, interpretations and uses can change over time in response to new needs and new understandings leading to changes in the evidence needed for validation. Sixth, the evaluation of score uses requires an evaluation of the consequences of the proposed uses; negative consequences can render a score use unacceptable. Seventh, the rejection of a score use does not necessarily invalidate a prior, underlying score interpretation. Eighth, the validation of the score interpretation on which a score use is based does not validate the score use.
A Multidimensional Tool Based on the eHealth Literacy Framework: Development and Initial Validity Testing of the eHealth Literacy Questionnaire (eHLQ)
For people to be able to access, understand, and benefit from the increasing digitalization of health services, it is critical that services are provided in a way that meets the user's needs, resources, and competence. The objective of the study was to develop a questionnaire that captures the 7-dimensional eHealth Literacy Framework (eHLF). Draft items were created in parallel in English and Danish. The items were generated from 450 statements collected during the conceptual development of eHLF. In all, 57 items (7 to 9 items per scale) were generated and adjusted after cognitive testing. Items were tested in 475 people recruited from settings in which the scale was intended to be used (community and health care settings) and including people with a range of chronic conditions. Measurement properties were assessed using approaches from item response theory (IRT) and classical test theory (CTT) such as confirmatory factor analysis (CFA) and reliability using composite scale reliability (CSR); potential bias due to age and sex was evaluated using differential item functioning (DIF). CFA confirmed the presence of the 7 a priori dimensions of eHLF. Following item analysis, a 35-item 7-scale questionnaire was constructed, covering (1) using technology to process health information (5 items, CSR=.84), (2) understanding of health concepts and language (5 items, CSR=.75), (3) ability to actively engage with digital services (5 items, CSR=.86), (4) feel safe and in control (5 items, CSR=.87), (5) motivated to engage with digital services (5 items, CSR=.84), (6) access to digital services that work (6 items, CSR=.77), and (7) digital services that suit individual needs (4 items, CSR=.85). A 7-factor CFA model, using small-variance priors for cross-loadings and residual correlations, had a satisfactory fit (posterior predictive P value: .27, 95% CI for the difference between the observed and replicated chi-square values: -63.7 to 133.8).
The CFA showed that all items loaded strongly on their respective factors. The IRT analysis showed that no items were found to have disordered thresholds. For most scales, discriminant validity was acceptable; however, 2 pairs of dimensions were highly correlated: dimensions 1 and 5 (r=.95) and dimensions 6 and 7 (r=.96). All dimensions were retained because of strong content differentiation and potential causal relationships between these dimensions. There was no evidence of DIF. The eHealth Literacy Questionnaire (eHLQ) is a multidimensional tool based on a well-defined a priori eHLF framework with robust properties. It has satisfactory evidence of construct validity and reliable measurement across a broad range of concepts (using both CTT and IRT traditions) in various groups. It is designed to be used to understand and evaluate people's interaction with digital health services.
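Composite scale reliability of the kind reported above is conventionally computed from standardized factor loadings; a minimal sketch, assuming uncorrelated errors with variances 1 - loading² (the loadings below are hypothetical, not the eHLQ's):

```python
def composite_reliability(loadings):
    """Composite (scale) reliability from standardized factor loadings,
    assuming uncorrelated errors with variance 1 - loading**2."""
    s = sum(loadings)                              # sum of loadings
    error_var = sum(1 - l * l for l in loadings)   # summed error variances
    return s * s / (s * s + error_var)

# Five hypothetical items each loading 0.8 on a single factor
csr = composite_reliability([0.8] * 5)
```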
Behavioral Public Administration: Combining Insights from Public Administration and Psychology
Behavioral public administration is the analysis of public administration from the micro-level perspective of individual behavior and attitudes by drawing on insights from psychology on the behavior of individuals and groups. The authors discuss how scholars in public administration currently draw on theories and methods from psychology and related fields and point to research in public administration that could benefit from further integration. An analysis of public administration topics through a psychological lens can be useful to confirm, add nuance to, or extend classical public administration theories. As such, behavioral public administration complements traditional public administration. Furthermore, it could be a two-way street for psychologists who want to test the external validity of their theories in a political-administrative setting. Finally, four principles are proposed to narrow the gap between public administration and psychology.
Reliability and validity of the Chinese version of the Childhood Trauma Questionnaire-Short Form for inpatients with schizophrenia
The evaluation of childhood trauma is essential for the treatment of schizophrenia. The short form of the Childhood Trauma Questionnaire (CTQ-SF) is a widely used measure of the experience of childhood trauma in the general population. Nevertheless, data regarding the psychometric properties of the CTQ-SF for assessing childhood trauma in patients with schizophrenia are very limited. Two hundred Chinese inpatients with schizophrenia completed the Chinese CTQ-SF, the Child Psychological Maltreatment Scale (CPMS), the Impact of Events Scale-Revised (IES-R), and the Dissociative Experiences Scale-II (DES-II). To assess the test-retest reliability of the CTQ-SF, all patients completed the CTQ-SF again two weeks later. Concurrent and convergent validity was assessed by analyzing Pearson bivariate correlation coefficients between the CTQ-SF and the CPMS, IES-R, and DES-II. The Cronbach's α coefficient of the Chinese CTQ-SF was 0.81, and the two-week test-retest reliability was 0.81 (P<0.01). The criterion-related validity coefficients of the CTQ-SF with the CPMS, IES-R, and DES-II were 0.61, 0.41, and 0.51, respectively. The Chinese CTQ-SF has satisfactory psychometric properties for measuring childhood abuse or neglect in Chinese inpatients with schizophrenia.