1,629 results for "Convergent validity"
The Arabic Version of the Impact of Event Scale-Revised: Psychometric Evaluation among Psychiatric Patients and the General Public within the Context of COVID-19 Outbreak and Quarantine as Collective Traumatic Events
The Coronavirus Disease-19 (COVID-19) pandemic has provoked the development of negative emotions in almost all societies since it first broke out in late 2019. The Impact of Event Scale-Revised (IES-R) is widely used to capture emotions, thoughts, and behaviors evoked by traumatic events, including COVID-19 as a collective and persistent traumatic event. However, there is less agreement on the structure of the IES-R, signifying a need for further investigation. This study aimed to evaluate the psychometric properties of the Arabic version of the IES-R among individuals in Saudi quarantine settings, psychiatric patients, and the general public during the COVID-19 outbreak. Exploratory factor analysis revealed that the items of the IES-R present five factors with eigenvalues > 1. Examination of several competing models through confirmatory factor analysis resulted in a best fit for a six-factor structure, which comprises avoidance, intrusion, numbing, hyperarousal, sleep problems, and irritability/dysphoria. Multigroup analysis supported the configural, metric, and scalar invariance of this model across groups of gender, age, and marital status. The IES-R significantly correlated with the Depression Anxiety Stress Scale-8, perceived health status, and perceived vulnerability to COVID-19, denoting good criterion validity. HTMT ratios of all the subscales were below 0.85, denoting good discriminant validity. The values of coefficient alpha in the three samples ranged between 0.90 and 0.93. In path analysis, correlated intrusion and hyperarousal had direct positive effects on avoidance, numbing, sleep, and irritability. Numbing and irritability mediated the indirect effects of intrusion and hyperarousal on sleep and avoidance. This result signifies that cognitive activation is the main factor driving the dynamics underlying the behavioral, emotional, and sleep symptoms of collective COVID-19 trauma. 
The findings support the robust validity of the Arabic IES-R, indicating it as a sound measure that can be applied to a wide range of traumatic experiences.
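The abstract above reports HTMT ratios below 0.85 and coefficient alpha between 0.90 and 0.93. As a rough illustration of how these two statistics are computed, here is a minimal Python/NumPy sketch; the function names and the synthetic inputs are illustrative, not from the paper.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

def htmt(corr, idx_a, idx_b):
    """Heterotrait-monotrait ratio for two item clusters, given an
    item-level correlation matrix: mean between-cluster correlation
    divided by the geometric mean of the within-cluster correlations."""
    corr = np.asarray(corr, dtype=float)
    hetero = np.abs(corr[np.ix_(idx_a, idx_b)]).mean()
    mono_a = np.abs(corr[np.ix_(idx_a, idx_a)][np.triu_indices(len(idx_a), k=1)]).mean()
    mono_b = np.abs(corr[np.ix_(idx_b, idx_b)][np.triu_indices(len(idx_b), k=1)]).mean()
    return hetero / np.sqrt(mono_a * mono_b)
```

An HTMT value below the conventional 0.85 cutoff, as in the paper, is taken as evidence that two subscales measure distinct constructs.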
Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations
Many constructs in management studies, such as perceptions, personalities, attitudes, and behavioral intentions, are not directly observable. Typically, empirical studies measure such constructs using established scales with multiple indicators. When the scales are used in a different population, or when the items are translated into other languages or revised to adapt to other populations, it is essential for researchers to report the quality of the measurement scales before using them to test hypotheses. Researchers commonly report the quality of these measurement scales based on Cronbach’s alpha and confirmatory factor analysis results. However, these results are usually inadequate and sometimes inappropriate. Moreover, researchers rarely consider sampling errors for these psychometric quality measures. In this best-practice paper, we first critically review the most frequently used approaches in empirical studies to evaluate the quality of measurement scales when using structural equation modeling. Next, we recommend best practices in assessing reliability and convergent and discriminant validity based on multiple criteria, taking sampling errors into consideration. Then, we illustrate with numerical examples the application of a specifically developed R package, measureQ, that provides a one-stop solution for implementing the recommended best practices and a template for reporting the results. measureQ is easy to implement, even for those new to R. Our overall aim is to provide a best-practice reference for future authors, reviewers, and editors in reporting and reviewing the quality of measurement scales in empirical management studies.
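Two of the quantities such best-practice checks typically involve are composite reliability (CR) and average variance extracted (AVE), both computed from standardized factor loadings. measureQ itself is an R package; purely as an illustration of the underlying formulas, here is a Python sketch (function names are our own, not from the package).

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 under standardized loadings."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean squared standardized loading; >= 0.50 is the
    conventional convergent-validity cutoff."""
    return sum(l ** 2 for l in loadings) / len(loadings)
```

For example, three items loading at 0.8 each give AVE = 0.64 and CR of roughly 0.84, which would pass the usual 0.50 and 0.70 thresholds.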
The Development and Validation of the Short Form of the Foreign Language Enjoyment Scale
We used a data set with n = 1,603 learners of foreign languages (FL) to develop and validate the short form of the Foreign Language Enjoyment Scale (S-FLES). The data was split into 2 groups, and we used the first sample to develop the short-form measure. A 3-factor hierarchical model of foreign language enjoyment (FLE) was uncovered, with FLE as a higher-order factor and with teacher appreciation, personal enjoyment, and social enjoyment as 3 lower-order factors. We selected 3 items for each of the 3 lower-order factors of the S-FLES. The proposed 9-item S-FLES was validated in the second sample, and the fit statistics for the factor structure indicated close fit. Further evidence was found to support the internal consistency, convergent validity, and discriminant validity of the S-FLES. The S-FLES provides a valid and reliable short-form measure of FLE, which can easily be included in any battery of assessments examining individual differences in FL learning.
Is Implicit Theory of Mind a Real and Robust Phenomenon? Results From a Systematic Replication Study
Recently, theory-of-mind research has been revolutionized by findings from novel implicit tasks suggesting that at least some aspects of false-belief reasoning develop earlier in ontogeny than previously assumed and operate automatically throughout adulthood. Although these findings are the empirical basis for far-reaching theories, systematic replications are still missing. This article reports a preregistered large-scale attempt to replicate four influential anticipatory-looking implicit theory-of-mind tasks using original stimuli and procedures. Results showed that only one of the four paradigms was reliably replicated. A second set of studies revealed, further, that this one paradigm was no longer replicated once confounds were removed, which calls its validity into question. There were also no correlations between paradigms, and thus, no evidence for their convergent validity. In conclusion, findings from anticipatory-looking false-belief paradigms seem less reliable and valid than previously assumed, thus limiting the conclusions that can be drawn from them.
Convergent validity assessment of formatively measured constructs in PLS-SEM
Purpose: Researchers often use partial least squares structural equation modeling (PLS-SEM) to estimate path models that include formatively specified constructs. Their validation requires running a redundancy analysis, which tests whether the formatively measured construct is highly correlated with an alternative measure of the same construct. Extending prior knowledge in the field, this paper aims to examine the conditions favoring the use of single vs. multiple items to measure the criterion construct in redundancy analyses.
Design/methodology/approach: Merging the literatures from a variety of fields, such as management, marketing, and psychometrics, we first provide a theoretical comparison of single-item and multi-item measurement and offer guidelines for designing and validating suitable single items. An empirical comparison in the context of hospitality management examines whether using a single item to measure the criterion variable yields sufficient degrees of convergent validity compared to using a multi-item measure.
Findings: The results of the empirical comparison show that, when the sample size is small, a single item yields higher degrees of convergent validity than a reflective construct does. Larger sample sizes favor the use of reflectively measured multi-item constructs, but the differences are marginal, thus supporting the use of a global single item in PLS-SEM-based redundancy analyses.
Originality/value: This study is the first to research the efficacy of single-item versus multi-item measures in PLS-SEM-based redundancy analyses. The results illustrate that a convergent validity assessment of formatively measured constructs can be implemented without triggering a pronounced increase in survey length.
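Operationally, the redundancy analysis described in this abstract amounts to correlating scores on the formative composite with scores on an alternative (e.g., global single-item) measure of the same construct; by convention a correlation of about 0.708 or higher, so that the criterion explains at least half the composite's variance, is read as evidence of convergent validity. A minimal sketch, assuming indicator weights have already been estimated (the function name and data are illustrative):

```python
import numpy as np

def redundancy_correlation(indicators, weights, global_item):
    """Correlation between a weighted formative composite and a
    global criterion item measuring the same construct."""
    composite = np.asarray(indicators, dtype=float) @ np.asarray(weights, dtype=float)
    return np.corrcoef(composite, np.asarray(global_item, dtype=float))[0, 1]
```

In practice the weights would come from the PLS-SEM estimation itself; this sketch only shows the final convergent-validity check.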
What do we know about the disruption index in scientometrics? An overview of the literature
The purpose of this paper is to provide a review of the literature on the original disruption index (DI1) and its variants in scientometrics. The DI1 has received much media attention and prompted a public debate about science policy implications, since a study published in Nature found that papers in all disciplines and patents are becoming less disruptive over time. The first part of this review explains the DI1 and its variants in detail by examining their technical and theoretical properties. The remaining parts are devoted to studies that examine the validity and the limitations of the indices. Particular focus is placed on (1) possible biases that affect disruption indices, (2) the convergent and predictive validity of disruption scores, and (3) the comparative performance of the DI1 and its variants. The review shows that, while the literature on convergent validity is not entirely conclusive, it is clear that some modified index variants, in particular DI5, show higher degrees of convergent validity than DI1. The literature draws attention to the fact that (some) disruption indices suffer from inconsistency, time-sensitive biases, and several data-induced biases. The limitations of disruption indices are highlighted, and best-practice guidelines are provided. The review encourages users to inform themselves about the variety of DI1 variants and to apply the most appropriate variant. More research on the validity of disruption scores, as well as a more precise understanding of disruption as a theoretical construct, is needed before the indices can be used in research evaluation practice.
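The abstract refers to the disruption index without restating its formula. For orientation, the standard definition (commonly attributed to Funk and Owen-Smith and used in the Nature study) is DI1 = (N_F - N_B) / (N_F + N_B + N_R), where N_F counts papers citing the focal paper but none of its references, N_B papers citing both, and N_R papers citing only the references. A minimal sketch, with hypothetical input sets:

```python
def disruption_index(cite_focal, cite_refs):
    """DI1 for a focal paper, given two sets of citing papers:
    cite_focal - papers that cite the focal paper
    cite_refs  - papers that cite at least one of the focal paper's references
    """
    n_f = len(cite_focal - cite_refs)   # cite the focal paper only
    n_b = len(cite_focal & cite_refs)   # cite both focal paper and its references
    n_r = len(cite_refs - cite_focal)   # cite only the references
    denom = n_f + n_b + n_r
    return (n_f - n_b) / denom if denom else 0.0
```

Scores range from -1 (purely consolidating: every citer also cites the references) to +1 (purely disruptive: citers ignore the references); the variants reviewed in the paper, such as DI5, modify the counting rules.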
Development and Validation of the Camouflaging Autistic Traits Questionnaire (CAT-Q)
There currently exist no self-report measures of social camouflaging behaviours (strategies used to compensate for or mask autistic characteristics during social interactions). The Camouflaging Autistic Traits Questionnaire (CAT-Q) was developed from autistic adults’ experiences of camouflaging, and was administered online to 354 autistic and 478 non-autistic adults. Exploratory factor analysis suggested three factors, comprising of 25 items in total. Good model fit was demonstrated through confirmatory factor analysis, with measurement invariance analyses demonstrating equivalent factor structures across gender and diagnostic group. Internal consistency (α = 0.94) and preliminary test–retest reliability (r = 0.77) were acceptable. Convergent validity was demonstrated through comparison with measures of autistic traits, wellbeing, anxiety, and depression. The present study provides robust psychometric support for the CAT-Q.
Risk Preference
Psychology offers conceptual and analytic tools that can advance the discussion on the nature of risk preference and its measurement in the behavioral sciences. We discuss the revealed and stated preference measurement traditions, which have coexisted in both psychology and economics in the study of risk preferences, and explore issues of temporal stability, convergent validity, and predictive validity with regard to measurement of risk preferences. As for temporal stability, does risk preference, as a psychological trait, show a degree of stability over time that approximates what has been established for other major traits, such as intelligence, or, alternatively, is it more similar in stability to transitory psychological states, such as emotional states? Convergent validity refers to the degree to which different measures of a psychological construct capture a common underlying characteristic or trait. Do measures of risk preference all capture a unitary psychological trait that is indicative of risky behavior across various domains, or do they capture various traits that independently contribute to risky behavior in specific areas of life, such as financial, health, and recreational domains? Predictive validity refers to the extent to which a psychological trait has power in forecasting behavior. Intelligence and major personality traits have been shown to predict important life outcomes, such as academic and professional achievement, which suggests there could be studies of the short- and long-term outcomes of risk preference, something lacking in current psychological (and economic) research. We discuss the current empirical knowledge on risk preferences in light of these considerations.
EXPLORATORY STRUCTURAL EQUATION MODELING IN SECOND LANGUAGE RESEARCH
This study offers methodological synergy in the examination of factorial structure in second language (L2) research. It illustrates the effectiveness and flexibility of the recently developed exploratory structural equation modeling (ESEM) method, which integrates the advantages of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) into one complete measurement model. Two sets of data were collected using the L2 Passion Scale, which measures a dualistic model of passion. Study 1 participants were 220 L2 students. A comparison was made between the CFA and the ESEM models. The results demonstrated the superiority of the ESEM method relative to CFA in terms of better goodness-of-fit indices and realistic correlated factors. These results were replicated in another sample of 272 L2 students, providing support for the predictive validity using a structural ESEM model. Guidelines are provided and Mplus syntax files (codes) are included to help analysts apply the methods. We also make the data available publicly. Overall, this research demonstrated the usefulness of ESEM for examining the construct, discriminant, and convergent validity of L2 scales over CFA.
Resilience in Context: A Brief and Culturally Grounded Measure for Syrian Refugee and Jordanian Host-Community Adolescents
Validated measures are needed for assessing resilience in conflict settings. An Arabic version of the Child and Youth Resilience Measure (CYRM) was developed and tested in Jordan. Following qualitative work, surveys were implemented with male/female, refugee/nonrefugee samples (N = 603, 11-18 years). Confirmatory factor analyses tested three-factor structures for 28- and 12-item CYRMs and measurement equivalence across groups. CYRM-12 showed measurement reliability and face, content, construct (comparative fit index = .92-.98), and convergent validity. Gender-differentiated item loadings reflected resource access and social responsibilities. Resilience scores were inversely associated with mental health symptoms, and for Syrian refugees were unrelated to lifetime trauma exposure. In assessing individual, family, and community-level dimensions of resilience, the CYRM is a useful measure for research and practice with refugee and host-community youth.