1,222 result(s) for "Classical test theory"
Classical Test Theory
Classical test theory (CTT) comprises a set of concepts and methods that provide a basis for many of the measurement tools currently used in health research. The assumptions and concepts underlying CTT are discussed. These include item and scale characteristics that derive from CTT as well as types of reliability and validity. Procedures commonly used in the development of scales under CTT are summarized, including factor analysis and the creation of scale scores. The advantages and disadvantages of CTT, its use across populations, and its continued use in the face of more recent measurement models are also discussed.
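The internal-consistency machinery CTT rests on can be made concrete with a small sketch. The example below is illustrative only (not from the article): it computes Cronbach's alpha, the standard CTT reliability coefficient, from made-up Likert responses using the definitional formula α = k/(k−1) · (1 − Σ var(item)/var(total)).

```python
# Illustrative only: Cronbach's alpha from its definitional formula
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
def cronbach_alpha(items):
    """items: one list per item; each list holds all respondents' scores."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents
    def var(xs):            # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Made-up responses: three 5-point Likert items, five respondents
responses = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
alpha = cronbach_alpha(responses)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a research scale.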
Development and Validation of the Coronary Heart Disease Scale Among the System of Quality of Life Instruments for Chronic Diseases QLICD-CHD (V2.0) Based on Classical Test Theory and Generalizability Theory
Coronary heart disease (CHD) is a common and frequently occurring disease with a long, incurable course that severely reduces patients' quality of life. This study aimed to develop and validate a quality of life scale for patients with CHD in the Chinese context. The QLICD-CHD (V2.0) was developed from the QLICD-CHD (V1.0) using programmed decision procedures. Based on data from 189 patients with CHD whose QoL was measured three times before and after treatment, the psychometric properties of the scale were evaluated for validity, reliability and responsiveness using correlation analysis, multi-trait scaling analysis, structural equation modeling, t-tests, and the G-study and D-study of generalizability theory. The SF-36 scale served as the criterion for evaluating criterion-related validity. Paired t-tests were conducted to evaluate responsiveness for each domain/facet and for the scale total, with the Standardized Response Mean (SRM) calculated. The QLICD-CHD (V2.0) comprises 42 items in 4 domains. The Cronbach's α values of the general module, the specific module and the total scale were 0.91, 0.92 and 0.91, respectively. The test-retest reliability coefficients of the overall score and all domains were higher than 0.60, except for the specific module. Correlation and factor analysis confirmed good construct validity and criterion-related validity. After treatment, the overall score and the scores of all domains changed significantly (P<0.01). The SRMs of the domain-level scores ranged from 0.27 to 0.50. Generalizability theory further confirmed the reliability of the scale through more precise variance component estimates. The QLICD-CHD (V2.0) can serve as a useful instrument with good psychometric properties for assessing QoL in patients with CHD.
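The responsiveness statistic this abstract reports, the Standardized Response Mean, is simply the mean pre-post change divided by the standard deviation of that change. A minimal sketch with invented scores (the data and function below are not from the study):

```python
# Invented sketch: Standardized Response Mean = mean(change) / sd(change)
def srm(before, after):
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = (sum((d - mean_d) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean_d / sd_d

# Hypothetical pre- and post-treatment domain scores for five patients
before = [52.0, 48.0, 60.0, 55.0, 50.0]
after  = [58.0, 50.0, 66.0, 57.0, 56.0]
value = srm(before, after)
```

By the usual convention, SRMs near 0.2, 0.5 and 0.8 are read as small, moderate and large responsiveness.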
The Fear of COVID-19 Scale: Development and Initial Validation
Background The emergence of COVID-19 and its consequences has led to fears, worries, and anxiety among individuals worldwide. The present study developed the Fear of COVID-19 Scale (FCV-19S) to complement clinical efforts in preventing the spread of COVID-19 and treating cases. Methods The sample comprised 717 Iranian participants. The items of the FCV-19S were constructed based on an extensive review of existing scales on fears, expert evaluations, and participant interviews. Several psychometric tests were conducted to ascertain its reliability and validity properties. Results After panel review and corrected item-total correlation testing, seven items with acceptable corrected item-total correlations (0.47 to 0.56) were retained and further confirmed by significant and strong factor loadings (0.66 to 0.74). Also, other properties evaluated using both classical test theory and the Rasch model were satisfactory for the seven-item scale. More specifically, reliability values such as internal consistency (α = .82) and test–retest reliability (ICC = .72) were acceptable. Concurrent validity was supported by the Hospital Anxiety and Depression Scale (with depression, r = 0.425 and anxiety, r = 0.511) and the Perceived Vulnerability to Disease Scale (with perceived infectability, r = 0.483 and germ aversion, r = 0.459). Conclusion The Fear of COVID-19 Scale, a seven-item scale, has robust psychometric properties. It is reliable and valid in assessing fear of COVID-19 among the general population and will also be useful in allaying COVID-19 fears among individuals.
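The item-retention criterion this abstract describes, the corrected item-total correlation, correlates each item with the total of the *remaining* items, so an item's own score does not inflate the correlation. A hypothetical sketch (the data and helper names are invented, not from the paper):

```python
# Invented sketch of corrected item-total correlation
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def corrected_item_total(items):
    """items: one response list per item; returns one r per item,
    correlating each item with the sum of the OTHER items."""
    n = len(items[0])
    out = []
    for i, item in enumerate(items):
        rest = [sum(items[j][r] for j in range(len(items)) if j != i)
                for r in range(n)]
        out.append(pearson(item, rest))
    return out

# Made-up responses: three items, five respondents
items = [
    [1, 2, 3, 4, 5],
    [2, 2, 3, 5, 4],
    [1, 1, 4, 4, 5],
]
rs = corrected_item_total(items)
```

Items whose corrected correlation falls below a preset cutoff (the paper used panel review alongside this statistic) are candidates for removal.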
F10 Development of assessments for later-stage Huntington's disease: HD Structured Interview of Function and HD Clinical Status Questionnaire
Background: There is a need for validated assessments for patients with later-stage HD. The LSA study aims to provide preliminary clinimetric properties for two such measures: the HD Structured Interview of Function (HD-SIF) and the HD Clinical Status Questionnaire (HDCSQ). Both assessments are administered to a Companion Participant either in person or remotely, and the properties of these tests will be evaluated using the methods of Classical Test Theory (CTT) and Item Response Theory (IRT).
Objectives: To obtain estimates of the clinimetric properties of the HD-SIF and HDCSQ.
Methods: Up to 170 dyads of Manifest HD Gene Expansion Carrier (mHDGEC) Participants and their Companion Participants are planned to be enrolled in this study from approximately 20 English-speaking study sites. The study includes two sequential parts. In Part 1, we will use the methods of CTT to evaluate the HD-SIF, a structured interview designed to gather information for making ratings on the UHDRS™ '99 functional scales (TFC, FAS and IS). In Part 2, we will use the methods of CTT and IRT to assess the clinimetric properties of the HDCSQ, a questionnaire designed specifically to capture information on disease milestones that occur during the later stages of HD, and the HD-SIF. In both parts, Companion Participants will complete a Companion Information Form, a short questionnaire about the Companion Participant's perceptions and experiences as a caregiver/companion to the mHDGEC Participant.
Status and Outlook: A robust suite of training materials has been developed to train and certify HD-SIF and HDCSQ raters. This study is entering the final phase of start-up, with recruitment scheduled to run from 3Q2021 until 2023. Preliminary results from Part 1 will be available during 2022 and a full report will be available later that year.
Upon establishing the clinimetric properties of the scales, these assessments may be used for planning studies or incorporated into observational and interventional studies of HD. Including a more advanced patient population will empower them to participate and will promote their valued contribution to research.
Design and psychometric analysis of the hopelessness and suicide ideation inventory “IDIS”
The objective was to design the Hopelessness and Suicide Ideation Inventory, also known as IDIS (Spanish acronym for Inventario de Desesperanza e Ideación Suicida), and to analyze its psychometric properties. A quantitative empirical study was conducted employing a non-experimental, instrumental, cross-sectional design. Three hundred and thirty-nine people participated in the study (67.6% female, 31.6% male), of whom 54.6% were students and 34.8% were employees. Participants completed the IDIS, the Beck Depression Inventory (BDI-II), the Positive and Negative Suicide Ideation Inventory, and the Beck Hopelessness Scale. The results indicated inter-rater reliability and positive convergent validity for both scales. Suicidal ideation showed an internal consistency of α = .76, and hopelessness of α = .81; explained variances of 41.77% and 47.52% were obtained, respectively. Based on Item Response Theory (IRT), the INFIT and OUTFIT fit statistics fell within the expected range. It was concluded that the IDIS is a reliable and valid measure; however, further evaluations of its sensitivity and specificity are encouraged.
Overview of Classical Test Theory and Item Response Theory for the Quantitative Assessment of Items in Developing Patient-Reported Outcomes Measures
The US Food and Drug Administration's guidance for industry document on patient-reported outcomes (PRO) defines content validity as "the extent to which the instrument measures the concept of interest" (FDA, 2009, p. 12). According to Strauss and Smith (2009), construct validity "is now generally viewed as a unifying form of validity for psychological measurements, subsuming both content and criterion validity" (p. 7). Hence, both qualitative and quantitative information are essential in evaluating the validity of measures. We review classical test theory and item response theory (IRT) approaches to evaluating PRO measures, including frequency of responses to each category of the items in a multi-item scale, the distribution of scale scores, floor and ceiling effects, the relationship between item response options and the total score, and the extent to which the hypothesized "difficulty" (severity) order of items is represented by observed responses. If a researcher has little qualitative data and wants preliminary information about the content validity of the instrument, then descriptive assessments using classical test theory should be the first step. As the sample size grows during subsequent stages of instrument development, confidence in the numerical estimates from Rasch and other IRT models (as well as those of classical test theory) grows with it. Classical test theory and IRT can be useful in providing a quantitative assessment of items and scales during the content-validity phase of PRO-measure development. Depending on the particular type of measure and the specific circumstances, classical test theory and/or IRT should be considered to help maximize the content validity of PRO measures.
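One of the descriptive CTT checks this review lists, floor and ceiling effects, is just the share of respondents scoring at the scale's minimum and maximum. A small illustrative sketch (the scores below are invented):

```python
# Invented example: floor/ceiling effects = proportion of respondents
# at the scale minimum / maximum (large shares suggest the scale cannot
# distinguish respondents at the extremes)
def floor_ceiling(scores, lo, hi):
    n = len(scores)
    return sum(s == lo for s in scores) / n, sum(s == hi for s in scores) / n

scores = [0, 1, 3, 7, 10, 10, 10, 5, 2, 10]   # hypothetical 0-10 scale
floor_pct, ceiling_pct = floor_ceiling(scores, lo=0, hi=10)
```

Here 40% of the invented sample sits at the maximum, which would flag a ceiling effect worth investigating.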
TechCheck: Development and Validation of an Unplugged Assessment of Computational Thinking in Early Childhood Education
There is a need for developmentally appropriate Computational Thinking (CT) assessments that can be implemented in early childhood classrooms. We developed a new instrument called TechCheck for assessing CT skills in young children that does not require prior knowledge of computer programming. TechCheck is based on developmentally appropriate CT concepts and uses a multiple-choice "unplugged" format that allows it to be administered to whole classes or in online settings in under 15 minutes. This design allows assessment of a broad range of abilities and avoids conflating coding with CT skills. We validated the instrument in a cohort of 5–9-year-old students (N = 768) participating in a research study involving a robotics coding curriculum. TechCheck showed good reliability and validity according to measures of classical test theory and item response theory. Discrimination between skill levels was adequate. Difficulty was suitable for first graders and low for second graders. The instrument showed differences in performance related to race/ethnicity. TechCheck scores correlated moderately with a previously validated CT assessment tool (TACTIC-KIBO). Overall, TechCheck has good psychometric properties, is easy to administer and score, and discriminates between children of different CT abilities. Implications, limitations, and directions for future work are discussed.
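The IRT quantities such validations report, item discrimination and difficulty, can be sketched with the two-parameter logistic (2PL) model, where the probability of a correct response depends on ability θ, discrimination a, and difficulty b. This is a generic illustration, not the model fitted in the study:

```python
import math

# Generic 2PL item response function (not the study's fitted model):
# P(correct | ability theta) for an item with discrimination a, difficulty b
def p_correct_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At average ability (theta = 0), an easy item (b = -1) is more likely
# to be answered correctly than a hard one (b = +1)
easy = p_correct_2pl(theta=0.0, a=1.2, b=-1.0)
hard = p_correct_2pl(theta=0.0, a=1.2, b=1.0)
```

"Difficulty was low for second graders" in 2PL terms means the fitted b values sat well below that group's typical θ, so most second graders answered most items correctly.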
The Psychometric Properties of the Chinese eHealth Literacy Scale (C-eHEALS) in a Chinese Rural Population: Cross-Sectional Validation Study
The eHealth Literacy Scale (eHEALS) is the most widely used instrument in health studies for measuring individuals' electronic health literacy. Nonetheless, despite the rapid development of the online medical industry and increasing rural-urban disparities in China, very few studies have examined the characteristics of the eHEALS among Chinese rural people using modern psychometric methods. This study aimed to develop a simplified Chinese version of the eHEALS (C-eHEALS) and evaluate its psychometric properties in a rural population using both classical test theory and item response theory methods. A cross-sectional survey was conducted with 543 rural internet users in West China. Internal reliability was assessed using the Cronbach alpha coefficient. A one-factor structure of the C-eHEALS was obtained via principal component analysis, and fit indices for this structure were calculated using confirmatory factor analysis. Subsequently, item discrimination, difficulty, and test information were estimated via the graded response model. Additionally, criterion validity was confirmed through hypothesis testing. The C-eHEALS has good reliability. Both principal component analysis and confirmatory factor analysis showed that the scale has a one-factor structure. The graded response model revealed that all items of the C-eHEALS have response options that allow for differentiation between latent trait levels and capture substantial information about participants' ability. The findings indicate the high reliability and validity of the C-eHEALS and thus recommend its use for measuring eHealth literacy among the Chinese rural population.