Search Results
22,082 results for "Test Reliability"
Brief Report: Examining Test-Retest Reliability of the Autism Diagnostic Observation Schedule (ADOS-2) Calibrated Severity Scores (CSS)
Describing the relative severity and change in autism symptoms is crucial for the appropriate characterization of clinical and research populations. The calibrated severity score (CSS) of the Autism Diagnostic Observation Schedule-2 (ADOS-2; Lord et al., 2012) was created to better describe autism symptom severity consistently across different ages and language levels. The CSS has been widely used to quantify and compare symptom severity on a 10-point scale across Modules; however, its test–retest reliability has not been studied. With 608 ADOS observations, we showed strong test–retest reliability of the CSS across all ADOS Modules. The results support the use of the ADOS CSS as a reliable tool to quantify autism symptom severity across development.
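The test–retest reliability reported for the CSS is the kind of statistic usually expressed as an intraclass correlation coefficient (ICC). As a rough illustration only, the sketch below computes a two-way random-effects ICC(2,1) from paired test and retest scores; the function and the example values are hypothetical and are not taken from the study.

    # A minimal sketch of a two-way random-effects ICC(2,1), a common way to
    # quantify test-retest agreement. The CSS values below are made up for
    # illustration and are not data from the study.
    import numpy as np

    def icc_2_1(scores: np.ndarray) -> float:
        """scores: (n_subjects, n_sessions) matrix of repeated measurements."""
        n, k = scores.shape
        grand_mean = scores.mean()
        row_means = scores.mean(axis=1)
        col_means = scores.mean(axis=0)

        ss_total = ((scores - grand_mean) ** 2).sum()
        ss_rows = k * ((row_means - grand_mean) ** 2).sum()   # between subjects
        ss_cols = n * ((col_means - grand_mean) ** 2).sum()   # between sessions
        ss_error = ss_total - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))

        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
        )

    # Example: CSS values (1-10 scale) for five children at test and retest.
    css = np.array([[7, 8], [4, 4], [9, 9], [6, 5], [3, 3]])
    print(f"ICC(2,1) = {icc_2_1(css):.3f}")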
Psychometric Properties of the Autism-Spectrum Quotient for Assessing Low and High Levels of Autistic Traits in College Students
The current study systematically investigated the effects of scoring and categorization methods on the psychometric properties of the Autism-Spectrum Quotient. Four hundred and three college students completed the Autism-Spectrum Quotient at least once. Total scores on the Autism-Spectrum Quotient had acceptable internal consistency and test–retest reliability using a binary or Likert scoring method, but the results were more varied for the subscales. Overall, Likert scoring yielded higher internal consistency and test–retest reliability than binary scoring. However, agreement in categorization of low and high autistic traits was poor over time (except for a median split on Likert scores). The results support using Likert scoring and administering the Autism-Spectrum Quotient at the same time as the task of interest with neurotypical participants.
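Internal consistency in studies like this one is typically reported as Cronbach's alpha, which is what the binary-versus-Likert comparison above rests on. The sketch below shows the standard computation; the item responses are simulated and the binarization rule is an assumption for illustration only.

    # A minimal sketch of Cronbach's alpha for Likert-scored and binarized item
    # responses. The data are random, so the resulting alphas are low; real
    # questionnaire items would be positively correlated.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_respondents, n_items) matrix of item scores."""
        k = items.shape[1]
        item_variance_sum = items.var(axis=0, ddof=1).sum()
        total_score_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variance_sum / total_score_variance)

    rng = np.random.default_rng(0)
    likert = rng.integers(1, 5, size=(100, 10))   # 1-4 Likert-style responses
    binary = (likert >= 3).astype(int)            # collapse to 0/1 scoring
    print(cronbach_alpha(likert), cronbach_alpha(binary))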
Test of Gross Motor Development-3 (TGMD-3) with the Use of Visual Supports for Children with Autism Spectrum Disorder: Validity and Reliability
The validity and reliability of the Test of Gross Motor Development-3 (TGMD-3) were measured, taking into consideration the preference for visual learning of children with autism spectrum disorder (ASD). The TGMD-3 was administered to 14 children with ASD (4–10 years) and 21 age-matched typically developing children under two conditions: the TGMD-3 traditional protocol and the TGMD-3 visual support protocol. Excellent levels of internal consistency, test–retest, interrater and intrarater reliability were achieved for the TGMD-3 visual support protocol. TGMD-3 raw scores of children with ASD were significantly lower than those of typically developing peers; however, they improved significantly under the TGMD-3 visual support protocol. This demonstrates that the TGMD-3 visual support protocol is a valid and reliable assessment of gross motor performance for children with ASD.
The Ritvo Autism Asperger Diagnostic Scale-Revised (RAADS-R): A Scale to Assist the Diagnosis of Autism Spectrum Disorder in Adults: An International Validation Study
The Ritvo Autism Asperger Diagnostic Scale-Revised (RAADS-R) is a valid and reliable instrument to assist the diagnosis of adults with Autism Spectrum Disorders (ASD). The 80-question scale was administered to 779 subjects (201 ASD and 578 comparisons). All ASD subjects met inclusion criteria: DSM-IV-TR and ADI/ADOS diagnoses and standardized IQ testing. Mean scores for each of the questions, and the total mean scores of the ASD versus comparison groups, were significantly different (p < .0001). Concurrent validity with the Constantino Social Responsiveness Scale-Adult was 95.59%. Sensitivity = 97%, specificity = 100%, test–retest reliability r = .987. Cronbach alpha coefficients for the subscales and 4 derived factors were good. We conclude that the RAADS-R is a useful adjunct diagnostic tool for adults with ASD.
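Sensitivity and specificity figures like those above follow directly from applying a cutoff to total scores and comparing the result against diagnostic group membership. The sketch below illustrates the computation; the scores and the cutoff are fabricated and are not values from the RAADS-R study.

    # A minimal sketch of sensitivity and specificity from a score cutoff.
    # Scores, labels, and the cutoff are illustrative assumptions only.
    import numpy as np

    def sensitivity_specificity(scores, has_diagnosis, cutoff):
        scores = np.asarray(scores)
        has_diagnosis = np.asarray(has_diagnosis, dtype=bool)
        flagged = scores >= cutoff
        sensitivity = (flagged & has_diagnosis).sum() / has_diagnosis.sum()       # true-positive rate
        specificity = (~flagged & ~has_diagnosis).sum() / (~has_diagnosis).sum()  # true-negative rate
        return sensitivity, specificity

    scores = [120, 98, 140, 30, 22, 71, 15, 88]
    has_asd = [1, 1, 1, 0, 0, 0, 0, 1]
    print(sensitivity_specificity(scores, has_asd, cutoff=80))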
The Reliability and Validity of the Mandarin Chinese Version of the Vocal Fatigue Index: Preliminary Validation
Purpose: This study attempted to develop and to preliminarily validate the Mandarin Chinese version of the Vocal Fatigue Index (VFI) as a standardized self-assessment questionnaire tool for potential clinical applications. Method: The experimental procedure involved (a) cross-cultural adaptation of the VFI into the Mandarin Chinese version (CVFI), (b) evaluation by an expert panel, (c) back translation, (d) pilot testing, and (e) validation of the questionnaire by three participant groups: 50 with voice disorders, 50 occupational voice users (at-risk group), and 50 with normal voice (control group). Internal consistency, test–retest reliability, content validity, and convergent validity of the CVFI were examined, and discriminatory ability (diagnostic accuracy) for distinguishing between the groups was evaluated. Results: Results showed high internal consistency (Cronbach's alpha ≥ 0.8817 for the total CVFI scores for all groups), high test–retest reliability (intraclass correlation coefficients ≥ 0.9072, p < 0.001 for the total CVFI scores for all groups), high content validity (total content validity index = 0.9368), and high convergent validity (Pearson r ≥ 0.8155, p < 0.001 between the total CVFI scores and Factors 1 and 2 scores). Significant differences between the three groups were found in all scores. Receiver operating characteristic analysis revealed a high diagnostic accuracy for distinguishing between the disorders group and the normal group (area under the curve ≥ 0.927, p < 0.001 for the total CVFI scores and Factors 1 and 2 scores), with cutoff scores of ≥ 36 (total CVFI score), ≥ 23.5 (Factor 1 score), ≥ 7.5 (Factor 2 score), and ≥ 6.5 (Factor 3 score). Conclusions: These findings suggested that the CVFI could be a reliable and valid self-assessment tool for the clinical evaluation of vocal fatigue in Mandarin Chinese-speaking populations. A full-scale validation study of the CVFI is recommended to verify these results.
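The receiver operating characteristic (ROC) analysis described above pairs an area-under-the-curve estimate with the choice of a cutoff score. Below is a minimal sketch, assuming scikit-learn and fabricated scores, of one common way to obtain both; the cutoff here is chosen by the Youden index, which may differ from the criterion the authors used.

    # A minimal sketch of ROC analysis: AUC plus a cutoff picked by the Youden
    # index (sensitivity + specificity - 1). Labels and scores are made up.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])           # 1 = voice disorder
    scores = np.array([44, 38, 52, 29, 12, 18, 25, 9, 15, 41])  # questionnaire totals

    fpr, tpr, thresholds = roc_curve(y_true, scores)
    best_cutoff = thresholds[np.argmax(tpr - fpr)]
    print("AUC =", roc_auc_score(y_true, scores), "| cutoff >=", best_cutoff)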
Improving the Efficiency of the Digits-in-Noise Hearing Screening Test: A Comparison Between Four Different Test Procedures
Purpose: This study compared the test characteristics, test-retest reliability, and test efficiency of three novel digits-in-noise (DIN) test procedures to a conventional antiphasic 23-trial adaptive DIN (D23). Method: One hundred twenty participants with an average age of 42 years (SD = 19) were included. Participants were tested and retested with four different DIN procedures. Three new DIN procedures were compared to the reference D23 version: (1) a self-selected DIN (DSS) to allow participants to indicate a subjective speech recognition threshold (SRT); (2) a combination of self-selected and adaptive eight-trial DIN (DC8) that utilized a self-selected signal-to-noise ratio (SNR) followed by an eight-trial adaptive DIN procedure; and (3) a fixed SNR DIN (DF) approach using a fixed SNR value for all presentations to produce a pass/fail test result. Results: Test-retest reliability of the D23 procedure was better than that of the DSS and DC8 procedures. SRTs from DSS and DC8 were significantly higher than SRTs from D23. The DSS was not accurate enough to discriminate between normal-hearing and hard-of-hearing listeners. The DF and DC8 procedures with an adapted cutoff showed good hearing screening test characteristics. All three novel DIN procedures had significantly shorter durations (< 70 s) than the D23. DF showed a reduction of 46% in the number of presentations compared to D23 (from 23 presentations to an average of 12.5). Conclusions: The DF and DC8 procedures had significantly lower test durations than the reference D23 and show potential to be more time-efficient screening tools to determine normal hearing or potential hearing loss. Further studies are needed to optimize the DC8 procedure. The reference D23 remains the most reliable and accurate DIN hearing screening test, but studies in which the potentially efficient new DIN procedures are compared to pure-tone thresholds are needed to validate these procedures.
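The adaptive DIN procedures above converge on a speech recognition threshold (SRT) by lowering the SNR after a correct digit triplet and raising it after an error. The sketch below is a generic one-up/one-down track run against a simulated listener; the step size, starting SNR, trial handling, and psychometric function are assumptions and do not reproduce the published D23 or DC8 protocols.

    # A minimal sketch of a one-up/one-down adaptive SNR track of the kind used
    # in digits-in-noise screening. All parameters are illustrative assumptions.
    import random

    def run_adaptive_track(n_trials=23, start_snr=0.0, step=2.0, true_srt=-9.0):
        snr, track = start_snr, []
        for _ in range(n_trials):
            track.append(snr)
            # Simulated listener: more likely to respond correctly when the SNR
            # is above their true SRT (logistic psychometric function).
            p_correct = 1.0 / (1.0 + 10 ** ((true_srt - snr) / 2.0))
            correct = random.random() < p_correct
            snr += -step if correct else step
        return sum(track[4:]) / len(track[4:])    # average SNR after the first trials

    print(f"Estimated SRT: {run_adaptive_track():.1f} dB SNR")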
The Autism Spectrum Quotient: Children’s Version (AQ-Child)
The Autism Spectrum Quotient—Children’s Version (AQ-Child) is a parent-report questionnaire that aims to quantify autistic traits in children 4–11 years old. The range of scores on the AQ-Child is 0–150. It was administered to children with an autism spectrum condition (ASC) (n = 540) and a general population sample (n = 1,225). Results showed a significant difference in scores between those with an ASC diagnosis and the general population. Receiver-operating-characteristic analyses showed that, using a cut-off score of 76, the AQ-Child has high sensitivity (95%) and specificity (95%). The AQ-Child showed good test–retest reliability and high internal consistency. Factor analysis provided support for four of the five AQ-Child design subscales. Future studies should evaluate how the AQ-Child performs in population screening.
The Validity and Reliability of the Language Battery in Comprehensive Aphasia Test-Turkish (CAT-TR)
Aphasia assessment is the initial step of a well-structured language therapy; assessment tools therefore need to consider the typological and cultural characteristics of the language. A group of international researchers in the Collaboration of Aphasia Trialists has been adapting the Comprehensive Aphasia Test (CAT) into 14 languages spoken in Europe, including Turkish. The aim of this study was to perform validity and reliability analyses of the Language Battery section of the CAT-TR to support the assessment of Turkish-speaking people with aphasia (PWA). The test included 21 sub-tests and yielded six modality scores (spoken language comprehension, written language comprehension, repetition, naming, reading, writing). Ninety PWA (mean age = 61.07) and 200 controls (mean age = 54.89) were included in the analyses. The participants were stratified into two education and three age groups. Content, construct, and criterion validity analyses were performed, while the reliability analyses included internal consistency, test-retest, and inter-rater reliability. Education influenced all the modality scores of the controls, while age-related differences were significant for all the modality scores except reading. Notably, education had no significant effect on the language performance of PWA, whereas PWA younger than 60 performed significantly better on the spoken and written language comprehension modality scores. Cut-off scores for each modality and for the Language Battery were presented, with high sensitivity and specificity values. Compared to the psychometric characteristics of the adapted versions of the CAT and of aphasia tests used in Turkey, the CAT-TR is an appropriate test for the language assessment of Turkish-speaking adults with aphasia.
The “Reading the Mind in the Eyes” Test: Investigation of Psychometric Properties and Test–Retest Reliability of the Persian Version
The psychometric properties of the Persian “Reading the Mind in the Eyes” test were investigated, as were the predictions from the Empathizing–Systemizing theory of psychological sex differences. Adults aged 16–69 years (N = 545, 51.7% female) completed the test online. Item analysis showed the items to be generally acceptable. Test–retest reliability, as measured by the intraclass correlation coefficient, was 0.735 with a 95% CI of (0.514, 0.855). The percentage of agreement for each item in the test–retest was satisfactory, and the mean difference between test–retest scores was −0.159 (SD = 3.42). However, the internal consistency of the Persian version, calculated by Cronbach’s alpha (0.371), was poor. Females scored significantly higher than males, but academic degree and field of study had no significant effect.
The Psychometric Evaluation of a Speech Production Test Battery for Children: The Reliability and Validity of the Computer Articulation Instrument
Purpose: The aims of this study were to assess the reliability and validity of the Computer Articulation Instrument (CAI), a speech production test battery assessing phonological and speech motor skills in 4 tasks: (1) picture naming, (2) nonword imitation, (3) word and nonword repetition, and (4) maximum repetition rate (MRR). Method: Normative data were collected in 1,524 typically developing Dutch-speaking children (aged between 2;0 and 7;0 [years;months]). Parameters were extracted on segmental and syllabic accuracy (Tasks 1 and 2), consistency (Task 3), and syllables per second (Task 4). Interrater reliability and test-retest reliability were analyzed using subgroups of the normative sample and studied by estimating intraclass correlation coefficients (ICCs). Construct validity was investigated by determining age-related changes in test results and by factor analyses of the extracted speech measures. Results: ICCs for interrater reliability ranged from sufficient to good, except for the percentage of vowels correct in picture naming and nonword imitation and for the MRRs for bisyllabic and trisyllabic items. The ICCs for test-retest reliability were sufficient (picture naming, nonword imitation) to insufficient (word and nonword repetition, MRR) due to larger-than-expected normal development and learning effects. Continuous norms showed developmental patterns for all CAI parameters. The factor analyses revealed 5 meaningful factors: all picture-naming parameters, the segmental parameters of nonword imitation, the syllabic structure parameters of nonword imitation, (non)word repetition consistency, and all MRR parameters. Conclusion: Its overall sufficient-to-good psychometric properties indicate that the CAI is a reliable and valid instrument for the assessment of typical and delayed speech development in Dutch children aged 2-7 years.
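The factor analysis step can be illustrated with a standard exploratory factor model over the extracted speech measures. The sketch below, assuming scikit-learn, fits five factors to simulated (random) data, so the loadings only show the mechanics, not the five factors reported in the study.

    # A minimal sketch of exploratory factor analysis over standardized speech
    # measures. The 12 "CAI parameters" here are random placeholders.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    measures = rng.normal(size=(1524, 12))        # 12 hypothetical CAI parameters
    z = StandardScaler().fit_transform(measures)  # z-score each parameter

    fa = FactorAnalysis(n_components=5, random_state=0).fit(z)
    loadings = fa.components_.T                   # shape: (n_parameters, n_factors)
    print(np.round(loadings[:3], 2))              # loadings of the first 3 parameters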