44,520 result(s) for "Cognitive Tests"
Measuring Listening Effort: Convergent Validity, Sensitivity, and Links With Cognitive and Personality Measures
Purpose: Listening effort (LE) describes the attentional or cognitive requirements for successful listening. Despite substantial theoretical and clinical interest in LE, inconsistent operationalization makes it difficult to make generalizations across studies. The aims of this large-scale validation study were to evaluate the convergent validity and sensitivity of commonly used measures of LE and assess how scores on those tasks relate to cognitive and personality variables. Method: Young adults with normal hearing (N = 111) completed 7 tasks designed to measure LE, 5 tests of cognitive ability, and 2 personality measures. Results: Scores on some behavioral LE tasks were moderately intercorrelated but were generally not correlated with subjective and physiological measures of LE, suggesting that these tasks may not be tapping into the same underlying construct. LE measures differed in their sensitivity to changes in signal-to-noise ratio and the extent to which they correlated with cognitive and personality variables. Conclusions: Given that LE measures do not show consistent, strong intercorrelations and differ in their relationships with cognitive and personality predictors, these findings suggest caution in generalizing across studies that use different measures of LE. The results also indicate that people with greater cognitive ability appear to use their resources more efficiently, thereby diminishing the detrimental effects associated with increased background noise during language processing.
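The convergent-validity analysis described above reduces, at its core, to pairwise correlations among scores from the different LE measures. The Python sketch below shows that computation on simulated data; the task names, values, and the choice of Pearson correlation are illustrative assumptions, not the study's actual variables or analysis code.

```python
# Minimal sketch of a convergent-validity check: pairwise Pearson
# correlations among listening-effort (LE) measures. All data are
# simulated and the task names are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 111  # sample size reported in the abstract

tasks = {
    "dual_task_cost": rng.normal(size=n),      # behavioral (hypothetical)
    "recall_accuracy": rng.normal(size=n),     # behavioral (hypothetical)
    "pupil_dilation": rng.normal(size=n),      # physiological (hypothetical)
    "subjective_rating": rng.normal(size=n),   # self-report (hypothetical)
}

names = list(tasks)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r, p = stats.pearsonr(tasks[names[i]], tasks[names[j]])
        print(f"{names[i]} vs {names[j]}: r = {r:+.2f}, p = {p:.3f}")
```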
The cultural and linguistic adaptation of the Oxford Cognitive Screen to Tamil
The Oxford Cognitive Screen (OCS) is a screening tool to assess stroke patients for deficits in attention, executive functions, language, praxis, numeric cognition, and memory. In this study, the OCS was culturally and linguistically adapted to Tamil for use in India (OCS TA), taking into account the differences between formal and spoken Tamil and the language's phonetic complexity. We developed two parallel forms of the OCS TA and generated normative data for them. We recruited 181 healthy controls (Mean = 39.27 years, SD 16.52; 141 completed version A, 40 completed version B, and 33 completed both versions) and compared the data with the original UK normative sample. In addition, 28 native Tamil-speaking patients who had a stroke in the past three years (Mean = 62.76 years, SD 9.14) were assessed. Convergent validity was assessed with subtasks from Addenbrooke's Cognitive Examination III (ACE-III). We found significant differences in age and education between the UK normative group and the OCS TA normative group. Tamil-specific norms were used to adapt the cutoffs for the memory, gesture imitation, and executive function tasks. When domain-specific scores on the ACE-III were compared, the OCS TA exhibited strong convergent validity. The OCS TA has shown potential as a useful screening tool for stroke survivors among Tamil speakers, with the two parallel forms demonstrating good equivalence. Further empirical evidence from larger studies is required to establish their psychometric performance and clinical validity.
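Adapting cutoffs from normative data, as described above, is typically done by taking a low percentile of healthy-control scores. The sketch below illustrates one common convention (the 5th percentile) on simulated data; the subtask, scores, and percentile choice are assumptions, not the OCS TA authors' published procedure.

```python
# Minimal sketch of norm-based cutoff adaptation: flag impairment when
# a patient scores below a low percentile of healthy-control scores.
# The subtask, scores, and 5th-percentile convention are assumptions.
import numpy as np

rng = np.random.default_rng(1)
control_scores = rng.normal(loc=20, scale=3, size=141)  # simulated controls

cutoff = np.percentile(control_scores, 5)  # 5th percentile of controls
print(f"Adapted cutoff: {cutoff:.1f}")

patient_score = 13.0  # hypothetical patient
print("impaired" if patient_score < cutoff else "within normal range")
```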
Review of brief cognitive tests for patients with suspected dementia
As the population ages, it is increasingly important to use effective short cognitive tests for suspected dementia. We aimed to systematically review brief cognitive tests for suspected dementia and report on their validation in different settings, to help clinicians choose rapid and appropriate tests. We conducted an electronic search for sensitive and specific face-to-face cognitive tests for people with suspected dementia that take ≤ 20 minutes and provide quantitative psychometric data; 22 tests fitted these criteria. The Mini-Mental State Examination (MMSE) and Hopkins Verbal Learning Test (HVLT) had good psychometric properties in primary care. In secondary care settings, the MMSE has considerable supporting data but lacks sensitivity. The 6-Item Cognitive Impairment Test (6CIT), Brief Alzheimer's Screen, HVLT, and 7 Minute Screen have good properties for detecting dementia but need further validation. Addenbrooke's Cognitive Examination (ACE) and the Montreal Cognitive Assessment are effective for detecting dementia with Parkinson's disease, and the Addenbrooke's Cognitive Examination-Revised (ACE-R) is useful for all dementias when shorter tests are inconclusive. The Rowland Universal Dementia Assessment Scale (RUDAS) is useful when literacy is low. Tests such as the Test for Early Detection of Dementia, Test Your Memory, the Cognitive Assessment Screening Test (CAST), and the recently developed ACE-III show promise but need validation in different settings, populations, and dementia subtypes. Validation of tests such as the 6CIT and the Abbreviated Mental Test is also needed for dementia screening in acute hospital settings. Practitioners should use tests as appropriate to the setting and individual patient. More validation of available tests is needed rather than development of new ones.
cCOG: A web‐based cognitive test tool for detecting neurodegenerative disorders
Introduction: Web-based cognitive tests have potential for standardized screening in neurodegenerative disorders. We examined the accuracy and consistency of cCOG, a computerized cognitive tool, in detecting mild cognitive impairment (MCI) and dementia. Methods: Clinical data from 306 cognitively normal, 120 MCI, and 69 dementia subjects from three European cohorts were analyzed. A global cognitive score was defined from standard neuropsychological tests and compared to the corresponding estimated score from the cCOG tool, which contains seven subtasks. The consistency of cCOG was assessed by comparing measurements administered in clinical settings and in the home environment. Results: cCOG produced accuracies (receiver operating characteristic area under the curve [ROC-AUC]) between 0.71 and 0.84 in detecting MCI and between 0.86 and 0.94 in detecting dementia when administered at the clinic and at home. The accuracy was comparable to that of standard neuropsychological tests (AUC 0.69–0.77 for MCI / 0.91–0.92 for dementia). Discussion: cCOG provides a promising tool for detecting MCI and dementia, with potential for a cost-effective approach including home-based cognitive assessments.
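The ROC-AUC figures reported above measure how well a continuous cognitive score separates diagnostic groups. The sketch below shows how such an AUC could be computed with scikit-learn on simulated scores; the group sizes match the abstract, but the score distributions and the use of scikit-learn are assumptions for illustration, not the cCOG pipeline.

```python
# Minimal sketch of the ROC-AUC computation: how well does a global
# cognitive score separate dementia cases from controls? Group sizes
# follow the abstract; score distributions are simulated assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
labels = np.concatenate([np.zeros(306), np.ones(69)])  # 0 = normal, 1 = dementia
scores = np.concatenate([
    rng.normal(0.0, 1.0, 306),   # controls: higher cognitive scores
    rng.normal(-1.5, 1.0, 69),   # dementia: lower cognitive scores
])

# Negate scores so that larger values indicate higher dementia risk.
auc = roc_auc_score(labels, -scores)
print(f"ROC-AUC: {auc:.2f}")
```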
How does anxiety influence language performance? From the perspectives of foreign language classroom anxiety and cognitive test anxiety
This study examined the relationships between students' foreign language classroom anxiety and cognitive test anxiety and their College English Test Band 4 (CET-4) performance. A questionnaire was distributed to 921 Chinese university students to understand the nature and degree of the examined relationships. Follow-up interviews with 12 students were used to shed further light on the mechanisms underlying the relationships found in the survey. Results revealed three anxiety factors, which explained 43.14% of the total variance in the questionnaire items. Means, standard deviations, the internal consistency of each factor, and zero-order correlations among the three factors were calculated. Correlation and multiple regression analyses of the anxiety factors and test scores were then conducted. Results confirmed that the cognitive test anxiety factor was a significant negative predictor of language achievement. Interview results did not fully support the relationships found in the survey. Most students did not perceive themselves to be very anxious in their university settings, either in classrooms or in testing situations. However, they did express anxiety about English speaking skills in the classroom. The differing perspectives on anxiety revealed by the two analyses indicate that a better understanding of language classroom anxiety and cognitive test anxiety can help students and teachers optimize their foreign language learning and teaching practices.
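The regression step described above, predicting test scores from several anxiety factors, can be sketched as an ordinary least squares model. The example below uses simulated data and statsmodels; the factor structure, effect size, and noise level are illustrative assumptions, not the study's estimates.

```python
# Minimal sketch of the multiple regression step: regress language test
# scores on three anxiety factors and inspect coefficient signs and
# p-values. Data, factor labels, and effect sizes are simulated
# assumptions, not the study's estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 921  # survey sample size from the abstract
X = rng.normal(size=(n, 3))  # three standardized anxiety factors
# Make factor 3 (cognitive test anxiety) a negative predictor.
y = 70.0 - 4.0 * X[:, 2] + rng.normal(0.0, 8.0, size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())  # per-factor coefficients, t-values, p-values
```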
Has the Standard Cognitive Reflection Test Become a Victim of Its Own Success?
The Cognitive Reflection Test (CRT) is a hugely influential problem-solving task that measures individual differences in the propensity to reflect on and override intuitive (but incorrect) solutions. The validity of this three-item measure depends on participants being naïve to its materials and objectives. Evidence from 142 volunteers recruited online suggests this is often not the case. Over half of the sample had previously seen at least one of the problems, predominantly through research participation or the media. These participants produced substantially higher CRT scores than those without prior exposure (2.36 vs. 1.48), with the majority scoring at ceiling. Participants who had previously seen a specific problem (e.g., the bat-and-ball problem) nearly always solved that problem correctly. These data suggest the CRT may have been widely invalidated. As a minimum, researchers must control for prior exposure to the three problems and begin to consider alternative, extended measures of cognitive reflection.
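The exposed-versus-naïve comparison above is, in essence, a two-sample mean comparison on 0–3 CRT scores. The sketch below simulates such a comparison with Welch's t test; the group split (75/67) and score distributions are assumptions calibrated only loosely to the reported means, and the abstract does not state which statistical test the authors used.

```python
# Minimal sketch of the exposed-vs-naive comparison using Welch's
# t test on simulated 0-3 CRT scores. The group split (75/67) and
# distributions are assumptions; only the target means come from
# the abstract (2.36 vs. 1.48).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
exposed = np.clip(np.round(rng.normal(2.36, 0.8, 75)), 0, 3)
naive = np.clip(np.round(rng.normal(1.48, 1.0, 67)), 0, 3)

t, p = stats.ttest_ind(exposed, naive, equal_var=False)  # Welch's t test
print(f"means: exposed {exposed.mean():.2f}, naive {naive.mean():.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```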
National Non-verbal Cognitive Ability Test (BNV) Development Study
The aim of the present study was to develop a national non-verbal cognitive ability test in Turkey. Test items were developed in the first stage and piloted on 3,073 children aged 4 to 13. The test was given its final form based on item difficulty, item discrimination, and item-total score correlation values. The norm study was carried out in 12 provinces with a total of 9,129 children: 4,464 females (49%) and 4,665 males (51%). Test-retest, split-half, KR-20, and KR-21 methods were applied for the reliability analyses. The standard error of measurement, standard deviation, and reliability coefficients were calculated. Content, construct, and criterion-related validity analyses were conducted. The KR-20 reliability coefficient obtained from the complete sample group was 0.92, and the test-retest reliability coefficient was 0.80. The BNV correlated .71 with the Naglieri Cognitive Ability Test, .67 with the TONI-3, and .86 with the Colored Progressive Matrices Test.
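Of the reliability statistics listed above, KR-20 has a simple closed form for dichotomously scored items: KR-20 = k/(k−1) × (1 − Σ p_i q_i / σ²_X), where k is the number of items, p_i the proportion answering item i correctly, q_i = 1 − p_i, and σ²_X the variance of total scores. The sketch below computes it from a simulated 0/1 response matrix; the matrix size and response model are assumptions, and independent random items naturally yield a coefficient near zero rather than the .92 reported.

```python
# Minimal sketch of KR-20 for dichotomous (0/1) items:
#   KR-20 = k/(k-1) * (1 - sum(p_i * q_i) / var(total score))
# The 500 x 40 response matrix is simulated; independent random items
# give a coefficient near 0, unlike the .92 reported for the BNV.
import numpy as np

rng = np.random.default_rng(5)
responses = (rng.random((500, 40)) < 0.6).astype(int)  # examinees x items

k = responses.shape[1]                         # number of items
p = responses.mean(axis=0)                     # proportion correct per item
q = 1 - p                                      # proportion incorrect
total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(f"KR-20 = {kr20:.2f}")
```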
Validation of SATURN, a free, electronic, self‐administered cognitive screening test
Background: Cognitive screening is limited by clinician time and by variability in administration and scoring. We therefore developed Self-Administered Tasks Uncovering Risk of Neurodegeneration (SATURN), a free, public-domain, self-administered, and automatically scored cognitive screening test, and validated it on inexpensive (<$100) computer tablets. Methods: SATURN is a 30-point test including orientation, word recall, and math items adapted from the Saint Louis University Mental Status test, modified versions of the Stroop and Trails tasks, and other assessments of visuospatial function and memory. English-speaking neurology clinic patients and their partners 50 to 89 years of age were given SATURN, the Montreal Cognitive Assessment (MoCA), and a brief survey about test preferences. For patients recruited from dementia clinics (n = 23), clinical status was quantified with the Clinical Dementia Rating (CDR) scale. Care partners (n = 37) were assigned CDR = 0. Results: SATURN and MoCA scores were highly correlated (r = 0.90; P < .00001). CDR sum-of-boxes scores were well correlated with both tests (r = −0.83 and −0.86, respectively; P < .00001). Statistically, neither test was superior. Most participants (83%) reported that SATURN was easy to use, and most either preferred SATURN over the MoCA (47%) or had no preference (32%). Discussion: Performance on SATURN, a fully self-administered and freely available (https://doi.org/10.5061/dryad.02v6wwpzr) cognitive screening test, is well correlated with MoCA and CDR scores.
Extending Cognitive Load Theory to Incorporate Working Memory Resource Depletion: Evidence from the Spacing Effect
Depletion of limited working memory resources may occur following extensive mental effort, resulting in decreased performance compared to conditions requiring less extensive mental effort. This "depletion effect" can be incorporated into cognitive load theory, which is concerned with using the properties of human cognitive architecture, especially working memory, when designing instruction. Two experiments were carried out on the spacing effect, which occurs when learning spaced by temporal gaps between learning episodes is superior to identical, massed learning with no gaps between episodes. Using primary school students learning mathematics, it was found that students obtained lower scores on a working memory capacity test (Experiments 1 and 2) and gave higher ratings of cognitive load (Experiment 2) after massed than after spaced practice. The reduction in working memory capacity may be attributed to working memory resource depletion following the relatively prolonged mental effort associated with massed, compared to spaced, practice. An expansion of cognitive load theory to incorporate working memory resource depletion, along with instructional design implications including the spacing effect, is discussed.