25,137 results for "Neuropsychological tests"
Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study
Computerized assessments are already used to derive accurate and reliable measures of cognitive function. Web-based cognitive assessment could improve the accessibility and flexibility of research and clinical assessment, widen participation, and promote research recruitment while simultaneously reducing costs. However, differences in context may influence task performance. This study aims to determine the comparability of an unsupervised, web-based administration of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person, lab-based assessment, using a within-subjects counterbalanced design. The study aims to test (1) reliability, quantifying the relationship between measurements across settings using correlational approaches; (2) equivalence, the extent to which test results in different settings produce similar overall results; and (3) agreement, quantifying acceptable limits to bias and differences between measurement environments.

A total of 51 healthy adults (32 women and 19 men; mean age 36.8, SD 15.6 years) completed 2 testing sessions, held on average 1 week apart (SD 4.5 days). Assessments included equivalent tests of emotion recognition (emotion recognition task [ERT]), visual recognition (pattern recognition memory [PRM]), episodic memory (paired associate learning [PAL]), working memory and spatial planning (spatial working memory [SWM] and One Touch Stockings of Cambridge), and sustained attention (rapid visual information processing [RVP]). Participants were randomly allocated to one of two groups: either assessed in person in the laboratory first (n=33) or with unsupervised web-based assessments on their personal computing systems first (n=18). Performance indices (errors, correct trials, and response sensitivity) and median reaction times were extracted. Intraclass and bivariate correlations examined intersetting reliability, linear mixed models and Bayesian paired-sample t tests tested for equivalence, and Bland-Altman plots examined agreement.

Intraclass correlation (ICC) coefficients ranged from ρ=0.23 to 0.67, with high correlations in 3 performance indices (from the PAL, SWM, and RVP tasks; ρ≥0.60). High ICC values were also seen for reaction time measures from 2 tasks (PRM and ERT; ρ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance indices did not differ between assessment settings and generally showed satisfactory agreement.

Our findings support the comparability of CANTAB performance indices (errors, correct trials, and response sensitivity) in unsupervised, web-based assessments with in-person, lab-based tests. Reaction times do not translate as readily from in-person to web-based testing, likely because of variations in computer hardware. The results underline the importance of examining more than one index to ascertain comparability, as high correlations can present in the context of systematic differences that are a product of differences between measurement environments. Further work is now needed to examine web-based assessments in clinical populations and in larger samples to improve sensitivity for detecting subtler differences between test settings.
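As a rough illustration of the two agreement methods this abstract names (not the authors' analysis code), the sketch below computes Bland-Altman limits of agreement and a textbook ICC(2,1) from simulated scores; the data and the bias of 2 points are made-up stand-ins, not CANTAB measurements.

```python
# Illustrative only: Bland-Altman limits of agreement and a textbook ICC(2,1)
# (two-way random effects, absolute agreement, single measurement).
# All scores are simulated stand-ins, not CANTAB data.
import numpy as np

rng = np.random.default_rng(0)
lab = rng.normal(50, 10, size=51)          # simulated in-person scores
web = lab + rng.normal(2, 5, size=51)      # simulated web scores, small bias

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = web - lab
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias={bias:.2f}, LoA=({bias - half_width:.2f}, {bias + half_width:.2f})")

# ICC(2,1) from a two-way ANOVA decomposition of an n-subjects x k-settings grid.
scores = np.column_stack([lab, web])
n, k = scores.shape
grand = scores.mean()
ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))
icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc:.2f}")
```

A systematic bias (here, slower web reaction times in the study) can coexist with a high ICC, which is exactly why the abstract stresses examining agreement alongside correlation.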
Clinical Manifestations
Harmonization of neuropsychological assessment for vascular cognitive disorders (VCD) is important for ensuring the highest standards of diagnostic and post-diagnostic care. A battery jointly proposed by the NINDS-CSN has received much support. Considering significant developments in the field, and an urgent need for consensus on remote and computerised assessment methods, an international expert group was commissioned to develop an updated harmonized battery and associated assessment guidelines for VCD using the Delphi process.

A modified Delphi consensus method was used, involving an iterative, multi-staged series of structured surveys with feedback of anonymized responses from experts in the neuropsychological assessment of vascular cognitive disorders. Three rounds were planned, with the possibility of a fourth if required to reach consensus. Literature reviews on harmonized neuropsychological assessment were conducted by a team of researchers and informed the first structured questionnaire. Consensus was sought on the cognitive domains and subdomains that should be assessed, on specific tests per domain, and on additional guidelines, including non-traditional assessment methods and cultural-linguistic considerations. Consensus was defined as agreement or disagreement on any statement of ≥75%, near consensus as 66-75%, and <66% as non-consensus. Statements that reached consensus were removed from subsequent rounds, as were most non-consensus statements; some were reworded instead. A virtual meeting was held for experts to discuss contentious issues and advise on the process.

Forty-four experts in neuropsychological assessment from a range of international regions consented to participate, and 31 completed the Round 1 survey. Once the final survey is complete, the assessment battery and additional guidelines will be finalized and formally approved by all participants. The harmonized neuropsychological assessment standards could be adopted internationally and complement the NINDS-CSN battery, thereby further facilitating consistent neuropsychological assessment of VCD between clinicians and researchers.
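The consensus rule quoted in this abstract reduces to simple threshold arithmetic. The minimal sketch below classifies a statement using those quoted thresholds (≥75% consensus, 66-75% near consensus, <66% non-consensus); the function name and tally fields are hypothetical, not from the study protocol.

```python
# Illustrative only: apply the Delphi thresholds quoted in the abstract.
# Consensus can be reached for OR against a statement, hence max(agree, disagree).
def classify_statement(agree: int, disagree: int, total: int) -> str:
    """Return the consensus category for one survey statement."""
    pct = 100 * max(agree, disagree) / total
    if pct >= 75:
        return "consensus"
    if pct >= 66:
        return "near consensus"
    return "non-consensus"

# Example: 26 of 31 Round 1 respondents agreeing is ~83.9% -> "consensus".
print(classify_statement(agree=26, disagree=5, total=31))
```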
Timing of onset of cognitive decline: results from Whitehall II prospective cohort study
Objectives To estimate 10 year decline in cognitive function from longitudinal data in a middle aged cohort and to examine whether age cohorts can be compared with cross sectional data to infer the effect of age on cognitive decline.
Design Prospective cohort study. At study inception in 1985-8, there were 10 308 participants, representing a recruitment rate of 73%.
Setting Civil service departments in London, United Kingdom.
Participants 5198 men and 2192 women, aged 45-70 at the beginning of cognitive testing in 1997-9.
Main outcome measure Tests of memory, reasoning, vocabulary, and phonemic and semantic fluency, assessed three times over 10 years.
Results All cognitive scores, except vocabulary, declined in all five age categories (age 45-49, 50-54, 55-59, 60-64, and 65-70 at baseline), with evidence of faster decline in older people. In men, the 10 year decline, shown as change/range of test×100, in reasoning was −3.6% (95% confidence interval −4.1% to −3.0%) in those aged 45-49 at baseline and −9.6% (−10.6% to −8.6%) in those aged 65-70. In women, the corresponding decline was −3.6% (−4.6% to −2.7%) and −7.4% (−9.1% to −5.7%). Comparisons of longitudinal and cross sectional effects of age suggest that the latter overestimate decline in women because of cohort differences in education. For example, in women aged 45-49 the longitudinal analysis showed reasoning to have declined by −3.6% (−4.5% to −2.8%) but the cross sectional effects suggested a decline of −11.4% (−14.0% to −8.9%).
Conclusions Cognitive decline is already evident in middle age (age 45-49).
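The decline metric in the Results section, change/range of test×100, is a unit-free percentage of the test's score range. A worked example, with made-up scores and assuming "range" means the test's score range:

```python
# Illustrative only: the "change/range of test x 100" decline metric from the
# abstract. Scores and range are invented; they are not Whitehall II data.
def percent_decline(score_t0: float, score_t1: float, test_range: float) -> float:
    """Decline over the follow-up, as (change / range of test) x 100."""
    return 100 * (score_t1 - score_t0) / test_range

# E.g. a reasoning score falling from 48 to 45 on a test with a 65-point range:
print(f"{percent_decline(48, 45, 65):.1f}%")   # -> -4.6%
```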
A meta-analysis of neuropsychological markers of vulnerability to suicidal behavior in mood disorders
Suicidal behavior results from a complex interplay between stressful events and vulnerability factors, including cognitive deficits. However, it is not clear which cognitive tests best reveal this vulnerability. The objective was to identify neuropsychological tests of vulnerability to suicidal acts in patients with mood disorders. Medline, EMBASE and PsycINFO databases were searched, along with article references. A total of 25 studies (2323 participants) met the selection criteria. Seven neuropsychological tests [Iowa gambling task (IGT), Stroop test, trail making test part B, Wisconsin card sorting test, category and semantic verbal fluencies, and continuous performance test] were used in at least three studies and could therefore be analysed. IGT and category verbal fluency performances were lower in suicide attempters than in patient controls [respectively, g = -0.47, 95% confidence interval (CI) -0.65 to -0.29 and g = -0.32, 95% CI -0.60 to -0.04] and healthy controls, with no difference between the last two groups. Stroop performance was lower in suicide attempters than in patient controls (g = 0.37, 95% CI 0.10-0.63) and healthy controls, with patient controls scoring lower than healthy controls. The four other tests were altered in both patient groups versus healthy controls but did not differ between patient groups. Deficits in decision-making, category verbal fluency and Stroop interference were associated with histories of suicidal behavior in patients with mood disorders. Altered value-based and cognitive control processes may be important factors in suicidal vulnerability. These tests may also have the potential to guide therapeutic interventions and to become part of future systematic assessment of suicide risk.
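The g values in this abstract are Hedges' g, the bias-corrected standardized mean difference conventionally pooled in meta-analyses. A minimal sketch of its computation follows; the group summary statistics are hypothetical, not taken from any of the 25 included studies.

```python
# Illustrative only: Hedges' g, the bias-corrected standardized mean difference
# reported in the abstract (e.g. g = -0.47 for the IGT). Numbers are made up.
import math

def hedges_g(m1: float, sd1: float, n1: int,
             m2: float, sd2: float, n2: int) -> float:
    """Hedges' g for two independent groups."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                 # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)      # small-sample correction factor
    return j * d

# Attempters vs patient controls on a decision-making score (invented values):
print(f"g = {hedges_g(42.0, 10.0, 60, 47.0, 11.0, 65):.2f}")
```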
Clinical Manifestations
Previous validation studies demonstrated that BrainCheck Assess (BC-Assess), a computerized cognitive test battery, can reliably and sensitively distinguish individuals with different levels of cognitive impairment (i.e., normal cognition (NC), mild cognitive impairment (MCI), and dementia). Compared with other traditional paper-based cognitive screening instruments commonly used in clinical practice, the Montreal Cognitive Assessment (MoCA) is generally accepted to be among the most comprehensive and robust screening tools, with high sensitivity/specificity in distinguishing MCI from NC and dementia. In this study, we examined: (1) the linear relationship between BC-Assess and MoCA and their equivalent cut-off scores, and (2) the extent to which they agree in their impressions of an individual's cognitive status. A subset of participants (N = 55; age range 54-94, mean 80, SD 9.5) from two previous studies who took both the MoCA and BC-Assess were included in this analysis. Linear regression was used to calculate equivalent cut-off scores for BC-Assess based on those originally recommended for the MoCA to differentiate MCI from NC (cut-off = 26) and dementia from MCI (cut-off = 19). Impression agreement between the two instruments was measured through overall agreement (OA), positive percent agreement (PPA), and negative percent agreement (NPA). A high Pearson correlation coefficient of 0.77 (CI 0.63-0.86) was observed between the two scores. According to this relationship, MoCA cutoffs of 26 and 19 correspond to BC-Assess scores of 89.6 and 68.5, respectively. These scores are highly consistent with the currently recommended BC-Assess cutoffs (i.e., 85 and 70). The two instruments also show a high degree of agreement in their impressions based on their recommended cut-offs: (i) OA = 70.9%, PPA = 70.4%, NPA = 71.4% for differentiating dementia from MCI/NC; (ii) OA = 83.6%, PPA = 84.1%, NPA = 81.8% for differentiating dementia/MCI from NC. This study provides further validation of BC-Assess in a sample of older adults by showing its high correlation and agreement in impression with the widely used MoCA.
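The two analyses in this abstract, regression-mapped cutoffs and percent-agreement metrics, can both be sketched in a few lines. The sketch below uses simulated score pairs and an invented linear relationship, not the study's data; it only shows the shape of the computation.

```python
# Illustrative only: (1) map MoCA cutoffs onto a second scale via least-squares
# regression; (2) OA/PPA/NPA between two instruments' binary impressions.
# Scores and the 3x+10 relationship are simulated, not BC-Assess/MoCA data.
import numpy as np

rng = np.random.default_rng(1)
moca = rng.integers(10, 31, size=55).astype(float)
bc = 3.0 * moca + 10 + rng.normal(0, 6, size=55)   # fake linear relationship

# (1) Fit bc = a*moca + b, then translate the MoCA cutoffs (26 and 19).
a, b = np.polyfit(moca, bc, 1)
for cutoff in (26, 19):
    print(f"MoCA {cutoff} -> second-scale equivalent {a * cutoff + b:.1f}")

# (2) Agreement between the two instruments' impressions at one cutoff.
impaired_moca = moca < 26
impaired_bc = bc < a * 26 + b
oa = np.mean(impaired_moca == impaired_bc)       # overall agreement
ppa = np.mean(impaired_bc[impaired_moca])        # agreement on "impaired"
npa = np.mean(~impaired_bc[~impaired_moca])      # agreement on "not impaired"
print(f"OA={oa:.1%}, PPA={ppa:.1%}, NPA={npa:.1%}")
```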