26,272 result(s) for "Neuropsychological test"
A Systematic Review of Normative Data for Verbal Fluency Test in Different Languages
Verbal fluency tests are quick and easy to administer, which has made them among the most classical tools in neuropsychological assessment. To date, normative data for verbal fluency tests have been published for many languages and countries. A systematic review was carried out of studies that provide normative data for verbal fluency tests. Studies were collected from Scopus, PubMed, and Web of Science. A total of 183 studies were retrieved from the database search, of which 73 met the inclusion criteria. For each article, the risk of bias regarding sample selection/characterization and procedure/results reporting was analyzed. Finally, a full description of the normative data is included, covering country and language, verbal fluency task characteristics (type of task), and sample characteristics (number of subjects, gender, age, education). This systematic review provides an overview and analysis of internationally published normative data that may help clinicians find valid and useful norms for verbal fluency tasks, along with updated information about qualitative aspects of the options currently available.
COVID-19 severity is related to poor executive function in people with post-COVID conditions
Patients with post-coronavirus disease 2019 (COVID-19) conditions typically experience cognitive problems. Some studies have linked COVID-19 severity with long-term cognitive damage, while others did not observe such associations. This discrepancy can be attributed to methodological and sample variations. We aimed to clarify the relationship between COVID-19 severity and long-term cognitive outcomes and determine whether the initial symptomatology can predict long-term cognitive problems. Cognitive evaluations were performed on 109 healthy controls and 319 post-COVID individuals categorized into three groups according to the WHO clinical progression scale: severe-critical ( n  = 77), moderate-hospitalized ( n  = 73), and outpatients ( n  = 169). Principal component analysis was used to identify factors associated with symptoms in the acute-phase and cognitive domains. Analyses of variance and regression linear models were used to study intergroup differences and the relationship between initial symptomatology and long-term cognitive problems. The severe-critical group performed significantly worse than the control group in general cognition (Montreal Cognitive Assessment), executive function (Digit symbol, Trail Making Test B, phonetic fluency), and social cognition (Reading the Mind in the Eyes test). Five components of symptoms emerged from the principal component analysis: the “Neurologic/Pain/Dermatologic” “Digestive/Headache”, “Respiratory/Fever/Fatigue/Psychiatric” and “Smell/ Taste” components were predictors of Montreal Cognitive Assessment scores; the “Neurologic/Pain/Dermatologic” component predicted attention and working memory; the “Neurologic/Pain/Dermatologic” and “Respiratory/Fever/Fatigue/Psychiatric” components predicted verbal memory, and the “Respiratory/Fever/Fatigue/Psychiatric,” “Neurologic/Pain/Dermatologic,” and “Digestive/Headache” components predicted executive function. 
Patients with severe COVID-19 exhibited persistent deficits in executive function. Several initial symptoms were predictors of long-term sequelae, indicating the role of systemic inflammation and neuroinflammation in the acute-phase symptoms of COVID-19. Study registration: www.ClinicalTrials.gov, identifiers NCT05307549 and NCT05307575.
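The analysis pipeline described above (principal components extracted from acute-phase symptoms, then linear models relating component scores to cognitive outcomes) can be sketched as follows. The data, the number of symptom variables, and the effect sizes here are invented for illustration; this is not the study's code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acute-phase symptom data: rows = patients, columns = symptoms.
X = rng.normal(size=(200, 8))
moca = X[:, 0] * -0.5 + rng.normal(size=200)  # hypothetical MoCA-like scores

# Principal component analysis via SVD on the standardized symptom matrix.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
n_components = 5                      # the study retained five symptom components
scores = Xs @ Vt[:n_components].T     # component scores per patient

# Linear regression of a cognitive outcome on the component scores
# (least squares with an intercept), mirroring the reported models.
A = np.column_stack([np.ones(len(scores)), scores])
beta, *_ = np.linalg.lstsq(A, moca, rcond=None)
print(beta.shape)  # one intercept plus five component weights
```

The fitted weights indicate which symptom components predict the cognitive score, which is the form of conclusion the abstract reports (e.g., which components predicted MoCA or executive function).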
Prediction of Alzheimer's disease progression within 6 years using speech: A novel approach leveraging language models
INTRODUCTION Identification of individuals with mild cognitive impairment (MCI) who are at risk of developing Alzheimer's disease (AD) is crucial for early intervention and selection for clinical trials. METHODS We applied natural language processing techniques along with machine learning methods to develop a method for automated prediction of progression to AD within 6 years using speech. The study design was evaluated on the neuropsychological test interviews of n = 166 participants from the Framingham Heart Study, comprising 90 progressive MCI and 76 stable MCI cases. RESULTS Our best models, which used features generated from speech data, as well as age, sex, and education level, achieved an accuracy of 78.5% and a sensitivity of 81.1% in predicting MCI-to-AD progression within 6 years. DISCUSSION The proposed method offers a fully automated procedure, providing an opportunity to develop an inexpensive, broadly accessible, and easy-to-administer screening tool for MCI-to-AD progression prediction, facilitating the development of remote assessment. Highlights Voice recordings from neuropsychological exams coupled with basic demographics can lead to strong predictive models of progression to dementia from mild cognitive impairment. The study leveraged AI methods for speech recognition and processed the resulting text using language models. The developed AI-powered pipeline can lead to fully automated assessment that could enable remote and cost-effective screening and prognosis for Alzheimer's disease.
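The reported accuracy (78.5%) and sensitivity (81.1%) summarize a binary classifier's confusion matrix. A minimal sketch of how those two metrics are computed, using made-up labels rather than the study's predictions:

```python
import numpy as np

# Hypothetical predicted vs. true labels for MCI-to-AD progression
# (1 = progressed within 6 years, 0 = stable MCI).
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # progressors correctly flagged
tn = np.sum((y_pred == 0) & (y_true == 0))  # stable cases correctly cleared
fn = np.sum((y_pred == 0) & (y_true == 1))  # progressors missed
fp = np.sum((y_pred == 1) & (y_true == 0))  # stable cases falsely flagged

accuracy = (tp + tn) / len(y_true)  # fraction of all predictions correct
sensitivity = tp / (tp + fn)        # fraction of true progressors detected
print(accuracy, sensitivity)
```

For a screening tool of this kind, sensitivity is the more clinically relevant number, since a missed progressor (false negative) forfeits the chance of early intervention.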
Demographically Corrected Normative Standards for the English Version of the NIH Toolbox Cognition Battery
Demographic factors impact neuropsychological test performances and accounting for them may help to better elucidate current brain functioning. The NIH Toolbox Cognition Battery (NIHTB-CB) is a novel neuropsychological tool, yet the original norms developed for the battery did not adequately account for important demographic/cultural factors known to impact test performances. We developed norms fully adjusting for all demographic variables within each language group (English and Spanish) separately. The current study describes the standards for individuals tested in English. Neurologically healthy adults (n=1038) and children (n=2917) who completed the NIH Toolbox norming project in English were included. We created uncorrected scores weighted to the 2010 Census demographics, and applied polynomial regression models to develop age-corrected and fully demographically adjusted (age, education, sex, race/ethnicity) scores for each NIHTB-CB test and composite (i.e., Fluid, Crystallized, and Total Composites). On uncorrected NIHTB-CB scores, age and education demonstrated significant, medium-to-large associations, while sex showed smaller, but statistically significant effects. In terms of race/ethnicity, a significant stair-step effect on uncorrected NIHTB-CB scores was observed (African American
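The regression-based norming described above can be illustrated with a toy sketch: fit a polynomial model of raw score on a demographic predictor, then convert residuals to adjusted z-scores. The sample, the quadratic age effect, and the single-predictor model here are simplifications invented for illustration; the actual NIHTB-CB norms adjust for age, education, sex, and race/ethnicity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical norming sample: age and a raw test score with a quadratic age effect.
age = rng.uniform(20, 85, n)
raw = 60 - 0.2 * age - 0.002 * age**2 + rng.normal(0, 5, n)

# Fit a polynomial regression of raw score on age, then express each score
# as a demographically adjusted z-score (residual divided by residual SD).
coefs = np.polyfit(age, raw, deg=2)
predicted = np.polyval(coefs, age)
resid = raw - predicted
z = resid / resid.std()
print(f"adjusted z-scores: mean={z.mean():.2f}, sd={z.std():.2f}")
```

By construction the adjusted scores have mean 0 and SD 1 in the norming sample, so an individual's z-score expresses how far their performance falls from demographic expectation.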
The Cognition Battery of the NIH Toolbox for Assessment of Neurological and Behavioral Function: Validation in an Adult Sample
This study introduces a special series on validity studies of the Cognition Battery (CB) from the U.S. National Institutes of Health Toolbox for the Assessment of Neurological and Behavioral Function (NIHTB) (Gershon, Wagster et al., 2013) in an adult sample. This first study in the series describes the sample, each of the seven instruments in the NIHTB-CB briefly, and the general approach to data analysis. Data are provided on test–retest reliability and practice effects, and raw scores (mean, standard deviation, range) are presented for each instrument and the gold standard instruments used to measure construct validity. Accompanying papers provide details on each instrument, including information about instrument development, psychometric properties, age and education effects on performance, and convergent and discriminant construct validity. One study in the series is devoted to a factor analysis of the NIHTB-CB in adults and another describes the psychometric properties of three composite scores derived from the individual measures representing fluid and crystallized abilities and their combination. The NIHTB-CB is designed to provide a brief, comprehensive, common set of measures to allow comparisons among disparate studies and to improve scientific communication. (JINS, 2014, 20, 1–12)
A new Asian version of the CFMT: The Cambridge Face Memory Test – Chinese Malaysian (CFMT-MY)
The Cambridge Face Memory Test (CFMT) is one of the most important measures of individual differences in face recognition and for the diagnosis of prosopagnosia. Having two different CFMT versions using a different set of faces seems to improve the reliability of the evaluation. However, at the present time, there is only one Asian version of the test. In this study, we present the Cambridge Face Memory Test – Chinese Malaysian (CFMT-MY), a novel Asian CFMT using Chinese Malaysian faces. In Experiment 1 , Chinese Malaysian participants ( N = 134) completed two versions of the Asian CFMT and one object recognition test. The CFMT-MY showed a normal distribution, high internal reliability, high consistency and presented convergent and divergent validity. Additionally, in contrast to the original Asian CFMT, the CFMT-MY showed an increasing level of difficulties across stages. In Experiment 2 , Caucasian participants ( N = 135) completed the two versions of the Asian CFMT and the original Caucasian CFMT. Results showed that the CFMT-MY exhibited the other-race effect. Overall, the CFMT-MY seems to be suitable for the diagnosis of face recognition difficulties and could be used as a measure of face recognition ability by researchers who wish to examine face-related research questions such as individual differences or the other-race effect.
Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study
Computerized assessments are already used to derive accurate and reliable measures of cognitive function. Web-based cognitive assessment could improve the accessibility and flexibility of research and clinical assessment, widen participation, and promote research recruitment while simultaneously reducing costs. However, differences in context may influence task performance. This study aims to determine the comparability of an unsupervised, web-based administration of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person lab-based assessment, using a within-subjects counterbalanced design. The study aims to test (1) reliability, quantifying the relationship between measurements across settings using correlational approaches; (2) equivalence, the extent to which test results in different settings produce similar overall results; and (3) agreement, by quantifying acceptable limits to bias and differences between measurement environments. A total of 51 healthy adults (32 women and 19 men; mean age 36.8, SD 15.6 years) completed 2 testing sessions, which were completed on average 1 week apart (SD 4.5 days). Assessments included equivalent tests of emotion recognition (emotion recognition task [ERT]), visual recognition (pattern recognition memory [PRM]), episodic memory (paired associate learning [PAL]), working memory and spatial planning (spatial working memory [SWM] and One Touch Stockings of Cambridge), and sustained attention (rapid visual information processing [RVP]). Participants were randomly allocated to one of the two groups, either assessed in-person in the laboratory first (n=33) or with unsupervised web-based assessments on their personal computing systems first (n=18). Performance indices (errors, correct trials, and response sensitivity) and median reaction times were extracted. 
Intraclass and bivariate correlations examined intersetting reliability, linear mixed models and Bayesian paired sample t tests tested for equivalence, and Bland-Altman plots examined agreement. Intraclass correlation (ICC) coefficients ranged from ρ=0.23-0.67, with high correlations in 3 performance indices (from PAL, SWM, and RVP tasks; ρ≥0.60). High ICC values were also seen for reaction time measures from 2 tasks (PRM and ERT tasks; ρ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance indices did not differ between assessment settings and generally showed satisfactory agreement. Our findings support the comparability of CANTAB performance indices (errors, correct trials, and response sensitivity) in unsupervised, web-based assessments with in-person and laboratory tests. Reaction times are not as easily translatable from in-person to web-based testing, likely due to variations in computer hardware. The results underline the importance of examining more than one index to ascertain comparability, as high correlations can present in the context of systematic differences, which are a product of differences between measurement environments. Further work is now needed to examine web-based assessments in clinical populations and in larger samples to improve sensitivity for detecting subtler differences between test settings.
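The Bland-Altman agreement analysis mentioned above has a simple core: the mean within-subject difference estimates systematic bias, and bias ± 1.96 SD of the differences gives the limits of agreement. A sketch with invented paired reaction times, not the study's data:

```python
import numpy as np

# Hypothetical paired median reaction times (ms): lab-based vs. web-based
# sessions for the same participants.
lab = np.array([420., 455., 390., 510., 475., 440., 500., 430.])
web = np.array([450., 480., 410., 545., 500., 470., 540., 455.])

diff = web - lab                  # positive values: web assessment was slower
bias = diff.mean()                # mean difference = systematic bias
half_width = 1.96 * diff.std(ddof=1)
lower, upper = bias - half_width, bias + half_width  # limits of agreement
print(f"bias={bias:.2f} ms, limits of agreement [{lower:.2f}, {upper:.2f}]")
```

A consistent positive bias like the one in this toy example mirrors the study's finding that reaction times were systematically slower in unsupervised web-based testing, even when rank-order correlations across settings were high.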
Comorbidities confound Alzheimer’s blood tests
The concentrations of two key blood biomarkers for Alzheimer’s disease are affected by some medical conditions, which could potentially lead to misdiagnosis.
Clinical Manifestations
Harmonization of neuropsychological assessment for vascular cognitive disorders (VCD) is important for ensuring the highest standards for diagnostic and post-diagnostic care. A battery jointly proposed by the NINDS-CSN has received much support. Considering significant developments in the field, and an urgent need for consensus on remote and computerised assessment methods, an international expert group was commissioned to develop an updated harmonized battery and associated assessment guidelines for VCD using the Delphi process. A modified Delphi consensus method was used, involving an iterative, multi-staged series of structured surveys with feedback of anonymized responses from experts in the neuropsychological assessment of vascular cognitive disorders. Three rounds were planned, with the possibility of a fourth round if required to reach consensus. Literature reviews on harmonized neuropsychological assessment were conducted by a team of researchers, which informed the first structured questionnaire. Consensus was sought on the cognitive domains and subdomains that should be assessed, on specific tests per domain, and on additional guidelines, including non-traditional assessment methods and cultural-linguistic considerations. Consensus was defined as ≥75% agreement or disagreement on any statement, near consensus as 66-75%, and non-consensus as <66%. Statements that reached consensus were removed from subsequent rounds, as were most that had non-consensus, with some being reworded. A virtual meeting was held for experts to discuss contentious issues and advise on the process. Forty-four experts in neuropsychological assessment from a range of international regions consented to participate, and 31 completed the Round 1 survey. The final survey is being completed, after which the assessment battery and additional guidelines will be finalized and formally approved by all participants. 
The harmonized neuropsychological assessment standards could be adopted internationally and complement the NINDS-CSN battery, thereby further facilitating consistent neuropsychological assessment of VCD between clinicians and researchers.
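The consensus thresholds used in the Delphi rounds above reduce to a simple rule on the larger of the agreement and disagreement percentages. A sketch (the function name and interface are invented for illustration):

```python
def classify_statement(agree_pct: float, disagree_pct: float) -> str:
    """Apply the Delphi thresholds described above: >=75% agreement or
    disagreement is consensus, 66-75% is near consensus, below 66% is
    non-consensus."""
    top = max(agree_pct, disagree_pct)
    if top >= 75:
        return "consensus"
    if top >= 66:
        return "near consensus"
    return "non-consensus"

print(classify_statement(80, 10))  # removed from subsequent rounds
print(classify_statement(70, 20))  # carried forward, possibly reworded
print(classify_statement(50, 30))
```

Note that strong disagreement also counts as consensus (consensus to reject), which is why the rule looks at whichever percentage is larger.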
Association Between the Digital Clock Drawing Test and Neuropsychological Test Performance: Large Community-Based Prospective Cohort (Framingham Heart Study)
The Clock Drawing Test (CDT) has been widely used in clinical settings for cognitive assessment. Recently, a digital Clock Drawing Test (dCDT) that is able to capture the entire sequence of clock drawing behaviors was introduced. While a variety of domain-specific features can be derived from the dCDT, it has not yet been evaluated in a large community-based population whether the features derived from the dCDT correlate with cognitive function. We aimed to investigate the association between dCDT features and cognitive performance across multiple domains. Participants from the Framingham Heart Study, a large community-based cohort with longitudinal cognitive surveillance, who did not have dementia were included. Participants were administered both the dCDT and a standard protocol of neuropsychological tests that measured a wide range of cognitive functions. A total of 105 features were derived from the dCDT, and their associations with 18 neuropsychological tests were assessed with linear regression models adjusted for age and sex. A composite score derived from the dCDT features was also assessed for associations with each neuropsychological test and with cognitive status (clinically diagnosed mild cognitive impairment compared to normal cognition). The study included 2062 participants (age: mean 62, SD 13 years, 51.6% women), among whom 36 were diagnosed with mild cognitive impairment. Each neuropsychological test was associated with an average of 50 dCDT features. The composite score derived from dCDT features was significantly associated with both the neuropsychological tests and mild cognitive impairment. The dCDT can potentially be used as a tool for cognitive assessment in large community-based populations.
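The feature-by-feature analysis described above boils down to covariate-adjusted linear regression: regress each neuropsychological test score on a dCDT feature while including age and sex in the model. A sketch with simulated data (the coefficients and sample are invented, not Framingham data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Hypothetical data: one dCDT feature, a neuropsychological test score,
# and the age/sex covariates the study's linear models adjusted for.
age = rng.uniform(40, 85, n)
sex = rng.integers(0, 2, n).astype(float)
feature = rng.normal(size=n)
score = 0.8 * feature - 0.1 * age + 0.5 * sex + rng.normal(size=n)

# Linear regression of test score on the dCDT feature, adjusted for age and sex.
X = np.column_stack([np.ones(n), feature, age, sex])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta.shape)  # intercept, feature coefficient, and two covariate coefficients
```

The coefficient on the feature (here `beta[1]`) is the quantity of interest: the association between that dCDT feature and test performance after removing the age and sex effects, which the study repeated across 105 features and 18 tests.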