366 results for "Digital cognitive assessments"
A Rapid, Mobile Neurocognitive Screening Test to Aid in Identifying Cognitive Impairment and Dementia (BrainCheck): Cohort Study
The US population over the age of 65 is expected to double by the year 2050. Concordantly, the incidence of dementia is projected to increase. The subclinical stage of dementia begins years before signs and symptoms appear. Early detection of cognitive impairment and/or cognitive decline may allow for interventions to slow its progression. Furthermore, early detection may allow for implementation of care plans that may affect the quality of life of those affected and their caregivers.

We sought to determine the accuracy and validity of BrainCheck Memory as a diagnostic aid for age-related cognitive impairment, as compared against physician diagnosis and other commonly used neurocognitive screening tests, including the Saint Louis University Mental Status (SLUMS) exam, the Mini-Mental State Examination (MMSE), and the Montreal Cognitive Assessment (MoCA).

We tested 583 volunteers over the age of 49 from various community centers and living facilities in Houston, Texas. The volunteers were divided into five cohorts: a normative population and four comparison groups for the SLUMS exam, the MMSE, the MoCA, and physician diagnosis. Each comparison group completed their respective assessment and BrainCheck Memory.

A total of 398 subjects were included in the normative population. A total of 84 participants were in the SLUMS exam cohort, 51 in the MMSE cohort, 35 in the MoCA cohort, and 18 in the physician cohort. BrainCheck Memory assessments were significantly correlated with the SLUMS exam, with coefficients ranging from .5 to .7. Correlation coefficients for the MMSE and BrainCheck and for the MoCA and BrainCheck were also significant. Of the 18 subjects evaluated by a physician, 9 (50%) were healthy, 6 (33%) were moderately impaired, and 3 (17%) were severely impaired. A significant difference was found between the severely and moderately impaired subjects and the healthy subjects (P=.02). We derived a BrainCheck Memory composite score that showed stronger correlations with the standard assessments than the individual BrainCheck assessments did. Receiver operating characteristic (ROC) curve analysis of this composite score found a sensitivity of 81% and a specificity of 94%.

BrainCheck Memory provides a sensitive and specific metric for age-related cognitive impairment in older adults, with the advantages of a mobile, digital, and easy-to-use test.

Trial registration: ClinicalTrials.gov NCT03608722; https://clinicaltrials.gov/ct2/show/NCT03608722 (Archived by WebCite at http://www.webcitation.org/76JLoYUGf).
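The sensitivity and specificity reported above come from a ROC analysis of a composite score. As a minimal sketch of how such figures are typically derived (not the authors' code: the data, the Youden-index threshold rule, and the scikit-learn usage are all assumptions for illustration):

```python
# Hypothetical ROC analysis: choose a cut-off for a composite score and
# read off sensitivity/specificity. All data below are simulated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)          # 1 = impaired, 0 = healthy
score = y_true * 1.2 + rng.normal(size=200)    # impaired subjects score higher

fpr, tpr, thresholds = roc_curve(y_true, score)
youden = tpr - fpr                  # Youden's J = sensitivity + specificity - 1
best = np.argmax(youden)            # threshold maximizing J
print(f"AUC         = {roc_auc_score(y_true, score):.2f}")
print(f"sensitivity = {tpr[best]:.2f}")
print(f"specificity = {1 - fpr[best]:.2f} at cut-off {thresholds[best]:.2f}")
```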
Practice effects on digital cognitive assessment tools: insights from the Defense Automated Neurobehavioral Assessment battery
INTRODUCTION: Digital cognitive assessments offer a promising approach to monitoring cognitive impairments, but repeated use can introduce practice effects, potentially masking changes in cognitive status. We evaluated practice effects using the Defense Automated Neurobehavioral Assessment (DANA), a digital battery designed for cognitive monitoring.

METHODS: We analyzed data from 116 participants from the Boston University Alzheimer's Disease Research Center, comparing response times across two DANA sessions roughly 90 days apart, while controlling for cognitive status, sex, age, and education.

RESULTS: Modest practice effects were found, and cognitive impairment was associated with slower response times in several tasks. Classification models, including logistic regression and random forest classifiers, achieved accuracies of up to 71% in assessing cognitive status.

DISCUSSION: Our study establishes a framework for evaluating practice effects in digital cognitive assessment tools. Future work should expand the sample size and diversity to enhance the generalizability of findings in broader clinical contexts.

Highlights:
• We systematically evaluated practice effects using the DANA battery as a case study.
• Modest practice effects were observed across two testing sessions, with a median inter-session interval of 93 days.
• Cognitive impairment was significantly associated with slower response times in key tasks (p < 0.001).
• Our framework offers a systematic approach for evaluating practice effects in digital cognitive tools.
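The abstract names logistic regression and random forest models for classifying cognitive status from response times while accounting for demographics. A hedged sketch of that kind of pipeline (simulated data; the feature choices and scikit-learn usage are assumptions, not the study's implementation):

```python
# Illustrative only: classify cognitive status from task response times
# plus demographic covariates, with cross-validated accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 116
X = np.column_stack([
    rng.normal(600, 80, n),   # simple reaction time (ms), simulated
    rng.normal(900, 120, n),  # procedural reaction time (ms), simulated
    rng.normal(72, 6, n),     # age
    rng.normal(16, 2, n),     # years of education
    rng.integers(0, 2, n),    # sex
])
# Simulated label: impairment loosely tied to slower simple reaction time
y = (X[:, 0] + rng.normal(0, 80, n) > 620).astype(int)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(type(model).__name__, f"accuracy = {acc:.2f}")
```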
Digital cognitive assessments as low‐burden markers for predicting future cognitive decline and tau accumulation across the Alzheimer's spectrum
BACKGROUND: Digital cognitive assessments, particularly those that can be done at home, present as low-burden biomarkers for participants and patients alike, but their effectiveness in the diagnosis of Alzheimer's disease (AD) or predicting its trajectory is still unclear. Here, we assessed what utility or added value these digital cognitive assessments provide for identifying those at high risk of cognitive decline.

METHODS: We analyzed >500 Alzheimer's Disease Neuroimaging Initiative participants who underwent a brief digital cognitive assessment and amyloid beta (Aβ)/tau positron emission tomography scans, examining their ability to distinguish cognitive status and predict cognitive decline.

RESULTS: Performance on the digital cognitive assessment was superior to both cortical Aβ and entorhinal tau in detecting mild cognitive impairment and future cognitive decline, with mnemonic discrimination deficits emerging as the most critical measure for predicting decline and future tau accumulation.

DISCUSSION: Digital assessments are effective at identifying at-risk individuals, supporting their utility as low-burden tools for early AD detection and monitoring.

Highlights:
• Performance on digital cognitive assessments predicts progression to mild cognitive impairment at a higher proficiency compared to amyloid beta and tau.
• Deficits in mnemonic discrimination are indicative of future cognitive decline.
• Impaired mnemonic discrimination predicts future entorhinal and inferior temporal tau.
Integrating plasma p‐tau217 and digital cognitive assessments for early detection in Alzheimer's disease
INTRODUCTION: Plasma phosphorylated tau (p-tau)217 is an early Alzheimer's disease (AD) biomarker, but the timing of pathological changes and cognitive decline varies substantially. Digital cognitive assessments can detect subtle cognitive changes, suggesting they may complement p-tau217 for early detection. Here, we evaluate whether combining these tools improves the detection of individuals at risk for future decline.

METHODS: We analyzed 954 amyloid-positive cognitively unimpaired individuals who completed a digital cognitive assessment and a blood test for p-tau217, assessing their ability to predict future decline on the Preclinical Alzheimer Cognitive Composite (PACC) and Mini-Mental State Examination (MMSE).

RESULTS: Combining performance on a digital cognitive assessment with p-tau217 improved identification of individuals who declined on the PACC and MMSE in the next 5 years. The predictive value was stronger in apolipoprotein E ε4 noncarriers but did not differ by sex.

DISCUSSION: This approach offers a sensitive method for identifying individuals at high risk for AD-related cognitive decline.

Highlights:
• Combining plasma phosphorylated tau 217 with baseline digital cognitive assessment improved the prediction of cognitive decline on gold-standard neuropsychological tests over the next 5 years, achieving greater accuracy than either measure alone.
• This combination also predicted a decline in a global cognitive screening test.
• Pairing a blood test with a digital cognitive assessment offers a scalable and feasible approach for Alzheimer's disease screening.
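To make the "combination beats either measure alone" logic concrete, here is an illustrative sketch (simulated data; the effect sizes, the logistic model, and the scikit-learn usage are assumptions, not the study's analysis): fit each predictor alone and together, then compare cross-validated AUCs.

```python
# Hypothetical comparison: blood biomarker vs. digital score vs. both combined.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 954
ptau = rng.normal(size=n)       # standardized plasma p-tau217 (simulated)
digital = rng.normal(size=n)    # standardized digital cognitive score (simulated)
# Simulated risk: higher p-tau and lower digital performance both contribute
risk = 0.8 * ptau - 0.8 * digital + rng.normal(size=n)
declined = (risk > np.percentile(risk, 75)).astype(int)  # top quartile declines

for name, X in {"p-tau217": ptau[:, None],
                "digital":  digital[:, None],
                "combined": np.column_stack([ptau, digital])}.items():
    pred = cross_val_predict(LogisticRegression(), X, declined,
                             cv=5, method="predict_proba")[:, 1]
    print(f"{name:9s} AUC = {roc_auc_score(declined, pred):.2f}")
```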
Validation status of cognitive digital assessments by the FDA BEST framework and context of use in preclinical AD studies: A systematic review
Digital cognitive assessments have rapidly expanded in Alzheimer's disease (AD) research, offering a sensitive, scalable, and cost-effective alternative to traditional neuropsychological tests. This systematic review examines the validation and utility of digital cognitive assessments in cognitively normal (CN) individuals and explores their potential classification within the US Food and Drug Administration's Biomarkers, Endpoints, and other Tools (FDA BEST) framework. Additionally, we provide recommendations to consider for their implementation. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we searched PubMed for studies validating digital cognitive tools against paper-based tests and standard AD biomarkers, including measures of amyloid beta and tau in fluid and neuroimaging biomarkers. Our findings suggest potential use as risk or monitoring biomarkers, though further longitudinal validation is needed. This review highlights the latest advancements in digital cognitive assessments, their role as novel AD biomarkers, and essential considerations for their effective use in AD.

Highlights:
• Digital cognitive assessments for preclinical Alzheimer's disease (AD) are associated with established biomarkers, including paper-based neuropsychological tests and amyloid beta and tau measures in both fluid and neuroimaging techniques.
• These assessments have the potential to serve as novel AD biomarkers classified within the US Food and Drug Administration's Biomarkers, Endpoints, and other Tools framework and context of use, but long-term studies spanning different disease stages are needed to fully establish their validity for some of the biomarker categories.
• Several biases may be present when conducting digital cognitive assessments; their optimal use should follow specific recommendations to minimize them.
The ADNI Administrative Core: Ensuring ADNI's success and informing future AD clinical trials
The Alzheimer's Disease Neuroimaging Initiative (ADNI) Administrative Core oversees and coordinates all ADNI activities to ensure the success and maximize the impact of ADNI in advancing Alzheimer's disease (AD) research and clinical trials. It manages finances and develops policies for data sharing, publications using ADNI data, and access to ADNI biospecimens. The Core develops and executes pilot projects to guide future ADNI activities and identifies key innovative methods for inclusion in ADNI. For ADNI4, the Administrative Core collaborates with the Engagement, Clinical, and Biomarker Cores to develop and evaluate novel digital methods and infrastructure for participant recruitment, screening, and assessment. The goal of these efforts is to enroll 500 participants, including >50% from underrepresented populations, 40% with mild cognitive impairment, and 80% with elevated AD biomarkers. This new approach also provides a unique opportunity to validate novel methods.

Highlights:
• The Alzheimer's Disease Neuroimaging Initiative (ADNI) Administrative Core oversees and coordinates all ADNI activities.
• The overall goal is to ensure ADNI's success and help design future Alzheimer's disease (AD) clinical trials.
• A key innovation is data sharing without embargo to maximize scientific impact.
• For ADNI4, novel digital methods for recruitment and assessment were developed.
• New methods are designed to improve the participation of underrepresented populations.
Digital Clock and Recall is superior to the Mini-Mental State Examination for the detection of mild cognitive impairment and mild dementia
Background: Disease-modifying treatments for Alzheimer's disease highlight the need for early detection of cognitive decline. However, at present, most primary care providers do not perform routine cognitive testing, in part due to a lack of access to practical cognitive assessments, as well as the time and resources to administer and interpret the tests. Brief and sensitive digital cognitive assessments, such as the Digital Clock and Recall (DCR™), have the potential to address this need. Here, we examine the advantages of the DCR over the Mini-Mental State Examination (MMSE) in detecting mild cognitive impairment (MCI) and mild dementia.

Methods: We studied 706 participants from the multisite Bio-Hermes study (age mean ± SD = 71.5 ± 6.7; 58.9% female; years of education mean ± SD = 15.4 ± 2.7; primary language English), classified as cognitively unimpaired (CU; n = 360), mild cognitive impairment (MCI; n = 234), or probable mild Alzheimer's dementia (pAD; n = 111) based on a review of medical history with selected cognitive and imaging tests. We evaluated cognitive classifications (MCI and early dementia) based on the DCR and the MMSE against cohorts based on the results of the Rey Auditory Verbal Learning Test (RAVLT), the Trail Making Test-Part B (TMT-B), and the Functional Activities Questionnaire (FAQ). We also compared the influence of demographic variables such as race (White vs. non-White), ethnicity (Hispanic vs. non-Hispanic), and level of education (≥15 years vs. <15 years) on the DCR and MMSE scores.

Results: The DCR was superior on average to the MMSE in classifying mild cognitive impairment and early dementia (AUC = 0.70 for the DCR vs. 0.63 for the MMSE). DCR administration was also significantly faster, completed in less than 3 minutes regardless of cognitive status and age. Among 104 individuals who were labeled as "cognitively unimpaired" by the MMSE (score ≥28) but actually had verbal memory impairment as confirmed by the RAVLT, the DCR identified 84 (80.7%) as impaired. Moreover, the DCR score was significantly less biased by ethnicity than the MMSE, with no significant difference in DCR scores between Hispanic and non-Hispanic individuals.

Conclusions: The DCR outperforms the MMSE in detecting and classifying cognitive impairment, in a fraction of the time, and without being influenced by a patient's ethnicity. The results support the utility of the DCR as a sensitive and efficient cognitive assessment in primary care settings.

Trial registration: ClinicalTrials.gov identifier NCT04733989.
Acceptable standards for clinic‐based digital cognitive assessments: Recommendations from the Global CEO Initiative on Alzheimer's Disease
The rising prevalence of mild cognitive impairment (MCI) and dementia, combined with persistent underdiagnosis, is driving an increased need for scalable cognitive assessment tools. Digital cognitive assessments (DCAs) offer a promising solution by addressing longstanding barriers to routine cognitive testing and diagnosis. However, variations in performance and intended use have created confusion about their clinical applications and utility. The Global CEO Initiative on Alzheimer's Disease convened a DCA Workgroup to define the preferred characteristics of DCAs to meet the needs of patients, health care providers, and regulators for three clinical contexts: (1) initial detection of cognitive impairment, (2) diagnostic support for MCI and dementia, and (3) characterization of cognitive profiles to support identifying etiology. In the near term, ensuring that DCAs meet or exceed the performance of non‐digital tools is a priority. DCAs must be validated in the intended use population with well‐characterized study samples and inclusive designs.
A Stable and Scalable Digital Composite Neurocognitive Test for Early Dementia Screening Based on Machine Learning: Model Development and Validation Study
Dementia has become a major public health concern due to its heavy disease burden. Mild cognitive impairment (MCI) is a transitional stage between healthy aging and dementia, and early identification of MCI is an essential step in dementia prevention. Based on machine learning (ML) methods, this study aimed to develop and validate a stable and scalable panel of cognitive tests for the early detection of MCI and dementia based on the Chinese Neuropsychological Consensus Battery (CNCB) in the Chinese Neuropsychological Normative Project (CN-NORM) cohort.

CN-NORM was a nationwide, multicenter study conducted in China with 871 participants, including an MCI group (n=327, 37.5%), a dementia group (n=186, 21.4%), and a cognitively normal (CN) group (n=358, 41.1%). We used the following 4 algorithms to select candidate variables: the F-score according to the SelectKBest method, the area under the curve (AUC) from logistic regression (LR), P values from the logit method, and backward stepwise elimination. Different models were constructed after considering the administration duration and the complexity of combinations of various tests. Receiver operating characteristic curves and AUC metrics were used to evaluate the discriminative ability of the models via stratified sampling cross-validation and LR and support vector classification (SVC) algorithms. The model was further validated in the Alzheimer's Disease Neuroimaging Initiative phase 3 (ADNI-3) cohort (N=743), which included 416 (56%) CN subjects, 237 (31.9%) patients with MCI, and 90 (12.1%) patients with dementia.

Except for social cognition, all other domains in the CNCB differed between the MCI and CN groups (P<.008). In the feature selection results for discriminating between the MCI and CN groups, the Hopkins Verbal Learning Test-5 minutes Recall performed best, with the highest mean AUC of up to 0.80 (SD 0.02) and an F-score of up to 258.70. The scalability of model 5 (Hopkins Verbal Learning Test-5 minutes Recall and Trail Making Test-B) was the lowest. Model 5 achieved a higher level of discrimination than the Hong Kong Brief Cognitive test score in distinguishing between the MCI and CN groups (P<.05). Model 5 also provided the highest sensitivity, of up to 0.82 (range 0.72-0.92) and 0.83 (range 0.75-0.91) according to LR and SVC, respectively. This model yielded a similarly robust discriminative performance in the ADNI-3 cohort for differentiation between the MCI and CN groups, with a mean AUC of up to 0.81 (SD 0) according to both LR and SVC algorithms.

We developed a stable and scalable composite neurocognitive test based on ML that could differentiate not only between patients with MCI and controls but also between patients at different stages of cognitive impairment. This composite neurocognitive test is a feasible and practical digital biomarker that can potentially be used in large-scale cognitive screening and intervention studies.
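The feature-selection and validation workflow outlined above (F-scores via the SelectKBest method, then stratified cross-validation with LR and SVC) can be sketched as follows. The data, number of tests, and scikit-learn pipeline are illustrative assumptions, not the CNCB implementation:

```python
# Sketch: rank simulated cognitive tests by ANOVA F-score, then estimate
# cross-validated AUC for LR and SVC on the top-ranked features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n, n_tests = 871, 10
X = rng.normal(size=(n, n_tests))   # scores on 10 hypothetical cognitive tests
# Simulated MCI-vs-CN label driven mainly by the first two tests
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# Rank candidate tests by F-score, as with the SelectKBest method
selector = SelectKBest(f_classif, k=2).fit(X, y)
print("F-scores:", np.round(selector.scores_, 1))

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for clf in (LogisticRegression(max_iter=1000), SVC(probability=True)):
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=2), clf)
    auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean()
    print(type(clf).__name__, f"AUC = {auc:.2f}")
```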
Validity and usability for digital cognitive assessment tools to screen for mild cognitive impairment: a randomized crossover trial
Background: The practicality of implementing digital cognitive screening tests in primary health care (PHC) for the detection of cognitive impairments, particularly among populations with lower education levels, remains unclear. The aim of this study is to assess the validity and usability of digital cognitive screening tests in PHC settings.

Methods: We used a randomized crossover design in which 47 community-dwelling participants aged 65 and above were randomized into two groups. One group completed the paper-based Mini-Mental State Examination (MMSE) and Clock Drawing Test (CDT) first, followed by the tablet-based digital versions after a two-week washout period, while the other group did the reverse. Validity was assessed by Spearman correlation, linear mixed-effects models, sensitivity, specificity, and area under the curve (AUC). Usability was assessed through the Usefulness, Satisfaction, and Ease of Use (USE) questionnaire, participant preferences, and assessment duration. Regression analyses were conducted to explore the impact of usability on digital test scores, controlling for cognitive level, education, age, and gender.

Results: Regarding validity, the digital tests showed moderate correlations with the paper-based versions and superior AUC performance: the AUC was 0.65 for the MMSE versus 0.82 for the electronic MMSE (eMMSE), and 0.45 for the CDT compared to 0.65 for the electronic CDT (eCDT). Regarding usability, while older participants gave positive feedback on the digital tests (P < 0.001), they preferred the paper-based versions. The eMMSE took significantly longer to complete than the MMSE, averaging 7.11 minutes versus 6.21 minutes (P = 0.01). Notably, digital test scores were minimally affected by subjective attitudes but strongly linked to test duration (β = -0.62, 95% CI: -1.07 to -0.17).

Conclusions: Digital cognitive tests are valid and feasible in PHC settings but face implementation challenges, especially in usability and adaptability among individuals with lower education levels.
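Two of the validity analyses named above, Spearman correlation between paper and digital scores and regression of digital scores on test duration with covariates, can be sketched in a few lines. This is a simplified stand-in (ordinary least squares rather than the study's mixed-effects models; all data, covariates, and SciPy/statsmodels usage are assumptions):

```python
# Illustrative validity checks on simulated paper vs. digital test scores.
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n = 47
paper = rng.normal(24, 4, n)             # paper-based MMSE scores (simulated)
digital = paper + rng.normal(0, 2, n)    # digital version, noisy copy
duration = rng.normal(7, 1, n)           # minutes to complete (simulated)

rho, p = spearmanr(paper, digital)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# digital score ~ duration + age + education (covariates simulated)
covariates = np.column_stack([duration, rng.normal(72, 5, n), rng.normal(9, 3, n)])
fit = sm.OLS(digital, sm.add_constant(covariates)).fit()
print(f"duration coefficient = {fit.params[1]:.2f}")  # effect of longer tests
```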