64 result(s) for "Jin, Haomiao"
Resting Heart Rate Variability Measured by Consumer Wearables and Its Associations with Diverse Health Domains in Five Longitudinal Studies
Heart rate variability (HRV) is widely recognized as an indicator of general health, particularly time-domain measures such as the root mean square of successive differences (RMSSD) between consecutive heartbeats. Because consumer wearables that measure HRV are widely accessible, their broad use to capture HRV as a health biomarker is feasible. Our objective was to investigate the validity of wearable-measured HRV as a general health indicator. We examined whether resting HRV assessed by wearables across five studies (two using smartwatches, two using heart rate chest straps, and one using a smart ring) exhibited expected associations with diverse health domains, including mental, physical, behavioral, functional, and physiological health. We focused on resting HRV recorded under primarily stationary conditions, either upon waking or during sleep, because such measures should reduce the effects of potential confounders such as movement artifacts, daytime caffeine intake, and postural changes. Wearable-measured resting HRV had small-to-moderate associations with more clinically oriented and trait-like (or slow-changing) health measures such as HbA1c (average blood glucose; r = −0.21, p = 0.014), depressive symptoms (r = −0.22, p = 0.024), and sleep difficulty (r = −0.11, p = 0.003). Wearable-measured resting HRV can potentially serve as a health biomarker, but further research is needed.
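The RMSSD measure this abstract describes can be computed directly from consecutive inter-beat (RR) intervals. A minimal sketch; the function name and the example RR series below are illustrative, not taken from the study:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD), in ms.

    rr_intervals_ms: consecutive inter-beat (RR) intervals in
    milliseconds, e.g. from a wearable's beat-to-beat recording.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR series in ms:
rr = [800, 810, 790, 805, 795]
value = rmssd(rr)
```

Higher RMSSD generally reflects greater parasympathetic (vagal) influence on heart rhythm, which is why it is treated as a general health indicator.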
Text Messaging as a Screening Tool for Depression and Related Conditions in Underserved, Predominantly Minority Safety Net Primary Care Patients: Validity Study
SMS text messaging is an inexpensive, private, and scalable technology-mediated assessment mode that can alleviate many of the barriers the safety net population faces in receiving depression screening. Some existing studies suggest that technology-mediated assessment encourages self-disclosure of sensitive health information such as depressive symptoms, while other studies show the opposite effect. This study aimed to evaluate the validity of using SMS text messaging to screen for depression and related conditions, including anxiety and functional disability, in a low-income, culturally diverse safety net primary care population. The study used a randomized design with 4 groups that permuted the order of the SMS text messaging and gold standard interview (INTW) assessments. Participants were recruited from the prior Diabetes-Depression Care-management Adoption Trial (DCAT). Depression was screened using the 2-item and 8-item Patient Health Questionnaire (PHQ-2 and PHQ-8, respectively), anxiety was screened using the 2-item Generalized Anxiety Disorder scale (GAD-2), and functional disability was assessed using the Sheehan Disability Scale (SDS). Participants chose to complete the assessments in English or Spanish. Internal consistency and test-retest reliability were evaluated using Cronbach alpha and the intraclass correlation coefficient (ICC), respectively. Concordance was evaluated using the ICC, the kappa statistic, the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity. A regression analysis examined the association between participant characteristics and differences in scores between the SMS text messaging and INTW assessment modes. Overall, 206 participants (average age 57.1 [SD 9.18] years; females: 119/206, 57.8%) were enrolled.
All measurements except the SMS text messaging-assessed PHQ-2 showed Cronbach alpha values ≥.70, indicating acceptable to good internal consistency. All measurements except the INTW-assessed SDS had ICC values ≥0.75, indicating good to excellent test-retest reliability. For concordance, the PHQ-8 had an ICC of 0.73 and an AUROC of 0.93, indicating good concordance. The kappa statistic, sensitivity, and specificity for major depression (PHQ-8 ≥8) were 0.43, 0.60, and 0.86, respectively. The concordance of the shorter PHQ-2, GAD-2, and SDS scales was poor to fair. The regression analysis revealed that a higher level of personal depression stigma was associated with reporting higher PHQ-8 and GAD-2 scores in the SMS text messaging mode than in the INTW mode. The analysis also determined that the differences in scores were associated with marital status and personality traits. Depression screening conducted with the longer PHQ-8 scale via SMS text messaging demonstrated good internal consistency, test-retest reliability, and concordance with the gold standard INTW assessment mode. However, care must be taken when deploying shorter scales via SMS text messaging. The regression analysis further suggested that a technology-mediated assessment such as SMS text messaging may create a private space with less pressure from personal depression stigma and may therefore encourage self-disclosure of depressive symptoms. ClinicalTrials.gov NCT01781013; https://clinicaltrials.gov/ct2/show/NCT01781013. RR2-10.2196/12392.
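The concordance statistics reported here (sensitivity, specificity, and the kappa statistic) all follow from a 2x2 cross-classification of the screening result against the reference diagnosis. A sketch with invented data; `screening_concordance` is a hypothetical helper, not code from the study:

```python
def screening_concordance(test_pos, ref_pos):
    """Sensitivity, specificity, and Cohen's kappa for a binary screen
    versus a reference diagnosis (parallel lists of 0/1)."""
    tp = sum(1 for t, r in zip(test_pos, ref_pos) if t and r)
    tn = sum(1 for t, r in zip(test_pos, ref_pos) if not t and not r)
    fp = sum(1 for t, r in zip(test_pos, ref_pos) if t and not r)
    fn = sum(1 for t, r in zip(test_pos, ref_pos) if not t and r)
    n = tp + tn + fp + fn
    sens = tp / (tp + fn)          # true positives among actual cases
    spec = tn / (tn + fp)          # true negatives among actual non-cases
    po = (tp + tn) / n             # observed agreement
    # chance agreement from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# Invented example: SMS screen (>= cutoff) vs. interview diagnosis
sms = [1, 1, 0, 0, 1, 0, 0, 0]
intw = [1, 0, 0, 0, 1, 1, 0, 0]
sens, spec, kappa = screening_concordance(sms, intw)
```

Kappa discounts the agreement expected by chance, which is why it can be modest (0.43 in the study) even when raw agreement looks high.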
Prevalence of DSM-5 mild and major neurocognitive disorder in India: Results from the LASI-DAD
India, with its rapidly aging population, faces an alarming burden of dementia. We implemented DSM-5 criteria in large-scale, nationally representative survey data in India to characterize the prevalence of mild and major neurocognitive disorder. The Harmonized Diagnostic Assessment of Dementia for the Longitudinal Aging Study in India (LASI-DAD) (N = 4,096) is a nationally representative cohort study in India using multistage area probability sampling methods. Using neuropsychological testing and informant reports, we defined DSM-5 mild and major neurocognitive disorder, reported its prevalence, and evaluated the criterion and construct validity of the algorithm using clinician-adjudicated Clinical Dementia Ratings (CDR)®. The prevalence of mild and major neurocognitive disorder, weighted to the population, is 17.6% and 7.2%, respectively. Demographic gradients with respect to age and education conform to hypothesized patterns. Among N = 2,390 participants with a clinician-adjudicated CDR, CDR ratings and DSM-5 classification agreed for N = 2,139 (89.5%) participants. The prevalence of dementia in India is higher than previously recognized. These findings, coupled with a growing number of older adults in India in the coming decades, have important implications for society, public health, and families. We are aware of no previous population-representative Indian estimates of mild cognitive impairment, a group that will become increasingly important to identify in the coming years for potential therapeutic treatment.
Attrition from longitudinal ageing studies and performance across domains of cognitive functioning: an individual participant data meta-analysis
Objectives: This paper examined the magnitude of differences in performance across domains of cognitive functioning between participants who attrited from studies and those who did not, using data from longitudinal ageing studies in which multiple cognitive tests were administered.
Design: Individual participant data meta-analysis.
Participants: Data are from 10 epidemiological longitudinal studies on ageing (total n=209 518) from several Western countries (UK, USA, Mexico, etc). Each study had multiple waves of data (range of 2–17 waves), with multiple cognitive tests administered at each wave (range of 4–17 tests). Only waves with cognitive tests and information on participant dropout at the immediate next wave for adults aged 50 years or older were used in the meta-analysis.
Measures: For each pair of consecutive study waves, we compared the difference in cognitive scores (Cohen's d) between participants who dropped out at the next study wave and those who remained. Note that our operationalisation of dropout was inclusive of all causes (eg, mortality). The proportion of participant dropout at each wave was also computed.
Results: The average proportion of dropouts between consecutive study waves was 0.26 (0.18 to 0.34). People who attrited had significantly lower levels of cognitive functioning in all domains (at the wave 2–3 years before attrition) than those who did not attrit, with small-to-medium effect sizes (overall d=0.37 (0.30 to 0.43)).
Conclusions: Older adults who attrited from longitudinal ageing studies had lower cognitive functioning (assessed at the timepoint before attrition) across all domains compared with individuals who remained. Cognitive functioning differences may contribute to selection bias in longitudinal ageing studies, impeding accurate conclusions in developmental research. In addition, examining the functional capabilities of attriters may be valuable for determining whether attriters experience functional limitations requiring healthcare attention.
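The effect sizes reported in this meta-analysis are Cohen's d values, i.e., standardized mean differences between attriters and non-attriters. A minimal sketch using a pooled standard deviation; the groups and scores below are invented for illustration:

```python
import statistics as st

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation: the standardized
    mean difference between two independent groups."""
    na, nb = len(group_a), len(group_b)
    va, vb = st.variance(group_a), st.variance(group_b)  # sample variances
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (st.mean(group_a) - st.mean(group_b)) / pooled_sd

# Invented cognitive scores: retained participants vs. later dropouts
d = cohens_d([12.0, 14.0, 13.0, 15.0, 14.0, 16.0],
             [10.0, 12.0, 11.0, 13.0, 12.0, 11.0])
```

By common rules of thumb, d around 0.2 is small and around 0.5 medium, which is the sense in which the paper's overall d=0.37 is "small-to-medium".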
Dementia ascertainment in India and development of nation‐specific cutoffs: A machine learning and diagnostic analysis
Introduction: Cognitive assessments are useful in ascertaining dementia but may be influenced by patient characteristics. India's distinct culture and demographics warrant investigation into population-specific cutoffs.
Methods: Data were utilized from the Longitudinal Aging Study in India-Diagnostic Assessment of Dementia (n = 2528). Dementia ascertainment was conducted by an online panel. A machine learning (ML) model was trained on these classifications, with explainable artificial intelligence used to assess feature importance and inform cutoffs, which were then assessed across demographic groups.
Results: The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) and the Hindi Mini-Mental State Examination (HMSE) were identified as the most impactful assessments, with optimal cutoffs of 3.8 and 25, respectively.
Discussion: An ML assessment of clinician dementia ratings identified the IQCODE and HMSE as the most impactful assessments. The optimal cutoffs of 3.8 and 25 performed excellently in the overall sample, though performance decreased in specific, more difficult-to-diagnose subgroups.
Highlights: Pioneers the use of explainable artificial intelligence in the diagnosis of dementia. Creates assessment cutoffs specific to India. Highlights differences in cutoffs across nations.
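One common way to derive a cutoff for a screening instrument such as the IQCODE or HMSE is to maximize Youden's J (sensitivity + specificity − 1) across candidate thresholds. This sketch illustrates that general idea only; the study derived its cutoffs via an explainable ML model, and the scores and labels below are invented:

```python
def optimal_cutoff(scores, has_condition):
    """Return the cutoff maximizing Youden's J, assuming HIGHER scores
    indicate MORE impairment (IQCODE-style direction).
    Hypothetical helper, not the study's actual derivation."""
    best_j, best_c = -1.0, None
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, has_condition) if s >= c and y)
        fn = sum(1 for s, y in zip(scores, has_condition) if s < c and y)
        tn = sum(1 for s, y in zip(scores, has_condition) if s < c and not y)
        fp = sum(1 for s, y in zip(scores, has_condition) if s >= c and not y)
        j = tp / (tp + fn) + tn / (tn + fp) - 1  # Youden's J
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Invented IQCODE-like scores and dementia labels
cutoff, youden_j = optimal_cutoff([3.0, 3.2, 3.5, 3.9, 4.1, 4.4],
                                  [0, 0, 0, 1, 1, 1])
```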
Developing Early Markers of Cognitive Decline and Dementia Derived From Survey Response Behaviors: Protocol for Analyses of Preexisting Large-scale Longitudinal Data
Accumulating evidence shows that subtle alterations in daily functioning are among the earliest and strongest signals that predict cognitive decline and dementia. A survey is a small slice of everyday functioning; nevertheless, completing a survey is a complex and cognitively demanding task that requires attention, working memory, executive functioning, and short- and long-term memory. Examining older people's survey response behaviors, which focus on how respondents complete surveys irrespective of the content being sought by the questions, may represent a valuable but often neglected resource that can be leveraged to develop behavior-based early markers of cognitive decline and dementia that are cost-effective, unobtrusive, and scalable for use in large population samples. This paper describes the protocol of a multiyear research project funded by the US National Institute on Aging to develop early markers of cognitive decline and dementia derived from survey response behaviors at older ages. Two types of indices summarizing different aspects of older adults' survey response behaviors are created. Indices of subtle reporting mistakes are derived from questionnaire answer patterns in a number of population-based longitudinal aging studies. In parallel, para-data indices are generated from computer use behaviors recorded on the backend server of a large web-based panel study known as the Understanding America Study (UAS). In-depth examinations of the properties of the created questionnaire answer pattern and para-data indices will be conducted for the purpose of evaluating their concurrent validity, sensitivity to change, and predictive validity. We will synthesize the indices using individual participant data meta-analysis and conduct feature selection to identify the optimal combination of indices for predicting cognitive decline and dementia. 
As of October 2022, we have identified 15 longitudinal ageing studies as eligible data sources for creating questionnaire answer pattern indices and obtained para-data from 15 UAS surveys that were fielded from mid-2014 to 2015. A total of 20 questionnaire answer pattern indices and 20 para-data indices have also been identified. We have conducted a preliminary investigation to test the utility of the questionnaire answer patterns and para-data indices for the prediction of cognitive decline and dementia. These early results are based on only a subset of indices but are suggestive of the findings that we anticipate will emerge from the planned analyses of multiple behavioral indices derived from many diverse studies. Survey response behaviors are a relatively inexpensive data source, but they are seldom used directly for epidemiological research on cognitive impairment at older ages. This study is anticipated to develop an innovative yet unconventional approach that may complement existing approaches aimed at the early detection of cognitive decline and dementia. DERR1-10.2196/44627.
Early Identification of Cognitive Impairment in Community Environments Through Modeling Subtle Inconsistencies in Questionnaire Responses: Machine Learning Model Development and Validation
The underdiagnosis of cognitive impairment hinders timely intervention for dementia. Health professionals working in the community play a critical role in the early detection of cognitive impairment, yet they face several challenges, such as a lack of suitable tools, necessary training, and potential stigmatization. This study explored a novel application integrating psychometric methods with data science techniques to model subtle inconsistencies in questionnaire response data for early identification of cognitive impairment in community environments. The study analyzed questionnaire response data from participants aged 50 years and older in the Health and Retirement Study (waves 8-9, n=12,942). Predictors included low-quality response indices generated using the graded response model from four brief questionnaires (optimism, hopelessness, purpose in life, and life satisfaction) assessing aspects of overall well-being, a focus of health professionals in communities. The primary and supplemental predicted outcomes were current cognitive impairment (derived from a validated criterion) and dementia or mortality within the next ten years. Seven predictive models were trained, and their performance was evaluated and compared. The multilayer perceptron exhibited the best performance in predicting current cognitive impairment. For the four selected questionnaires, the area under the curve (AUC) values for identifying current cognitive impairment ranged from 0.63 to 0.66, improving to 0.71 to 0.74 when the low-quality response indices were combined with age and gender. We set the threshold for assessing cognitive impairment risk in the tool based on the ratio of underdiagnosis costs to overdiagnosis costs, with a ratio of 4 as the default choice. Furthermore, the tool outperformed age- or health-based screening strategies in efficiently identifying individuals at high risk for cognitive impairment, particularly in the 50- to 59-year and 60- to 69-year age groups. The tool is freely available to the public on a portal website. We developed a novel prediction tool that integrates psychometric methods with data science to facilitate "passive or backend" cognitive impairment assessments in community settings, aiming to promote early detection of cognitive impairment. This tool simplifies the cognitive impairment assessment process, making it more adaptable and reducing burden. Our approach also presents a new perspective on questionnaire data: leveraging, rather than dismissing, low-quality responses.
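The cost-ratio threshold described above corresponds to a standard decision-theoretic rule: if a missed case (underdiagnosis) costs r times as much as a false alarm (overdiagnosis), flagging when the predicted risk reaches 1/(1+r) minimizes expected cost. A sketch under that assumption; the function names are illustrative, and the paper's exact rule may differ:

```python
def risk_threshold(cost_ratio):
    """Probability threshold that minimizes expected cost when a missed
    case is `cost_ratio` times as costly as a false alarm: flag when
    predicted risk >= 1 / (1 + cost_ratio)."""
    return 1.0 / (1.0 + cost_ratio)

def flag_at_risk(predicted_risk, cost_ratio=4):
    # With the paper's default ratio of 4, the threshold is 0.2.
    return predicted_risk >= risk_threshold(cost_ratio)
```

Raising the cost ratio lowers the threshold, trading more false alarms for fewer missed cases, which is the intended bias for an early-detection screening tool.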
Reliability and Validity of Noncognitive Ecological Momentary Assessment Survey Response Times as an Indicator of Cognitive Processing Speed in People’s Natural Environment: Intensive Longitudinal Study
Various populations with chronic conditions are at risk for decreased cognitive performance, making assessment of their cognition important. Formal mobile cognitive assessments measure cognitive performance with greater ecological validity than traditional laboratory-based testing but add to participant task demands. Given that responding to a survey is considered a cognitively demanding task itself, information that is passively collected as a by-product of ecological momentary assessment (EMA) may be a means through which people's cognitive performance in their natural environment can be estimated when formal ambulatory cognitive assessment is not feasible. We specifically examined whether the item response times (RTs) to EMA questions (eg, mood) can serve as approximations of cognitive processing speed. This study aims to investigate whether the RTs from noncognitive EMA surveys can serve as approximate indicators of between-person (BP) differences and momentary within-person (WP) variability in cognitive processing speed. Data from a 2-week EMA study investigating the relationships among glucose, emotion, and functioning in adults with type 1 diabetes were analyzed. Validated mobile cognitive tests assessing processing speed (Symbol Search task) and sustained attention (Go-No Go task) were administered together with noncognitive EMA surveys 5 to 6 times per day via smartphones. Multilevel modeling was used to examine the reliability of EMA RTs, their convergent validity with the Symbol Search task, and their divergent validity with the Go-No Go task. Other tests of the validity of EMA RTs included the examination of their associations with age, depression, fatigue, and the time of day. Overall, in BP analyses, evidence was found supporting the reliability and convergent validity of EMA question RTs from even a single repeatedly administered EMA item as a measure of average processing speed. 
BP correlations between the Symbol Search task and EMA RTs ranged from 0.43 to 0.58 (P<.001). EMA RTs had significant BP associations with age (P<.001), as expected, but not with depression (P=.20) or average fatigue (P=.18). In WP analyses, the RTs to 16 slider items and all 22 EMA items (including the 16 slider items) had acceptable (>0.70) WP reliability. After correcting for unreliability in multilevel models, EMA RTs from most combinations of items showed moderate WP correlations with the Symbol Search task (ranged from 0.29 to 0.58; P<.001) and demonstrated theoretically expected relationships with momentary fatigue and the time of day. The associations between EMA RTs and the Symbol Search task were greater than those between EMA RTs and the Go-No Go task at both the BP and WP levels, providing evidence of divergent validity. Assessing the RTs to EMA items (eg, mood) may be a method of approximating people's average levels of and momentary fluctuations in processing speed without adding tasks beyond the survey questions.
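The between-person (BP) versus within-person (WP) distinction in this abstract amounts to separating each person's average response time from their momentary deviations around that average. A simplified person-mean-centering sketch; the study's multilevel models perform this decomposition formally, and the data below are invented:

```python
import statistics as st

def split_bp_wp(rt_by_person):
    """Person-mean centering: the between-person (BP) component of each
    RT is the person's own mean; the within-person (WP) component is the
    deviation from that mean at each moment.
    rt_by_person: dict mapping person id -> list of item RTs (seconds)."""
    bp = {p: st.mean(rts) for p, rts in rt_by_person.items()}
    wp = {p: [rt - bp[p] for rt in rts]
          for p, rts in rt_by_person.items()}
    return bp, wp

# Invented EMA item RTs (seconds) for two participants
bp, wp = split_bp_wp({"p1": [2.0, 3.0, 4.0], "p2": [5.0, 5.0, 5.0]})
```

BP correlations relate the person means to average cognitive performance, while WP correlations relate the momentary deviations to momentary performance.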
Inferring Cognitive Abilities from Response Times to Web-Administered Survey Items in a Population-Representative Sample
Monitoring of cognitive abilities in large-scale survey research is receiving increasing attention. Conventional cognitive testing, however, is often impractical at the population level, highlighting the need for alternative means of cognitive assessment. We evaluated whether response times (RTs) to online survey items could be used to infer cognitive abilities. We analyzed >5 million survey item RTs from >6000 individuals, administered over 6.5 years in an internet panel, together with cognitive tests (numerical reasoning, verbal reasoning, task switching/inhibitory control). We derived measures of mean RT and intraindividual RT variability from a multilevel location-scale model, as well as from an expanded version that separated intraindividual RT variability into systematic RT adjustments (variation of RTs with item time intensities) and residual intraindividual RT variability (residual error in RTs). RT measures from the location-scale model showed weak associations with cognitive test scores. However, RT measures from the expanded model explained 22–26% of the variance in cognitive scores and had prospective associations with cognitive assessments over lag periods of at least 6.5 years (mean RTs), 4.5 years (systematic RT adjustments), and 1 year (residual RT variability). Our findings suggest that RTs in online surveys may be useful for gaining information about cognitive abilities in large-scale survey research.
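The expanded model's decomposition can be caricatured one person at a time: regress a person's RTs on item time intensities, read the slope as their systematic RT adjustment, and treat the residual spread as residual intraindividual RT variability. A toy per-person sketch with invented numbers; the paper estimates these quantities jointly in a multilevel location-scale model rather than per person:

```python
import statistics as st

def rt_adjustment_and_residual(rts, item_intensities):
    """Per-person ordinary least squares: the slope of RT on item time
    intensity approximates the 'systematic RT adjustment'; the SD of
    the residuals approximates 'residual intraindividual RT variability'."""
    mx, my = st.mean(item_intensities), st.mean(rts)
    sxx = sum((x - mx) ** 2 for x in item_intensities)
    slope = sum((x - mx) * (y - my)
                for x, y in zip(item_intensities, rts)) / sxx
    residuals = [y - (my + slope * (x - mx))
                 for x, y in zip(item_intensities, rts)]
    return slope, st.stdev(residuals)

# Invented data: RTs rise one second per unit of item time intensity
slope, resid_sd = rt_adjustment_and_residual([1.0, 2.0, 3.0, 4.0],
                                             [0.0, 1.0, 2.0, 3.0])
```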
Function and Emotion in Everyday Life With Type 1 Diabetes (FEEL-T1D): Protocol for a Fully Remote Intensive Longitudinal Study
Although short-term blood glucose levels and variability are thought to underlie diminished function and emotional well-being in people with type 1 diabetes (T1D), these relationships are poorly understood. The Function and Emotion in Everyday Life with T1D (FEEL-T1D) study focuses on investigating these short-term dynamic relationships among blood glucose levels, functional ability, and emotional well-being in adults with T1D. The aim of this study is to present the FEEL-T1D study design, methods, and study progress to date, including adaptations necessitated by the COVID-19 pandemic to implement the study fully remotely. The FEEL-T1D study will recruit 200 adults with T1D in the age range of 18-75 years. Data collection includes a comprehensive survey battery, along with 14 days of intensive longitudinal data using blinded continuous glucose monitoring, ecological momentary assessments, ambulatory cognitive tasks, and accelerometers. All study procedures are conducted remotely by mailing the study equipment and by using videoconferencing for study visits. The study received institutional review board approval in January 2019 and was funded in April 2019. Data collection began in June 2020 and is projected to end in December 2021. As of June 2021, after 12 months of recruitment, 124 participants have enrolled in the FEEL-T1D study. Approximately 87.6% (7082/8087) of ecological momentary assessment surveys have been completed with minimal missing data, and 82.0% (82/100) of the participants provided concurrent continuous glucose monitoring data, ecological momentary assessment data, and accelerometer data for at least 10 of the 14 days of data collection. Thus far, our reconfiguration of the FEEL-T1D protocol to be implemented remotely during the COVID-19 pandemic has been a success. The FEEL-T1D study will elucidate the dynamic relationships among blood glucose levels, emotional well-being, cognitive function, and participation in daily activities. 
In doing so, it will pave the way for innovative just-in-time interventions and produce actionable insights to facilitate tailoring of diabetes treatments to optimize the function and well-being of individuals with T1D. DERR1-10.2196/30901.