99 results for "automated speech analysis"
Detecting fatigue in multiple sclerosis through automatic speech analysis
Multiple sclerosis (MS) is a chronic neuroinflammatory disease characterized by central nervous system demyelination and axonal degeneration. Fatigue affects a major portion of MS patients, significantly impairing their daily activities and quality of life. Despite its prevalence, the mechanisms underlying fatigue in MS are poorly understood, and measuring fatigue remains challenging. This study evaluates the efficacy of automated speech analysis in detecting fatigue in MS patients. MS patients underwent a detailed clinical assessment and performed a comprehensive speech protocol. Using features from three different free speech tasks and a proprietary cognition score, our support vector machine model achieved an area under the ROC curve (AUC) of 0.74 in detecting fatigue. Using only free speech features evoked by a picture description task, we obtained an AUC of 0.68. This indicates that specific free speech patterns can be useful in detecting fatigue. Moreover, cognitive fatigue was significantly associated with a lower speech ratio in free speech (ρ = −0.283, p = 0.001), suggesting that it may represent a specific marker of fatigue in MS patients. Together, our results show that automated speech analysis of a single narrative free speech task offers an objective, ecologically valid, and low-burden method for fatigue assessment. Speech analysis tools offer promising potential applications in clinical practice for improving disease monitoring and management.
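The modeling step this abstract describes, a support vector machine evaluated by area under the ROC curve, can be sketched as follows. All data here are synthetic stand-ins: the feature matrix, labels, and dimensions are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for speech features (e.g., pause statistics, speech
# ratio) and binary fatigue labels; the real features are not public.
X = rng.normal(size=(120, 10))
# Label loosely tied to one informative feature column.
y = (X[:, 0] + 0.5 * rng.normal(size=120) > 0).astype(int)

# Standardize features, then fit an RBF-kernel SVM; score out-of-fold
# decision values with 5-fold cross-validation and compute the ROC AUC.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_predict(clf, X, y, cv=5, method="decision_function")
auc = roc_auc_score(y, scores)
print(f"cross-validated AUC: {auc:.2f}")
```

In the study, the features would come from the free speech tasks; here a single informative column stands in for them, so the exact AUC is an artifact of the synthetic data.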
Correlating natural language processing and automated speech analysis with clinician assessment to quantify speech-language changes in mild cognitive impairment and Alzheimer’s dementia
Background Language impairment is an important marker of neurodegenerative disorders. Despite this, there is no universal system of terminology used to describe these impairments, and large inter-rater variability can exist between clinicians assessing language. The use of natural language processing (NLP) and automated speech analysis (ASA) is emerging as a novel and potentially more objective method to assess language in individuals with mild cognitive impairment (MCI) and Alzheimer’s dementia (AD). No studies have analyzed how variables extracted through NLP and ASA correlate with language impairments identified by a clinician. Methods Audio recordings (n = 30) from participants with AD, MCI, and controls were rated by clinicians for word-finding difficulty, incoherence, perseveration, and errors in speech. Speech recordings were also transcribed, and linguistic and acoustic variables were extracted through NLP and ASA. Correlations between clinician-rated speech characteristics and the extracted variables were assessed using Spearman’s correlation. Exploratory factor analysis was applied to find common factors among variables for each speech characteristic. Results Clinician agreement was high for three of the four speech characteristics: word-finding difficulty (ICC = 0.92, p < 0.001), incoherence (ICC = 0.91, p < 0.001), and perseveration (ICC = 0.88, p < 0.001). Word-finding difficulty and incoherence were useful constructs for distinguishing MCI and AD from controls, while perseveration and speech errors were less relevant. Word-finding difficulty as a construct was explained by three factors, including number and duration of pauses, word duration, and syntactic complexity. Incoherence was explained by two factors, including increased average word duration, use of past tense, changes in age of acquisition, and more negative valence.
Conclusions Variables extracted through automated acoustic and linguistic analysis of MCI and AD speech were significantly correlated with clinician ratings of speech and language characteristics. Our results suggest that correlating NLP and ASA with clinician observations is an objective and novel approach to measuring speech and language changes in neurodegenerative disorders.
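The core analysis above, Spearman's rank correlation between clinician ratings and automatically extracted variables, can be illustrated with a minimal sketch. The ordinal ratings and the pause-duration feature below are fabricated for the example.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical data: clinician ratings of word-finding difficulty (ordinal
# 0-3) for 30 recordings, and an automatically extracted pause feature
# constructed here to track the rating plus noise.
clinician_rating = rng.integers(0, 4, size=30)
pause_duration = clinician_rating * 0.4 + rng.normal(scale=0.5, size=30)

# Rank correlation is appropriate because the ratings are ordinal.
rho, p = spearmanr(clinician_rating, pause_duration)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

With real data, one such correlation would be computed per clinician-rated characteristic and extracted variable pair, with correction for multiple comparisons.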
Detecting subtle signs of depression with automated speech analysis in a non-clinical sample
Background Automated speech analysis has gained increasing attention as an aid to diagnosing depression. Most previous studies, however, focused on comparing speech in patients with major depressive disorder to that in healthy volunteers. An alternative may be to associate speech with depressive symptoms in a non-clinical sample, as this may help to find early and sensitive markers in those at risk of depression. Methods We included n = 118 healthy young adults (mean age: 23.5 ± 3.7 years; 77% women) and asked them to talk about a positive and a negative event in their life. Then, we assessed the level of depressive symptoms with a self-report questionnaire, with scores ranging from 0 to 60. We transcribed the speech data and extracted acoustic as well as linguistic features. Then, we tested whether individuals below or above the cut-off for clinically relevant depressive symptoms differed in speech features. Next, we predicted whether someone would be below or above that cut-off, as well as the individual scores on the depression questionnaire. Since depression is associated with cognitive slowing or attentional deficits, we finally correlated depression scores with performance in the Trail Making Test. Results In our sample, n = 93 individuals scored below and n = 25 scored above the cut-off for clinically relevant depressive symptoms. Most speech features did not differ significantly between the groups, but individuals above the cut-off spoke more than those below it in both the positive and the negative story. In addition, higher depression scores in that group were associated with slower completion times on the Trail Making Test. We were able to predict with 93% accuracy who would be below or above the cut-off. In addition, we were able to predict the individual depression scores with a low mean absolute error (3.90), with the best performance achieved by a support vector machine.
Conclusions Our results indicate that even in a sample without a clinical diagnosis of depression, changes in speech relate to higher depression scores. This should be investigated in more detail in the future. In a longitudinal study, it may be tested whether speech features found in our study represent early and sensitive markers for subsequent depression in individuals at risk.
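A minimal sketch of the score-prediction step, regressing questionnaire scores on speech features and reporting mean absolute error, assuming a support-vector regressor as the abstract suggests. The data are synthetic and the hyperparameters illustrative, not the study's.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)

# Synthetic stand-in: 118 speakers, a handful of acoustic/linguistic
# features, and a self-report depression score on a 0-60 scale.
X = rng.normal(size=(118, 8))
y = np.clip(20 + 5 * X[:, 0] + rng.normal(scale=3, size=118), 0, 60)

# Standardize, fit an RBF-kernel SVR, and evaluate out-of-fold predictions.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
pred = cross_val_predict(model, X, y, cv=5)
mae = mean_absolute_error(y, pred)
print(f"cross-validated MAE: {mae:.2f}")
```

Reporting out-of-fold MAE, rather than the training error, mirrors how a generalizable error such as the 3.90 points quoted above would be estimated.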
Automatic classification of AD pathology in FTD phenotypes using natural speech
INTRODUCTION Screening for Alzheimer's disease neuropathologic change (ADNC) in individuals with atypical presentations is challenging but essential for clinical management. We trained automatic speech‐based classifiers to distinguish frontotemporal dementia (FTD) patients with ADNC from those with frontotemporal lobar degeneration (FTLD). METHODS We trained automatic classifiers with 99 speech features from 1-minute speech samples of 179 participants (ADNC = 36, FTLD = 60, healthy controls [HC] = 89). Patients’ pathology was assigned based on autopsy or cerebrospinal fluid analytes. Structural network‐based magnetic resonance imaging analyses identified anatomical correlates of distinct speech features. RESULTS Our classifier showed 0.88 ± 0.03 area under the curve (AUC) for ADNC versus FTLD and 0.93 ± 0.04 AUC for patients versus HC. Noun frequency and pause rate correlated with gray matter volume loss in the limbic and salience networks, respectively. DISCUSSION Brief naturalistic speech samples can be used for screening FTD patients for underlying ADNC in vivo. This work supports the future development of digital assessment tools for FTD. Highlights We trained machine learning classifiers for frontotemporal dementia patients using natural speech. We grouped participants by neuropathological diagnosis (autopsy) or cerebrospinal fluid biomarkers. Classifiers distinguished underlying pathology (Alzheimer's disease vs. frontotemporal lobar degeneration) well in patients. We identified important features through an explainable artificial intelligence approach. This work lays the groundwork for a speech‐based neuropathology screening tool.
Automated text‐level semantic markers of Alzheimer's disease
Introduction Automated speech analysis has emerged as a scalable, cost‐effective tool to identify persons with Alzheimer's disease dementia (ADD). Yet, most research is undermined by low interpretability and specificity. Methods Combining statistical and machine learning analyses of natural speech data, we aimed to discriminate ADD patients from healthy controls (HCs) based on automated measures of domains typically affected in ADD: semantic granularity (coarseness of concepts) and ongoing semantic variability (conceptual closeness of successive words). To test for specificity, we replicated the analyses on Parkinson's disease (PD) patients. Results Relative to controls, ADD (but not PD) patients exhibited significant differences in both measures. Also, these features robustly discriminated between ADD patients and HC, while yielding near‐chance classification between PD patients and HCs. Discussion Automated discourse‐level semantic analyses can reveal objective, interpretable, and specific markers of ADD, bridging well‐established neuropsychological targets with digital assessment tools.
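The "ongoing semantic variability" measure described above (conceptual closeness of successive words) can be illustrated as the mean cosine distance between consecutive word vectors. The tiny 3-dimensional "embeddings" below are made up for the example; real systems use learned word embeddings over large vocabularies.

```python
import numpy as np

# Toy word vectors, invented for illustration only.
toy_vectors = {
    "dog":    np.array([0.9, 0.1, 0.0]),
    "cat":    np.array([0.8, 0.2, 0.1]),
    "animal": np.array([0.7, 0.3, 0.1]),
    "piano":  np.array([0.1, 0.9, 0.3]),
}

def cosine_distance(u, v):
    """1 minus cosine similarity: 0 for parallel vectors, larger when apart."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def semantic_variability(words):
    """Mean cosine distance between each word and the next one."""
    return float(np.mean([cosine_distance(toy_vectors[a], toy_vectors[b])
                          for a, b in zip(words, words[1:])]))

low = semantic_variability(["dog", "cat", "animal"])  # semantically close run
high = semantic_variability(["dog", "piano", "cat"])  # topic jumps
print(low, high)
```

A coherent run of related words yields a small value, while abrupt conceptual jumps between successive words inflate it, which is the contrast the marker exploits.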
The classification of mild cognitive impairment or healthy ageing improves when including practice effects derived from a semantic verbal fluency task
INTRODUCTION Practice effects are an improvement in task performance with repeated testing. Their absence may indicate compromised learning and may help discriminate healthy from pathological ageing. METHODS We recorded semantic verbal fluency three times in n = 58 healthy older adults or patients with amnestic mild cognitive impairment (MCI) (72.16 ± 4.83 years old, 33 women). We extracted speech features and trained a machine learning classifier on them at each cognitive assessment. We examined which variables were informative for classification and whether they correlated with episodic memory performance. RESULTS We found smaller practice effects in patients with amnestic MCI. There was a 13% improvement in classification performance with features from the third cognitive assessment as compared to the first assessment. Practice effects correlated with episodic memory performance in healthy adults. DISCUSSION Speech features became more informative for classification when repeatedly assessed. They may be a promising tool for identifying individuals at risk of cognitive decline. Highlights In MCI, practice effects in verbal fluency tasks were smaller than in healthy adults. Smaller practice effects in MCI indicated compromised learning. Including practice effects improved the classification of MCI vs. healthy ageing. In MCI, practice effects were independent of episodic memory performance.
Validation of an Automated Speech Analysis of Cognitive Tasks within a Semiautomated Phone Assessment
Introduction: We studied the accuracy of automatic speech recognition (ASR) software by comparing ASR scores with manual scores from a verbal learning test (VLT) and a semantic verbal fluency (SVF) task in a semiautomated phone assessment in a memory clinic population. Furthermore, we examined the differentiating value of these tests between participants with subjective cognitive decline (SCD) and mild cognitive impairment (MCI). We also investigated whether the automatically calculated speech and linguistic features had additional value compared to the commonly used total scores in a semiautomated phone assessment. Methods: We included 94 participants from the memory clinic of the Maastricht University Medical Center+ (SCD N = 56 and MCI N = 38). The test leader guided the participant through a semiautomated phone assessment. The VLT and SVF were audio recorded and processed via a mobile application. The recall count and speech and linguistic features were automatically extracted. Machine learning classifiers were trained to differentiate SCD and MCI participants. Results: The intraclass correlation for inter-rater reliability between the manual and the ASR total word count was 0.89 (95% CI 0.09–0.97) for the VLT immediate recall, 0.94 (95% CI 0.68–0.98) for the VLT delayed recall, and 0.93 (95% CI 0.56–0.97) for the SVF. The full model including the total word count and speech and linguistic features had an area under the curve of 0.81 and 0.77 for the VLT immediate and delayed recall, respectively, and 0.61 for the SVF. Conclusion: There was high agreement between the ASR and manual scores, keeping the broad confidence intervals in mind. The phone-based VLT was able to differentiate between SCD and MCI and may offer opportunities for clinical trial screening.
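The inter-rater reliability reported above is an intraclass correlation between manual and ASR word counts. As an illustration, the sketch below implements ICC(3,1) (two-way mixed, consistency, single measurement); the abstract does not state which ICC variant was used, and the word counts here are invented.

```python
import numpy as np

def icc3_1(ratings: np.ndarray) -> float:
    """ICC(3,1): consistency between a fixed set of raters (here: manual
    scoring vs. ASR) on the same subjects. Shape: (n_subjects, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Mean squares from a two-way ANOVA without replication.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ss_err = np.sum((ratings - row_means[:, None]
                     - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical word counts: manual and ASR scores for 10 recordings,
# with the ASR differing by at most one word per recording.
manual = np.array([12, 8, 15, 10, 9, 14, 11, 7, 13, 10], dtype=float)
asr = manual + np.array([0, -1, 1, 0, 0, -1, 1, 0, 0, 1], dtype=float)
icc = icc3_1(np.column_stack([manual, asr]))
print(f"ICC(3,1) = {icc:.2f}")
```

Small per-recording disagreements against sizable between-subject spread give a high ICC, which is the pattern behind values like the 0.89 to 0.94 quoted above.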
Vocal Interaction Between Children With Down Syndrome and Their Parents
The purpose of this study was to describe differences in parent input and child vocal behaviors of children with Down syndrome (DS) compared with typically developing (TD) children. The goals were to describe the language learning environments at distinctly different ages in early childhood. Nine children with DS and 9 age-matched TD children participated; 4 children in each group were ages 9-11 months, and 5 were between 25 and 54 months. Measures were derived from automated vocal analysis. A digital language processor measured the richness of the child's language environment, including number of adult words, conversational turns, and child vocalizations. Analyses indicated no significant differences in words spoken by parents of younger versus older children with DS and significantly more words spoken by parents of TD children than parents of children with DS. Differences between the DS and TD groups were observed in rates of all vocal behaviors, with no differences noted between the younger versus older children with DS, and the younger TD children did not vocalize significantly more than the younger DS children. Parents of children with DS continue to provide consistent levels of input across the early language learning years; however, child vocal behaviors remain low after the age of 24 months, suggesting the need for additional and alternative intervention approaches.
Effects of Topiramate on Cognitive Function
Investigators at the Universities of Minnesota and Florida determined the effect of topiramate on linguistic behavior, verbal recall and working memory using a computational linguistics system for automated language and speech analysis (SALSA).
Automated vocal analysis of naturalistic recordings from children with autism, language delay, and typical development
For generations the study of vocal development and its role in language has been conducted laboriously, with human transcribers and analysts coding and taking measurements from small recorded samples. Our research illustrates a method to obtain measures of early speech development through automated analysis of massive quantities of day-long audio recordings collected naturalistically in children's homes. A primary goal is to provide insights into the development of infant control over infrastructural characteristics of speech through large-scale statistical analysis of strategically selected acoustic parameters. In pursuit of this goal we have discovered that the first automated approach we implemented is not only able to track children's development on acoustic parameters known to play key roles in speech, but also is able to differentiate vocalizations from typically developing children and children with autism or language delay. The method is totally automated, with no human intervention, allowing efficient sampling and analysis at unprecedented scales. The work shows the potential to fundamentally enhance research in vocal development and to add a fully objective measure to the battery used to detect speech-related disorders in early childhood. Thus, automated analysis should soon be able to contribute to screening and diagnosis procedures for early disorders, and more generally, the findings suggest fundamental methods for the study of language in natural environments.