33 results for "Linz, Nicklas"
Detecting subtle signs of depression with automated speech analysis in a non-clinical sample
Background: Automated speech analysis has gained increasing attention as an aid to diagnosing depression. Most previous studies, however, compared speech in patients with major depressive disorder to that in healthy volunteers. An alternative is to associate speech with depressive symptoms in a non-clinical sample, as this may help find early and sensitive markers in those at risk of depression.
Methods: We included n = 118 healthy young adults (mean age: 23.5 ± 3.7 years; 77% women) and asked them to talk about a positive and a negative event in their life. We then assessed the level of depressive symptoms with a self-report questionnaire, with scores ranging from 0 to 60. We transcribed the speech data and extracted acoustic as well as linguistic features. We then tested whether individuals below or above the cut-off for clinically relevant depressive symptoms differed in speech features, and predicted whether someone would fall below or above that cut-off as well as their individual score on the depression questionnaire. Since depression is associated with cognitive slowing and attentional deficits, we finally correlated depression scores with performance in the Trail Making Test.
Results: In our sample, n = 93 individuals scored below and n = 25 above the cut-off for clinically relevant depressive symptoms. Most speech features did not differ significantly between the groups, but individuals above the cut-off spoke more than those below it in both the positive and the negative story. In addition, higher depression scores in that group were associated with slower completion of the Trail Making Test. We were able to predict with 93% accuracy who would be below or above the cut-off, and to predict individual depression scores with a low mean absolute error (3.90), with the best performance achieved by a support vector machine.
Conclusions: Our results indicate that even in a sample without a clinical diagnosis of depression, changes in speech relate to higher depression scores. This should be investigated in more detail in the future. A longitudinal study could test whether the speech features found here represent early and sensitive markers of subsequent depression in individuals at risk.
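The modelling step this abstract describes (regressing questionnaire scores on extracted speech features with a support vector machine, scored by mean absolute error) can be sketched roughly as below. The data and feature dimensions are synthetic stand-ins, not the study's; only the overall approach follows the abstract.

```python
# Sketch: predict self-report depression scores (0-60) from speech
# features with a support vector regressor, as in the abstract.
# Data are synthetic stand-ins; the real study used acoustic and
# linguistic features extracted from transcribed speech.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_subjects, n_features = 118, 20                 # n = 118 as reported
X = rng.normal(size=(n_subjects, n_features))    # speech features (synthetic)
y = np.clip(30 + X[:, 0] * 8 + rng.normal(scale=4, size=n_subjects), 0, 60)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
y_pred = cross_val_predict(model, X, y, cv=5)    # out-of-fold predictions
print(f"MAE: {mean_absolute_error(y, y_pred):.2f}")  # abstract reports 3.90
```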
Measuring neuropsychiatric symptoms in patients with early cognitive decline using speech analysis
Certain neuropsychiatric symptoms (NPS), namely apathy, depression, and anxiety demonstrated great value in predicting dementia progression, representing eventually an opportunity window for timely diagnosis and treatment. However, sensitive and objective markers of these symptoms are still missing. Therefore, the present study aims to investigate the association between automatically extracted speech features and NPS in patients with mild neurocognitive disorders. Speech of 141 patients aged 65 or older with neurocognitive disorder was recorded while performing two short narrative speech tasks. NPS were assessed by the neuropsychiatric inventory. Paralinguistic markers relating to prosodic, formant, source, and temporal qualities of speech were automatically extracted, correlated with NPS. Machine learning experiments were carried out to validate the diagnostic power of extracted markers. Different speech variables are associated with specific NPS; apathy correlates with temporal aspects, and anxiety with voice quality-and this was mostly consistent between male and female after correction for cognitive impairment. Machine learning regressors are able to extract information from speech features and perform above baseline in predicting anxiety, apathy, and depression scores. Different NPS seem to be characterized by distinct speech features, which are easily extractable automatically from short vocal tasks. These findings support the use of speech analysis for detecting subtypes of NPS in patients with cognitive impairment. This could have great implications for the design of future clinical trials as this cost-effective method could allow more continuous and even remote monitoring of symptoms.
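A minimal sketch of the correlation analysis this abstract reports: associating automatically extracted paralinguistic markers with NPS scores via Spearman's rho. The feature names and all values below are hypothetical placeholders, not the study's data.

```python
# Sketch: correlate paralinguistic speech features with neuropsychiatric
# symptom scores using Spearman's rho, as the abstract describes.
# All values are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 141                                     # patients, as in the abstract
features = {                                # hypothetical paralinguistic markers
    "speech_rate":    rng.normal(size=n),   # temporal
    "pause_ratio":    rng.normal(size=n),   # temporal
    "f0_variability": rng.normal(size=n),   # prosodic
    "jitter":         rng.normal(size=n),   # voice quality / source
}
nps = {"apathy": rng.normal(size=n), "anxiety": rng.normal(size=n)}

for symptom, scores in nps.items():
    for name, values in features.items():
        rho, p = spearmanr(values, scores)
        print(f"{symptom:8s} ~ {name:14s} rho={rho:+.3f} p={p:.3f}")
```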
Remote data collection speech analysis and prediction of the identification of Alzheimer’s disease biomarkers in people at risk for Alzheimer’s disease dementia: the Speech on the Phone Assessment (SPeAk) prospective observational study protocol
Introduction: Identifying cost-effective, non-invasive biomarkers of Alzheimer's disease (AD) is a clinical and research priority. Speech data are easy to collect, and studies suggest they can identify those with AD. We do not yet know whether speech features can predict AD biomarkers in a preclinical population.
Methods and analysis: The Speech on the Phone Assessment (SPeAk) study is a prospective observational study. SPeAk recruits participants aged 50 years and over who have previously completed studies with AD biomarker collection. Participants complete a baseline telephone assessment, including spontaneous speech and cognitive tests. A 3-month visit will repeat the cognitive tests with a conversational artificial intelligence bot. Participants complete acceptability questionnaires after each visit and are randomised to receive their cognitive test results either after each visit or only after they have completed the study. We will combine SPeAk data with AD biomarker data collected in a previous study and analyse for correlations between extracted speech features and AD biomarkers. The outcome of this analysis will inform the development of an algorithm for predicting AD risk from speech features.
Ethics and dissemination: This study has been approved by the Edinburgh Medical School Research Ethics Committee (REC reference 20-EMREC-007). All participants will provide informed consent before completing any study-related procedures, and participants must have the capacity to consent to participate in this study. Participants may find that the tests, or receiving their scores, cause anxiety or stress; previous exposure to similar tests may make them more familiar and reduce this anxiety. The study information will include signposting in case of distress. Study results will be disseminated to study participants, presented at conferences, and published in a peer-reviewed journal. No study participants will be identifiable in the study results.
Detecting fatigue in multiple sclerosis through automatic speech analysis
Multiple sclerosis (MS) is a chronic neuroinflammatory disease characterized by central nervous system demyelination and axonal degeneration. Fatigue affects a major portion of MS patients, significantly impairing their daily activities and quality of life. Despite its prevalence, the mechanisms underlying fatigue in MS are poorly understood, and measuring fatigue remains a challenging task. This study evaluates the efficacy of automated speech analysis in detecting fatigue in MS patients. MS patients underwent a detailed clinical assessment and performed a comprehensive speech protocol. Using features from three different free speech tasks and a proprietary cognition score, our support vector machine model achieved an area under the ROC curve (AUC) of 0.74 in detecting fatigue. Using only free speech features evoked by a picture description task, we obtained an AUC of 0.68. This indicates that specific free speech patterns can be useful in detecting fatigue. Moreover, cognitive fatigue was significantly associated with a lower speech ratio in free speech (ρ = −0.283, p = 0.001), suggesting that it may represent a specific marker of fatigue in MS patients. Together, our results show that automated speech analysis of a single narrative free speech task offers an objective, ecologically valid, and low-burden method for fatigue assessment. Speech analysis tools offer promising potential applications in clinical practice for improving disease monitoring and management.
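The classification analysis reported here (an SVM on free-speech features, evaluated by AUC on the ROC) could look roughly like the sketch below. Features, labels, and sample size are synthetic; only the model-plus-metric pattern follows the abstract.

```python
# Sketch: detect fatigue from free-speech features with an SVM and
# evaluate by cross-validated AUC on the ROC, mirroring the abstract's
# analysis (reported AUCs: 0.74 with cognition score, 0.68 speech-only).
# Features and labels here are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 80
X_speech = rng.normal(size=(n, 10))         # free-speech features (synthetic)
y = (X_speech[:, 0] + rng.normal(scale=1.5, size=n) > 0).astype(int)  # fatigued?

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(clf, X_speech, y, cv=5, scoring="roc_auc")
print(f"speech-only AUC: {auc.mean():.2f}")
```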
An automatic measure for speech intelligibility in dysarthrias—validation across multiple languages and neurological disorders
Dysarthria, a motor speech disorder caused by muscle weakness or paralysis, severely impacts speech intelligibility and quality of life. The condition is prevalent in neurological disorders such as Parkinson's disease (PD), atypical parkinsonism such as progressive supranuclear palsy (PSP), Huntington's disease (HD), and amyotrophic lateral sclerosis (ALS). Improving intelligibility is not only an outcome that matters to patients but can also play a critical role as an endpoint in clinical research and drug development. This study validates a digital measure for speech intelligibility, the ki:SB-M intelligibility score, across multiple motor speech disorders and languages following the Digital Medicine Society (DiMe) V3 framework. The study used four datasets: healthy controls (HCs) and patients with PD, HD, PSP, and ALS from Czech, Colombian, and German populations. Participants' speech intelligibility was assessed using the ki:SB-M intelligibility score, which is derived from automatic speech recognition (ASR) systems. We performed verification with inter-ASR reliability and temporal consistency, analytical validation with correlations to gold-standard clinical dysarthria scores in each disease, and clinical validation with group comparisons between HCs and patients. Verification showed good to excellent inter-rater reliability between ASR systems and fair to good consistency. Analytical validation revealed significant correlations between the SB-M intelligibility score and established clinical measures of speech impairment across all patient groups and languages. Clinical validation demonstrated significant differences in intelligibility scores between pathological groups and healthy controls, indicating the measure's discriminative capability. The ki:SB-M intelligibility score is a reliable, valid, and clinically relevant tool for assessing speech intelligibility in motor speech disorders. It holds promise for improving clinical trials through automated, objective, and scalable assessments. Future studies should explore its utility in monitoring disease progression and therapeutic efficacy, as well as add data from further dysarthrias to the validation.
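The SB-M score itself is proprietary, so the sketch below shows only the general idea of an ASR-derived intelligibility measure: a known target text is read aloud, an ASR transcript is compared against it, and intelligibility is taken as one minus the word error rate. The texts are illustrative; nothing here is the score's actual formula.

```python
# Sketch: an ASR-derived intelligibility proxy. A known target text is
# read aloud; the ASR transcript of the recording is compared with the
# target, and intelligibility is taken as 1 - WER. General idea only;
# the ki:SB-M score's actual computation is proprietary.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via edit distance (substitutions, insertions, deletions)."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

target = "the north wind and the sun were disputing which was the stronger"
asr_out = "the north wind and sun were disputing which was stronger"  # hypothetical ASR output
print(f"intelligibility = {1 - wer(target, asr_out):.2f}")
```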
Remote cognitive assessment of older adults in rural areas by telemedicine and automatic speech and video analysis: protocol for a cross-over feasibility study
Introduction: Early detection of cognitive impairment is crucial for the successful implementation of preventive strategies. However, in isolated rural areas, so-called 'medical deserts', access to diagnosis and care is very limited. With the current pandemic crisis, now more than ever, remote solutions such as telemedicine platforms represent great potential and can help to overcome this barrier. Moreover, current advances in voice and image analysis can help overcome the barrier of physical distance by providing additional information on a patient's emotional and cognitive state. The aim of this study is therefore to evaluate the feasibility and reliability of a videoconference system for remote cognitive testing empowered by automatic speech and video analysis.
Methods and analysis: 60 participants (aged 55 and older) with and without cognitive impairment will be recruited. A complete neuropsychological assessment, including a short clinical interview, will be administered in two conditions: once by telemedicine and once face-to-face. The order of administration will be counterbalanced, so half of the sample starts with the videoconference condition and the other half with the face-to-face condition. Acceptability and user experience will be assessed among participants and clinicians in a qualitative and quantitative manner. Speech and video features will be extracted and analysed to obtain additional information on mood and engagement levels. In a subgroup, measurements of stress indicators such as heart rate and skin conductance will be compared.
Ethics and dissemination: The procedures are not invasive and there are no expected risks or burdens to participants. All participants will be informed that this is an observational study, and their consent will be obtained prior to the experiment. Demonstrating the effectiveness of such technology would make it possible to diffuse its use across rural areas ('medical deserts') and thus improve the early diagnosis of neurodegenerative pathologies, while providing data crucial for basic research. Results from this study will be published in peer-reviewed journals.
Validation of the Remote Automated ki:e Speech Biomarker for Cognition in Mild Cognitive Impairment: Verification and Validation following DiME V3 Framework
Introduction: Progressive cognitive decline is the cardinal behavioral symptom in most dementia-causing diseases, such as Alzheimer's disease. While most well-established measures of cognition might not fit tomorrow's decentralized, remote clinical trials, digital cognitive assessments will gain importance. We present the evaluation of a novel digital speech biomarker for cognition (SB-C) following the Digital Medicine Society's V3 framework: verification, analytical validation, and clinical validation.
Methods: Evaluation was done in two independent clinical samples: the Dutch DeepSpA dataset (N = 69 subjective cognitive impairment [SCI], N = 52 mild cognitive impairment [MCI], and N = 13 dementia) and the Scottish SPeAk dataset (N = 25, healthy controls). For validation, two anchor scores were used: the Mini-Mental State Examination (MMSE) and the Clinical Dementia Rating (CDR) scale.
Results: Verification: the SB-C could be reliably extracted for both languages using an automatic speech processing pipeline. Analytical validation: in both languages, the SB-C was strongly correlated with MMSE scores. Clinical validation: the SB-C significantly differed between clinical groups (including MCI and dementia), was strongly correlated with the CDR, and could track clinically meaningful decline.
Conclusion: Our results suggest that the ki:e SB-C is an objective, scalable, and reliable indicator of cognitive decline, fit for purpose as a remote assessment in clinical early-dementia trials.
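A clinical-validation step like the one reported here (checking that a biomarker score separates diagnostic groups) is often run as an omnibus test plus pairwise comparisons. The sketch below uses synthetic scores; only the group sizes follow the abstract.

```python
# Sketch: clinical-validation style group comparison of a speech
# biomarker score across diagnostic groups, with a Kruskal-Wallis
# omnibus test and pairwise Mann-Whitney U tests. Scores are synthetic;
# group sizes (69/52/13) follow the abstract.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(3)
groups = {
    "SCI":      rng.normal(0.0, 1.0, 69),
    "MCI":      rng.normal(-0.8, 1.0, 52),
    "dementia": rng.normal(-1.6, 1.0, 13),
}
h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        u, p = mannwhitneyu(groups[names[i]], groups[names[j]])
        print(f"{names[i]} vs {names[j]}: U={u:.0f}, p={p:.4f}")
```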
Automated remote speech‐based testing of individuals with cognitive decline: Bayesian agreement of transcription accuracy
Introduction: We investigated the agreement between automated and gold-standard manual transcriptions of telephone chatbot-based semantic verbal fluency testing.
Methods: We examined 78 cases from the Screening over Speech in Unselected Populations for Clinical Trials in AD (PROSPECT-AD) study, including cognitively normal individuals and individuals with subjective cognitive decline, mild cognitive impairment, and dementia. We used Bayesian Bland-Altman analysis of word count and of the qualitative features of semantic cluster size, cluster switches, and word frequencies.
Results: We found high levels of agreement for word count, with a 93% probability of a newly observed difference being below the minimally important difference. The qualitative features had fair levels of agreement. Word count reached high levels of discrimination between cognitively impaired and unimpaired individuals, regardless of transcription mode.
Discussion: Our results support the use of automated speech recognition, particularly for the assessment of quantitative speech features, even when using data from telephone calls with cognitively impaired individuals in their homes.
Highlights: High levels of agreement were found between automated and gold-standard manual transcriptions of telephone chatbot-based semantic verbal fluency testing, particularly for word count. The qualitative features had fair levels of agreement. Word count reached high levels of discrimination between cognitively impaired and unimpaired individuals, regardless of transcription mode. Automated speech recognition for the assessment of quantitative and qualitative speech features seems feasible and reliable, even when using data from telephone calls with cognitively impaired individuals in their homes.
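The agreement analysis this abstract describes can be sketched in a simplified form: mean difference and 95% limits of agreement between manual and automated word counts, plus the share of the difference distribution falling within a minimally important difference. This uses a plain normal approximation, not the paper's full Bayesian model, and the counts and MID value are synthetic.

```python
# Sketch: simplified Bland-Altman agreement between manual and automated
# word counts from a verbal fluency test. Computes mean difference, 95%
# limits of agreement, and the probability (normal approximation, not
# the paper's Bayesian model) that a new difference falls within a
# minimally important difference (MID). Counts are synthetic.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
manual = rng.poisson(20, size=78).astype(float)      # 78 cases, as in the abstract
automated = manual + rng.normal(0, 1.0, size=78)     # small transcription error

diff = automated - manual
bias, sd = diff.mean(), diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)           # 95% limits of agreement
mid = 3.0                                            # hypothetical MID, in words
p_within_mid = norm.cdf(mid, bias, sd) - norm.cdf(-mid, bias, sd)

print(f"bias={bias:+.2f} words, LoA=({loa[0]:+.2f}, {loa[1]:+.2f})")
print(f"P(|new difference| < MID) = {p_within_mid:.2%}")
```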
Validation of an Automated Speech Analysis of Cognitive Tasks within a Semiautomated Phone Assessment
Introduction: We studied the accuracy of automatic speech recognition (ASR) software by comparing ASR scores with manual scores on a verbal learning test (VLT) and a semantic verbal fluency (SVF) task in a semiautomated phone assessment in a memory clinic population. Furthermore, we examined the differentiating value of these tests between participants with subjective cognitive decline (SCD) and mild cognitive impairment (MCI). We also investigated whether the automatically calculated speech and linguistic features had additional value over the commonly used total scores in a semiautomated phone assessment.
Methods: We included 94 participants from the memory clinic of the Maastricht University Medical Center+ (SCD N = 56 and MCI N = 38). The test leader guided the participant through a semiautomated phone assessment. The VLT and SVF were audio-recorded and processed via a mobile application, and the recall count and speech and linguistic features were automatically extracted. Diagnostic groups were classified by training machine learning classifiers to differentiate SCD and MCI participants.
Results: The intraclass correlation for inter-rater reliability between the manual and the ASR total word count was 0.89 (95% CI 0.09-0.97) for the VLT immediate recall, 0.94 (95% CI 0.68-0.98) for the VLT delayed recall, and 0.93 (95% CI 0.56-0.97) for the SVF. The full model, including the total word count and speech and linguistic features, had an area under the curve of 0.81 and 0.77 for the VLT immediate and delayed recall, respectively, and 0.61 for the SVF.
Conclusion: There was high agreement between the ASR and manual scores, keeping the broad confidence intervals in mind. The phone-based VLT was able to differentiate between SCD and MCI and may offer opportunities for clinical trial screening.
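The inter-rater reliability statistic reported here is an intraclass correlation between manual and ASR word counts. A minimal sketch, assuming a two-way random-effects ICC(2,1) in the Shrout and Fleiss sense and synthetic counts standing in for the VLT/SVF scores:

```python
# Sketch: inter-rater reliability between manual and ASR word counts via
# a two-way random-effects ICC(2,1), computed from classical ANOVA mean
# squares. Counts are synthetic stand-ins for the VLT/SVF scores.
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) for an (n targets x k raters) matrix (Shrout & Fleiss)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)                      # per-subject means
    col_means = ratings.mean(axis=0)                      # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between-subjects MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between-raters MS
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                       # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(5)
manual = rng.poisson(18, size=94).astype(float)   # 94 participants, as reported
asr = manual + rng.normal(0, 1.2, size=94)        # ASR counts with small error
print(f"ICC(2,1) = {icc2_1(np.column_stack([manual, asr])):.2f}")
```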