2,885 results for "Alexander, Daniel C."
SANDI: A compartment-based model for non-invasive apparent soma and neurite imaging by diffusion MRI
This work introduces a compartment-based model for apparent cell body (namely soma) and neurite density imaging (SANDI) using non-invasive diffusion-weighted MRI (DW-MRI). The existing conjecture in brain microstructure imaging through DW-MRI presents water diffusion in white (WM) and gray (GM) matter as restricted diffusion in neurites, modelled by infinite cylinders of null radius embedded in the hindered extra-neurite water. The extra-neurite pool in WM corresponds to water in the extra-axonal space, but in GM it combines water in the extra-cellular space with water in soma. While several studies showed that this microstructure model successfully describes DW-MRI data in WM and GM at b ≤ 3,000 s/mm² (or 3 ms/μm²), it has also been shown to fail in GM at high b values (b ≫ 3,000 s/mm² or 3 ms/μm²). Here we hypothesise that the unmodelled soma compartment (i.e. the cell body of any brain cell type, from neuroglia to neurons) may be responsible for this failure, and propose SANDI as a new model of brain microstructure in which soma of any brain cell type is explicitly included. We assess the effects of size and density of soma on the direction-averaged DW-MRI signal at high b values and the regime of validity of the model using numerical simulations and comparison with experimental data from mouse (bmax = 40,000 s/mm², or 40 ms/μm²) and human (bmax = 10,000 s/mm², or 10 ms/μm²) brain. We show that SANDI defines new contrasts representing complementary information on the brain cyto- and myelo-architecture. Indeed, we show maps of MR soma and neurite signal fractions from 25 healthy human subjects that remarkably mirror contrasts of histological images of brain cyto- and myelo-architecture. Although still under validation, SANDI might provide new insight into tissue architecture by introducing a new set of biomarkers of potentially great value for biomedical applications and pure neuroscience.
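The abstract quotes every b-value in both s/mm² and ms/μm²; the two units differ only by a factor of 1,000 (1 s = 10³ ms and 1 mm² = 10⁶ μm²). A minimal sketch of the conversion (the function name is ours, not from the paper):

```python
def b_s_per_mm2_to_ms_per_um2(b):
    """Convert a diffusion b-value from s/mm^2 to ms/um^2.

    1 s = 1e3 ms and 1 mm^2 = 1e6 um^2, so the factor is 1e3 / 1e6 = 1/1000.
    """
    return b / 1000.0
```

For example, the threshold of 3,000 s/mm² converts to 3 ms/μm², and the mouse protocol's bmax of 40,000 s/mm² to 40 ms/μm², matching the pairs quoted in the abstract.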
Interpreting Clifford Geertz: Cultural Investigation in the Social Sciences
"Meaning is everywhere and everybody must interpret. Nobody argued this more persuasively than Clifford Geertz. From Balinese cock fights to sheep raids to theater states, Geertz showed that there is no escape from the sticky webs of meaning that capture our lives. But what exactly is Geertz saying, and should we still listen to him? After all, many argue that his ideas have run out of steam. This book confronts Geertz and his critics, offering surprising answers from various disciplines and identifying for the first time the contours of 'the Geertz Effect.'"--Provided by publisher.
Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes from clinical MRI exams with scans of different orientation, resolution and contrast
• SynthSR turns clinical scans of different resolution and contrast into 1 mm MP-RAGEs.
• It relies on a CNN trained on fake images synthesized on the fly at every minibatch.
• It can be retrained for any combination of resolutions/contrasts without new data.
• It enables segmentation, registration, etc. with existing software (e.g., FreeSurfer).
• Code is open source.
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols, even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses, and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry. The source code is publicly available at https://github.com/BBillot/SynthSR.
Predicting Alzheimer's disease progression using deep recurrent neural networks
Early identification of individuals at risk of developing Alzheimer's disease (AD) dementia is important for developing disease-modifying therapies. In this study, given multimodal AD markers and clinical diagnosis of an individual from one or more timepoints, we seek to predict the clinical diagnosis, cognition and ventricular volume of the individual for every month (indefinitely) into the future. We proposed and applied a minimal recurrent neural network (minimalRNN) model to data from The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) challenge, comprising longitudinal data of 1677 participants (Marinescu et al., 2018) from the Alzheimer's Disease Neuroimaging Initiative (ADNI). We compared the performance of the minimalRNN model and four baseline algorithms up to 6 years into the future. Most previous work on predicting AD progression ignores the issue of missing data, which is a prevalent issue in longitudinal data. Here, we explored three different strategies to handle missing data. Two of the strategies treated the missing data as a "preprocessing" issue, by imputing the missing data using the previous timepoint ("forward filling") or linear interpolation ("linear filling"). The third strategy utilized the minimalRNN model itself to fill in the missing data both during training and testing ("model filling"). Our analyses suggest that the minimalRNN with "model filling" compared favorably with baseline algorithms, including support vector machine/regression, linear state space (LSS) model, and long short-term memory (LSTM) model. Importantly, although the training procedure utilized longitudinal data, we found that the trained minimalRNN model exhibited similar performance when using only 1 input timepoint or 4 input timepoints, suggesting that our approach might work well with just cross-sectional data. An earlier version of our approach was ranked 5th (out of 53 entries) in the TADPOLE challenge in 2019. The current approach is ranked 2nd out of 63 entries as of June 3rd, 2020.
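The two "preprocessing" imputation strategies described in the abstract — forward filling and linear filling — can be sketched in a few lines of NumPy. This is an illustration of the general idea, not the TADPOLE entry's actual code; "model filling" is not shown because it requires the trained RNN itself:

```python
import numpy as np

def forward_fill(x):
    """Impute each NaN with the most recent observed value ("forward filling").
    Leading NaNs (with no earlier observation) are left as-is."""
    x = np.asarray(x, dtype=float).copy()
    for t in range(1, len(x)):
        if np.isnan(x[t]):
            x[t] = x[t - 1]
    return x

def linear_fill(x):
    """Impute NaNs by linear interpolation between observed timepoints
    ("linear filling"); values outside the observed range are clamped."""
    x = np.asarray(x, dtype=float).copy()
    idx = np.arange(len(x))
    obs = ~np.isnan(x)
    x[~obs] = np.interp(idx[~obs], idx[obs], x[obs])
    return x
```

On the series [1, NaN, 3, NaN], forward filling yields [1, 1, 3, 3], while linear filling yields [1, 2, 3, 3] (the trailing gap has no later observation to interpolate toward).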
Tertiary lymphoid structures (TLS) identification and density assessment on H&E-stained digital slides of lung cancer
Tertiary lymphoid structures (TLS) are ectopic aggregates of lymphoid cells in inflamed, infected, or tumoral tissues that are easily recognized on an H&E histology slide as discrete entities, distinct from lymphocytes. TLS are associated with improved cancer prognosis, but there is no standardised method available to quantify their presence. Previous studies have used immunohistochemistry to determine the presence of specific cells as a marker of the TLS. This has now been proven to be an underestimate of the true number of TLS. Thus, we propose a methodology for the automated identification and quantification of TLS, based on H&E slides. We subsequently determined the mathematical criteria defining a TLS. TLS regions were identified through a deep convolutional neural network, and segmentation of lymphocytes was performed through an ellipsoidal model. This methodology had a 92.87% specificity at 95% sensitivity, 88.79% specificity at 98% sensitivity and 84.32% specificity at 99% sensitivity based on 144 TLS-annotated H&E slides, implying that the automated approach was able to reproduce the histopathologists' assessment with great accuracy. We showed that the minimum number of lymphocytes within a TLS is 45 and the minimum TLS area is 6,245 μm². Furthermore, we have shown that the density of lymphocytes within TLS is more than 3 times that outside the TLS. The mean density and standard deviation of lymphocytes within a TLS area are 0.0128/μm² and 0.0026/μm² respectively, compared to 0.004/μm² and 0.001/μm² in non-TLS regions. The proposed methodology shows great potential for automated identification and quantification of the TLS density on digital H&E slides.
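The quantitative criteria reported in the abstract (at least 45 lymphocytes, minimum area 6,245 μm², mean intra-TLS density 0.0128/μm²) lend themselves to a simple numerical check. The function names and argument conventions below are hypothetical, for illustration only:

```python
def lymphocyte_density(count, area_um2):
    """Lymphocyte density in cells per square micron."""
    return count / area_um2

def meets_tls_criteria(count, area_um2, min_count=45, min_area_um2=6245.0):
    """Apply the minimum lymphocyte count and minimum area reported in the
    abstract to a candidate lymphocyte aggregate."""
    return count >= min_count and area_um2 >= min_area_um2
```

For instance, a region of 80 lymphocytes over 6,250 μm² has density 0.0128/μm² (the mean intra-TLS value reported) and passes both thresholds; 30 lymphocytes over the same area would not.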
Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI
• Proposes methods for modelling different types of uncertainty that arise in deep learning (DL) applications for image enhancement problems.
• Demonstrates in dMRI super-resolution tasks that modelling uncertainty enhances the safety of DL-based enhancement systems by bringing two categories of practical benefits: (1) "performance improvement", e.g. generalisation to out-of-distribution data and robustness to noise and outliers (Section 4.3); (2) "reliability assessment of prediction", e.g. certification of performance based on uncertainty-thresholding (Section 4.4.1), and detection of unfamiliar structures and understanding of the sources of uncertainty (Section 4.4.2).
• Provides a comprehensive set of experiments on a diverse set of datasets, which vary in demographics, scanner types, acquisition protocols and pathology.
• The methods are in theory applicable to many other imaging modalities and data enhancement applications.
• Code will be available on GitHub.
Deep learning (DL) has shown great potential in medical image enhancement problems, such as super-resolution or image synthesis. However, to date, most existing approaches are based on deterministic models, neglecting the presence of different sources of uncertainty in such problems. Here we introduce methods to characterise different components of uncertainty, and demonstrate the ideas using diffusion MRI super-resolution. Specifically, we propose to account for intrinsic uncertainty through a heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference, and integrate the two to quantify predictive uncertainty over the output image. Moreover, we introduce a method to propagate the predictive uncertainty on a multi-channelled image to derived scalar parameters, and separately quantify the effects of intrinsic and parameter uncertainty therein.
The methods are evaluated for super-resolution of two different signal representations of diffusion MR images—Diffusion Tensor images and Mean Apparent Propagator MRI—and their derived quantities such as mean diffusivity and fractional anisotropy, on multiple datasets of both healthy and pathological human brains. Results highlight three key potential benefits of modelling uncertainty for improving the safety of DL-based image enhancement systems. Firstly, modelling uncertainty improves the predictive performance even when test data departs from training data (“out-of-distribution” datasets). Secondly, the predictive uncertainty highly correlates with reconstruction errors, and is therefore capable of detecting predictive “failures”. Results on both healthy subjects and patients with brain glioma or multiple sclerosis demonstrate that such an uncertainty measure enables subject-specific and voxel-wise risk assessment of the super-resolved images that can be accounted for in subsequent analysis. Thirdly, we show that the method for decomposing predictive uncertainty into its independent sources provides high-level “explanations” for the model performance by separately quantifying how much uncertainty arises from the inherent difficulty of the task or the limited training examples. The introduced concepts of uncertainty modelling extend naturally to many other imaging modalities and data enhancement applications.
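The decomposition of predictive uncertainty into intrinsic and parameter components can be sketched with the standard Monte-Carlo estimator for a heteroscedastic Bayesian network: average the predicted noise variances for the intrinsic part, and take the variance of the predicted means for the parameter part. This is a generic illustration under those assumptions, not the paper's exact estimator:

```python
import numpy as np

def decompose_predictive_uncertainty(mc_means, mc_variances):
    """Split predictive variance per voxel into intrinsic and parameter parts.

    mc_means, mc_variances: arrays of shape (T, ...) holding T Monte-Carlo
    samples (e.g. stochastic forward passes under approximate Bayesian
    inference) of the network's predicted mean and heteroscedastic noise
    variance for each output voxel.
    """
    mc_means = np.asarray(mc_means, dtype=float)
    mc_variances = np.asarray(mc_variances, dtype=float)
    intrinsic = mc_variances.mean(axis=0)   # expected data-noise variance
    parameter = mc_means.var(axis=0)        # spread of means across samples
    return intrinsic, parameter, intrinsic + parameter
```

When all stochastic passes agree on the mean, the parameter term vanishes and only intrinsic uncertainty remains, which is what separates "inherent difficulty of the task" from "limited training examples" in the explanation the abstract describes.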
Probabilistic disease progression modeling to characterize diagnostic uncertainty: Application to staging and prediction in Alzheimer's disease
Disease progression modeling (DPM) of Alzheimer's disease (AD) aims at revealing long-term pathological trajectories from short-term clinical data. Along with the ability to provide a data-driven description of the natural evolution of the pathology, DPM has the potential to represent a valuable clinical instrument for automatic diagnosis, by explicitly describing the biomarker transition from normal to pathological stages along the disease time axis. In this work we reformulated DPM within a probabilistic setting to quantify the diagnostic uncertainty of individual disease severity in a hypothetical clinical scenario, with respect to missing measurements, biomarkers, and follow-up information. We show that the staging provided by the model on 582 amyloid-positive testing individuals has high face validity with respect to the clinical diagnosis. Using follow-up measurements largely reduces the prediction uncertainties, while the transition from normal to pathological stages is mostly associated with increasing brain hypo-metabolism, temporal atrophy, and worsening clinical scores. The proposed formulation of DPM provides a statistical reference for the accurate probabilistic assessment of the pathological stage of de-novo individuals, and represents a valuable instrument for quantifying the variability and the diagnostic value of biomarkers across disease stages.
Identifying multiple sclerosis subtypes using unsupervised machine learning and MRI data
Multiple sclerosis (MS) can be divided into four phenotypes based on clinical evolution. The pathophysiological boundaries of these phenotypes are unclear, limiting treatment stratification. Machine learning can identify groups with similar features using multidimensional data. Here, to classify MS subtypes based on pathological features, we apply unsupervised machine learning to brain MRI scans acquired in previously published studies. We use a training dataset from 6322 MS patients to define MRI-based subtypes and an independent cohort of 3068 patients for validation. Based on the earliest abnormalities, we define MS subtypes as cortex-led, normal-appearing white matter-led, and lesion-led. People with the lesion-led subtype have the highest risk of confirmed disability progression (CDP) and the highest relapse rate. People with the lesion-led MS subtype show positive treatment response in selected clinical trials. Our findings suggest that MRI-based subtypes predict MS disability progression and response to treatment and may be used to define groups of patients in interventional trials. Multiple sclerosis is a heterogeneous progressive disease. Here, the authors use an unsupervised machine learning algorithm to determine multiple sclerosis subtypes, progression, and response to potential therapeutic treatments based on neuroimaging data.
Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank
UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart and body MRI, carotid ultrasound and low-dose bone/fat x-ray. The brain imaging component covers six modalities (T1, T2 FLAIR, susceptibility-weighted MRI, resting-state fMRI, task fMRI and diffusion MRI). Raw and processed data from the first 10,000 imaged subjects have recently been released for general research access. To help convert these data into useful summary information we have developed an automated processing and QC (Quality Control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline.