Catalogue Search | MBRL
Explore the vast range of titles available.
239 result(s) for "Jenkinson, Mark"
Deep learning-based unlearning of dataset bias for MRI harmonisation and confound removal
by Dinsdale, Nicola K.; Jenkinson, Mark; Namburete, Ana I.L.
in Adaptation; Bias; Brain - physiology
2021
• We demonstrate a flexible deep-learning-based harmonisation framework.
• Applied to age prediction and segmentation tasks in a range of datasets.
• Scanner information is removed, maintaining performance and improving generalisability.
• The framework can be used with any feedforward network architecture.
• It successfully removes additional confounds and works with varied distributions.
Increasingly large MRI neuroimaging datasets are becoming available, including many multi-site, multi-scanner datasets. Combining the data from different scanners is vital for increased statistical power; however, it also increases variance due to non-biological factors such as differences in acquisition protocols and hardware, which can mask signals of interest.
We propose a deep-learning-based training scheme, inspired by domain adaptation techniques, which uses an iterative update approach to create scanner-invariant features while simultaneously maintaining performance on the main task of interest, thus reducing the influence of the scanner on network predictions. We demonstrate the framework for regression, classification and segmentation tasks with two different network architectures.
We show that not only can the framework harmonise many-site datasets but it can also adapt to many data scenarios, including biased datasets and limited training labels. Finally, we show that the framework can be extended for the removal of other known confounds in addition to scanner. The overall framework is therefore flexible and should be applicable to a wide range of neuroimaging studies.
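The iterative update scheme summarised above can be sketched in a few lines of PyTorch. This is a minimal illustration under our own assumptions (toy fully-connected layers standing in for the paper's feedforward networks, three scanners, and a simple confusion loss that pushes the scanner classifier towards a uniform posterior); it is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: sizes and architectures here are placeholders, not the paper's networks.
feature_extractor = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
task_head = nn.Linear(64, 1)        # main task, e.g. age regression
scanner_head = nn.Linear(64, 3)     # auxiliary task: predict which of 3 scanners

opt_main = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(task_head.parameters()), lr=1e-4)
opt_scanner = torch.optim.Adam(scanner_head.parameters(), lr=1e-4)
opt_confusion = torch.optim.Adam(feature_extractor.parameters(), lr=1e-4)

def training_step(x, age, scanner_id, beta=1.0):
    # 1) main-task update: learn features that predict age well
    opt_main.zero_grad()
    task_loss = F.mse_loss(task_head(feature_extractor(x)).squeeze(-1), age)
    task_loss.backward()
    opt_main.step()

    # 2) scanner-classifier update: learn to identify the scanner from frozen features
    opt_scanner.zero_grad()
    with torch.no_grad():
        feats = feature_extractor(x)
    scanner_loss = F.cross_entropy(scanner_head(feats), scanner_id)
    scanner_loss.backward()
    opt_scanner.step()

    # 3) confusion update: push the features towards a uniform scanner posterior,
    #    i.e. "unlearn" scanner information from the representation
    opt_confusion.zero_grad()
    log_probs = F.log_softmax(scanner_head(feature_extractor(x)), dim=1)
    confusion_loss = -beta * log_probs.mean()
    confusion_loss.backward()
    opt_confusion.step()

    return task_loss.item(), scanner_loss.item(), confusion_loss.item()

# Example call with random placeholder data
losses = training_step(torch.randn(8, 128), torch.randn(8), torch.randint(0, 3, (8,)))
```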
Journal Article
Learning patterns of the ageing brain in MRI using deep convolutional networks
2021
• Brain age is estimated using a 3D CNN from 12,802 full T1-weighted images.
• Regions used to drive predictions are different for linearly and nonlinearly registered data.
• Linear registrations utilise a greater diversity of biologically meaningful areas.
• Correlations with IDPs and non-imaging variables are consistent with other publications.
• Excluding subjects with various health conditions had minimal impact on main correlations.
Both normal ageing and neurodegenerative diseases cause morphological changes to the brain. Age-related brain changes are subtle, nonlinear, and spatially and temporally heterogeneous, both within a subject and across a population. Machine learning models are particularly suited to capture these patterns and can produce a model that is sensitive to changes of interest, despite the large variety in healthy brain appearance. In this paper, the power of convolutional neural networks (CNNs) and the rich UK Biobank dataset, the largest database currently available, are harnessed to address the problem of predicting brain age. We developed a 3D CNN architecture to predict chronological age, using a training dataset of 12,802 T1-weighted MRI images and a further 6,885 images for testing. The proposed method shows competitive performance on age prediction, but, most importantly, the CNN prediction errors ΔBrainAge = AgePredicted − AgeTrue correlated significantly with many clinical measurements from the UK Biobank in the female and male groups. In addition, having used images from only one imaging modality in this experiment, we examined the relationship between ΔBrainAge and the image-derived phenotypes (IDPs) from all other imaging modalities in the UK Biobank, showing correlations consistent with known patterns of ageing. Furthermore, we show that the use of nonlinearly registered images to train CNNs can lead to the network being driven by artefacts of the registration process and missing subtle indicators of ageing, limiting the clinical relevance. Due to the longitudinal aspect of the UK Biobank study, in the future it will be possible to explore whether the ΔBrainAge from models such as this network is predictive of any health outcomes.
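As a small worked example of the brain-age delta used above, the following sketch (with placeholder arrays standing in for real predictions and phenotypes) computes ΔBrainAge and its correlation with an image-derived phenotype.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
age_true = rng.uniform(45, 80, size=500)            # placeholder chronological ages
age_pred = age_true + rng.normal(0, 3.5, size=500)  # placeholder CNN predictions

delta_brain_age = age_pred - age_true               # ΔBrainAge = AgePredicted − AgeTrue

# correlate ΔBrainAge with a non-imaging variable or image-derived phenotype (IDP)
idp = rng.normal(size=500)                          # placeholder phenotype column
r, p = pearsonr(delta_brain_age, idp)
print(f"r = {r:.3f}, p = {p:.3g}")
```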
Journal Article
BIANCA (Brain Intensity AbNormality Classification Algorithm): A new tool for automated segmentation of white matter hyperintensities
2016
Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects.
We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, for local spatial intensity averaging, and for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible, so the user can adapt the tool to their protocol and specific needs.
We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a “predominantly neurodegenerative” and a “predominantly vascular” cohort).
BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test, to evaluate the robustness of BIANCA, and compared BIANCA performance against existing methods.
Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies.
• BIANCA is a new tool for automated segmentation of white matter hyperintensities.
• BIANCA is multimodal, flexible, computationally lean, robust, freely available.
• We optimised and validated BIANCA on two different MRI protocols and populations.
• WMH volumes derived with BIANCA showed good correlations with visual ratings and age.
• BIANCA is promising for application in large cross-sectional cohort studies.
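The supervised voxel-wise k-NN classification described in the abstract can be illustrated with the following hypothetical sketch using scikit-learn; the feature choice, spatial weighting and probability threshold are our own illustrative assumptions, and this is not the FSL implementation of BIANCA.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def build_features(flair, t1, coords, spatial_weight=0.5):
    """Per-voxel feature vectors: intensities plus down-weighted spatial coordinates."""
    return np.column_stack([flair, t1, spatial_weight * coords])

rng = np.random.default_rng(0)
n_train = 2000
# training voxels: FLAIR/T1 intensities, coordinates and manual WMH labels (placeholders)
train_X = build_features(rng.normal(size=n_train), rng.normal(size=n_train),
                         rng.uniform(size=(n_train, 3)))
train_y = rng.integers(0, 2, size=n_train)      # 1 = WMH, 0 = normal-appearing tissue

knn = KNeighborsClassifier(n_neighbors=40)
knn.fit(train_X, train_y)

# lesion probability for unseen voxels; threshold to obtain a binary WMH mask
test_X = build_features(rng.normal(size=100), rng.normal(size=100),
                        rng.uniform(size=(100, 3)))
lesion_prob = knn.predict_proba(test_X)[:, 1]
wmh_mask = lesion_prob > 0.9
```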
Journal Article
Quantitative assessment of the susceptibility artefact and its interaction with motion in diffusion MRI
by Zhang, Hui; Drobnjak, Ivana; Graham, Mark S.
in Analysis; Artefacts; Biology and Life Sciences
2017
In this paper we evaluate the three main methods for correcting the susceptibility-induced artefact in diffusion-weighted magnetic-resonance (DW-MR) data, and assess how correction is affected by the susceptibility field's interaction with motion. The susceptibility artefact adversely impacts analysis performed on the data and is typically corrected in post-processing. Correction strategies involve either registration to a structural image, the application of an acquired field-map or the use of additional images acquired with different phase-encoding. Unfortunately, the choice of which method to use is made difficult by the absence of any systematic comparisons of them. In this work we quantitatively evaluate these methods, by extending and employing a recently proposed framework that allows for the simulation of realistic DW-MR datasets with artefacts. Our analysis separately evaluates the ability of methods to correct for geometric distortions and to recover lost information in regions of signal compression. In terms of geometric distortions, we find that registration-based methods offer the poorest correction. Field-mapping techniques are better, but are influenced by noise and partial volume effects, whilst multiple phase-encode methods performed best. We use our simulations to validate a popular surrogate metric of correction quality, the comparison of corrected data acquired with AP and LR phase-encoding, and apply this surrogate to real datasets. Furthermore, we demonstrate that failing to account for the interaction of the susceptibility field with head movement leads to increased errors when analysing DW-MR data. None of the commonly used post-processing methods account for this interaction, and we suggest this may be a valuable area for future methods development.
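The AP/LR surrogate metric mentioned above can be illustrated as follows; the function name and the choice of a root-mean-square summary are our own assumptions, the idea being simply that corrected images acquired with opposite phase-encoding should agree within the brain.

```python
import numpy as np

def pe_agreement_score(corrected_ap, corrected_lr, brain_mask):
    """Root-mean-square difference between corrected AP- and LR-encoded volumes
    inside the brain mask; lower values suggest better distortion correction."""
    ap = corrected_ap[brain_mask]
    lr = corrected_lr[brain_mask]
    return float(np.sqrt(np.mean((ap - lr) ** 2)))
```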
Journal Article
Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank
by Zhang, Hui; Miller, Karla L.; Hernandez-Fernandez, Moises
in Alzheimer's disease; Automation; Big data imaging
2018
UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart and body MRI, carotid ultrasound and low-dose bone/fat x-ray. The brain imaging component covers 6 modalities (T1, T2 FLAIR, susceptibility-weighted MRI, resting-state fMRI, task fMRI and diffusion MRI). Raw and processed data from the first 10,000 imaged subjects has recently been released for general research access. To help convert this data into useful summary information we have developed an automated processing and QC (Quality Control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline.
Journal Article
Classification and characterization of periventricular and deep white matter hyperintensities on MRI: A study in older adults
2018
White matter hyperintensities (WMH) are frequently divided into periventricular (PWMH) and deep (DWMH), and the two classes have been associated with different cognitive, microstructural, and clinical correlates. However, although this distinction is widely used in visual ratings scales, how to best anatomically define the two classes is still disputed. In fact, the methods used to define PWMH and DWMH vary significantly between studies, making results difficult to compare. The purpose of this study was twofold: first, to compare four current criteria used to define PWMH and DWMH in a cohort of healthy older adults (mean age: 69.58 ± 5.33 years) by quantifying possible differences in terms of estimated volumes; second, to explore associations between the two WMH sub-classes with cognition, tissue microstructure and cardiovascular risk factors, analysing the impact of different criteria on the specific associations. Our results suggest that the classification criterion used for the definition of PWMH and DWMH should not be considered a major obstacle for the comparison of different studies. We observed that higher PWMH load is associated with reduced cognitive function, higher mean arterial pressure and age. Higher DWMH load is associated with higher body mass index. PWMH have lower fractional anisotropy than DWMH, which also have more heterogeneous microstructure. These findings support the hypothesis that PWMH and DWMH are different entities and that their distinction can provide useful information about healthy and pathological aging processes.
• Classification criteria for periventricular/deep white matter hyperintensities are compared.
• The definition of PWMH and DWMH is not a major obstacle for study comparison.
• PWMH and DWMH have different functional, microstructural and clinical correlates.
• 10 mm distance rule gave best separation in terms of associations with the tested factors.
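One of the compared criteria, the 10 mm distance rule highlighted above, might be sketched as follows, assuming boolean WMH and ventricle masks and isotropic voxels; this is an illustration written for this listing, not the study's code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def split_wmh(wmh_mask, ventricle_mask, voxel_size_mm=1.0, threshold_mm=10.0):
    """Split a binary WMH mask into periventricular and deep classes by distance
    from the lateral ventricles (isotropic voxels assumed)."""
    # distance (in mm) of every voxel from the nearest ventricle voxel
    dist_mm = distance_transform_edt(~ventricle_mask) * voxel_size_mm
    pwmh = wmh_mask & (dist_mm <= threshold_mm)   # periventricular WMH
    dwmh = wmh_mask & (dist_mm > threshold_mm)    # deep WMH
    return pwmh, dwmh
```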
Journal Article
The minimal preprocessing pipelines for the Human Connectome Project
by Andersson, Jesper L.; Wilson, J. Anthony; Sotiropoulos, Stamatios N.
in Acquisitions & mergers; Algorithms; Brain
2013
The Human Connectome Project (HCP) faces the challenging task of bringing multiple magnetic resonance imaging (MRI) modalities together in a common automated preprocessing framework across a large cohort of subjects. The MRI data acquired by the HCP differ in many ways from data acquired on conventional 3 Tesla scanners and often require newly developed preprocessing methods. We describe the minimal preprocessing pipelines for structural, functional, and diffusion MRI that were developed by the HCP to accomplish many low-level tasks, including spatial artifact/distortion removal, surface generation, cross-modal registration, and alignment to standard space. These pipelines are specially designed to capitalize on the high quality data offered by the HCP. The final standard space makes use of a recently introduced CIFTI file format and the associated grayordinate spatial coordinate system. This allows for combined cortical surface and subcortical volume analyses while reducing the storage and processing requirements for high spatial and temporal resolution data. Here, we provide the minimum image acquisition requirements for the HCP minimal preprocessing pipelines and additional advice for investigators interested in replicating the HCP's acquisition protocols or using these pipelines. Finally, we discuss some potential future improvements to the pipelines.
• Multi-modal preprocessing pipelines for the Human Connectome Project
• Description of CIFTI file format and grayordinate coordinate system
• Combined surface and volume neuroimaging analysis
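As a hedged illustration of the CIFTI grayordinate representation mentioned above, the following snippet loads a dense time series with nibabel; the filename is a placeholder and this is not part of the HCP pipelines themselves.

```python
import nibabel as nib

img = nib.load("subject.dtseries.nii")   # CIFTI-2 dense time series (placeholder filename)
data = img.get_fdata()                   # 2D array: (timepoints, grayordinates)

# The second axis of a dense CIFTI-2 file is a BrainModelAxis describing which
# grayordinates are cortical surface vertices and which are subcortical voxels.
brain_models = img.header.get_axis(1)
print(data.shape, type(brain_models).__name__)
```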
Journal Article
The Human Connectome Project's neuroimaging approach
by Andersson, Jesper L R; Moeller, Steen; Robinson, Emma C
in 59/57; 631/378/2649/1594; 631/378/3917
2016
This paper describes an integrated approach for neuroimaging data acquisition, analysis and sharing. Building on methodological advances from the Human Connectome Project (HCP) and elsewhere, the HCP-style paradigm applies to new and existing data sets that meet core requirements and may accelerate progress in understanding the brain in health and disease.
Noninvasive human neuroimaging has yielded many discoveries about the brain. Numerous methodological advances have also occurred, though inertia has slowed their adoption. This paper presents an integrated approach to data acquisition, analysis and sharing that builds upon recent advances, particularly from the Human Connectome Project (HCP). The 'HCP-style' paradigm has seven core tenets: (i) collect multimodal imaging data from many subjects; (ii) acquire data at high spatial and temporal resolution; (iii) preprocess data to minimize distortions, blurring and temporal artifacts; (iv) represent data using the natural geometry of cortical and subcortical structures; (v) accurately align corresponding brain areas across subjects and studies; (vi) analyze data using neurobiologically accurate brain parcellations; and (vii) share published data via user-friendly databases. We illustrate the HCP-style paradigm using existing HCP data sets and provide guidance for future research. Widespread adoption of this paradigm should accelerate progress in understanding the brain in health and disease.
Journal Article
Temporally-independent functional modes of spontaneous brain activity
by Moeller, Steen; Yacoub, Essa S; Beckmann, Christian F
in Adult; Auditory cortex; Biological Sciences
2012
Resting-state functional magnetic resonance imaging has become a powerful tool for the study of functional networks in the brain. Even "at rest," the brain's different functional networks spontaneously fluctuate in their activity level; each network's spatial extent can therefore be mapped by finding temporal correlations between its different subregions. Current correlation-based approaches measure the average functional connectivity between regions, but this average is less meaningful for regions that are part of multiple networks; one ideally wants a network model that explicitly allows overlap, for example, allowing a region's activity pattern to reflect one network's activity some of the time, and another network's activity at other times. However, even those approaches that do allow overlap have often maximized mutual spatial independence, which may be suboptimal if distinct networks have significant overlap. In this work, we identify functionally distinct networks by virtue of their temporal independence, taking advantage of the additional temporal richness available via improvements in functional magnetic resonance imaging sampling rate. We identify multiple "temporal functional modes," including several that subdivide the default-mode network (and the regions anticorrelated with it) into several functionally distinct, spatially overlapping, networks, each with its own pattern of correlations and anticorrelations. These functionally distinct modes of spontaneous brain activity are, in general, quite different from resting-state networks previously reported, and may have greater biological interpretability.
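The distinction between spatial and temporal independence that underlies these temporal functional modes can be illustrated with scikit-learn's FastICA on a toy matrix; in practice temporal ICA is usually run on a reduced set of node time series rather than raw voxels, so the sizes and variable names below are placeholders only.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
data = rng.normal(size=(1200, 500))   # (timepoints, nodes/voxels), placeholder fMRI data

# Spatial ICA: samples are spatial locations, so the estimated sources are
# spatially independent maps and the mixing matrix holds their time courses.
spatial_ica = FastICA(n_components=20, max_iter=1000, random_state=0)
spatial_maps = spatial_ica.fit_transform(data.T)      # (nodes, components)
spatial_timecourses = spatial_ica.mixing_             # (timepoints, components)

# Temporal ICA: samples are timepoints, so the estimated sources are temporally
# independent time courses ("temporal functional modes"), which may overlap in space.
temporal_ica = FastICA(n_components=20, max_iter=1000, random_state=0)
temporal_modes = temporal_ica.fit_transform(data)     # (timepoints, components)
temporal_maps = temporal_ica.mixing_                  # (nodes, components)
```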
Journal Article
The developing Human Connectome Project (dHCP) automated resting-state functional processing framework for newborn infants
by O'Muircheartaigh, Jonathan; Baxter, Luke; Makropoulos, Antonios
in Automation; Brain - diagnostic imaging; Brain - physiology
2020
• An automated and robust pipeline to minimally pre-process highly confounded neonatal fMRI data.
• Includes integrated dynamic distortion and slice-to-volume motion correction.
• A robust multimodal registration approach which includes custom neonatal templates.
• Incorporates an automated and self-reporting QC framework to quantify data quality and identify issues for further inspection.
• Data analysis of 538 infants imaged at 26–45 weeks post-menstrual age.
The developing Human Connectome Project (dHCP) aims to create a detailed 4-dimensional connectome of early life spanning 20–45 weeks post-menstrual age. This is being achieved through the acquisition of multi-modal MRI data from over 1000 in- and ex-utero subjects combined with the development of optimised pre-processing pipelines. In this paper we present an automated and robust pipeline to minimally pre-process highly confounded neonatal resting-state fMRI data, with low failure rates and high quality assurance. The pipeline has been designed to specifically address the challenges that neonatal data presents, including low and variable contrast and high levels of head motion. We provide a detailed description and evaluation of the pipeline, which includes integrated slice-to-volume motion correction and dynamic susceptibility distortion correction, a robust multimodal registration approach, bespoke ICA-based denoising, and an automated QC framework. We assess these components on a large cohort of dHCP subjects and demonstrate that processing refinements integrated into the pipeline provide a substantial reduction in movement-related distortions, resulting in significant improvements in SNR and the detection of high-quality resting-state networks (RSNs) from neonates.
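As a generic illustration of the kind of per-subject summary an automated, self-reporting QC framework might report, the sketch below computes a median temporal SNR; this is a common fMRI QC metric written for this listing, not the dHCP pipeline's actual QC code, and the array names are placeholders.

```python
import numpy as np

def median_tsnr(fmri_4d, brain_mask):
    """Median voxelwise temporal SNR (temporal mean / temporal std) within the mask."""
    ts = fmri_4d[brain_mask]                       # (n_voxels, n_timepoints)
    tsnr = ts.mean(axis=1) / (ts.std(axis=1) + 1e-6)
    return float(np.median(tsnr))
```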
Journal Article