Catalogue Search | MBRL
2,395 result(s) for "Neuroimaging - standards"
Amygdalar nuclei and hippocampal subfields on MRI: Test-retest reliability of automated volumetry across different MRI sites and vendors
by Richardson, Jill C.; Marizzoni, Moira; Picco, Agnese
in [SDV.IB.IMA]Life Sciences [q-bio]/Bioengineering/Imaging; Adult; Aged
2020
The amygdala and the hippocampus are two limbic structures that play a critical role in cognition and behavior; however, manual segmentation of these structures and of their smaller nuclei/subfields in multicenter datasets is time-consuming and difficult due to the low contrast of standard MRI. Here, we assessed the reliability of the automated segmentation of amygdalar nuclei and hippocampal subfields across sites and vendors using FreeSurfer in two independent cohorts of older and younger healthy adults.
Sixty-five healthy older (cohort 1) and 68 younger subjects (cohort 2), from the PharmaCog and CoRR consortia, underwent repeated 3D-T1 MRI (interval 1–90 days). Segmentation was performed using FreeSurfer v6.0. Reliability was assessed using volume reproducibility error (ε) and spatial overlapping coefficient (DICE) between test and retest session.
Significant MRI site and vendor effects (p < .05) were found in a few subfields/nuclei for the ε, while extensive effects were found for the DICE score of most subfields/nuclei. Reliability was strongly influenced by volume, as ε correlated negatively and DICE correlated positively with volume size of structures (absolute value of Spearman’s r correlations >0.43, p < 1.39E-36). In particular, volumes larger than 200 mm3 (for amygdalar nuclei) and 300 mm3 (for hippocampal subfields, except for molecular layer) had the best test-retest reproducibility (ε < 5% and DICE > 0.80).
Our results support the use of volumetric measures of larger amygdalar nuclei and hippocampal subfields in multisite MRI studies. These measures could be useful for disease tracking and assessment of efficacy in drug trials.
•Differences in MRI site/vendor had a limited effect on volume reproducibility.
•Differences in MRI site/vendor had an extensive effect on spatial accuracy.
•Reliability is good for larger amygdalar and hippocampal structures.
•Automated volumetry is reliable in multicenter MRI studies.
Journal Article
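The two reliability metrics in the abstract above, volume reproducibility error (ε) and the DICE spatial overlap coefficient, can be sketched in a few lines of Python. The formulas below are the commonly used definitions; the paper's exact conventions (e.g. the normalization of ε) are assumed, not quoted:

```python
import numpy as np

def reproducibility_error(v_test, v_retest):
    """Percent volume reproducibility error between test and retest volumes:
    absolute difference normalized by the mean of the two measurements."""
    return 100.0 * abs(v_test - v_retest) / ((v_test + v_retest) / 2.0)

def dice(mask_a, mask_b):
    """DICE overlap between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

For example, test/retest volumes of 210 and 190 mm³ give ε = 10%, double the paper's ε < 5% criterion for well-reproduced structures.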
Multi-site harmonization of 7 tesla MRI neuroimaging protocols
by Rua, Catarina; Asghar, Michael; Clare, Stuart
in 7 tesla; Anatomical; Brain - diagnostic imaging
2020
Increasing numbers of 7 tesla (7 T) magnetic resonance imaging (MRI) scanners are in research and clinical use. 7 T MRI can increase the scanning speed, spatial resolution and contrast-to-noise ratio of many neuroimaging protocols, but technical challenges in implementation have been addressed in a variety of ways across sites. In order to facilitate multi-centre studies and ensure consistency of findings across sites, it is desirable that 7 T MRI sites implement common high-quality neuroimaging protocols that can accommodate different scanner models and software versions.
With the installation of several new 7 T MRI scanners in the United Kingdom, the UK7T Network was established with an aim to create a set of harmonized structural and functional neuroimaging sequences and protocols. The Network currently includes five sites, which use three different scanner platforms, provided by two different vendors.
Here we describe the harmonization of functional and anatomical imaging protocols across the three different scanner models, detailing the necessary changes to pulse sequences and reconstruction methods. The harmonized sequences are fully described, along with implementation details. Example datasets acquired from the same subject on all Network scanners are made available. Based on these data, an evaluation of the harmonization is provided. In addition, the implementation and validation of a common system calibration process is described.
•Harmonised neuroimaging is established on 7 tesla MRI scanners at five sites.
•Efficacy of harmonisation is demonstrated with scans on one subject at all sites.
•Common calibration protocols achieve better standardisation than vendor’s own.
•Protocols and data are available online for all current 7 tesla scanner models.
Journal Article
Denoising scanner effects from multimodal MRI data using linked independent component analysis
by Smith, Stephen M.; Li, Huanjie; Nickerson, Lisa D.
in Adult; Brain - diagnostic imaging; Brain research
2020
Pooling magnetic resonance imaging (MRI) data across research studies, or utilizing shared data from imaging repositories, presents exceptional opportunities to advance and enhance reproducibility of neuroscience research. However, scanner confounds hinder pooling data collected on different scanners or across software and hardware upgrades on the same scanner, even when all acquisition protocols are harmonized. These confounds reduce power and can lead to spurious findings. Unfortunately, methods to address this problem are scant. In this study, we propose a novel denoising approach that implements a data-driven linked independent component analysis (LICA) to identify scanner-related effects for removal from multimodal MRI data. We tested our proposed method using multi-study data collected on a single 3T scanner, pre- and post- software and major hardware upgrades, and with different acquisition parameters. Our proposed denoising method shows a greater reduction of scanner-related variance than standard GLM confound regression or ICA-based single-modality denoising. Although we did not test it here, LICA should prove even better at identifying scanner effects when combining data across different scanners, as between-scanner variability is generally much larger than within-scanner variability. Our method holds great promise for denoising scanner effects in multi-study and large-scale multi-site studies that may be confounded by scanner differences.
Journal Article
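The standard GLM confound regression baseline that LICA is compared against in the abstract above amounts to regressing scanner membership out of each imaging feature. A hypothetical minimal sketch (variable names and the re-centring to the grand mean are illustrative choices, not from the paper):

```python
import numpy as np

def regress_out_scanner(data, scanner_ids):
    """Remove scanner main effects from each feature via least squares.

    data        : (n_subjects, n_features) array of imaging-derived features.
    scanner_ids : length-n_subjects array of scanner labels.
    Returns residuals with every scanner's per-feature mean moved to the
    grand mean, i.e. the scanner offset removed.
    """
    labels = np.unique(scanner_ids)
    # One-hot design matrix of scanner membership
    X = (np.asarray(scanner_ids)[:, None] == labels[None, :]).astype(float)
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    fitted = X @ beta                      # per-scanner mean of each feature
    return data - fitted + data.mean(axis=0)
```

This removes only additive per-scanner offsets; the paper's point is that LICA can also capture multivariate, cross-modal scanner structure that such a regression misses.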
Validation of a combined image derived input function and venous sampling approach for the quantification of [18F]GE-179 PET binding in the brain
2021
Blood-based kinetic analysis of PET data relies on an accurate estimate of the arterial plasma input function (PIF). An alternative to invasive measurements from arterial sampling is an image-derived input function (IDIF). However, an IDIF provides the whole blood radioactivity concentration, rather than the required free tracer radioactivity concentration in plasma. To estimate the tracer PIF, we corrected an IDIF from the carotid artery with estimates of plasma parent fraction (PF) and plasma-to-whole blood (PWB) ratio obtained from five venous samples. We compared the combined IDIF+venous approach to gold standard data from arterial sampling in 10 healthy volunteers undergoing [18F]GE-179 brain PET imaging of the NMDA receptor. Arterial and venous PF and PWB ratio estimates determined from 7 patients with traumatic brain injury (TBI) were also compared to assess the potential effect of medication. There was high agreement between areas under the curves of the estimates of PF (r = 0.99, p<0.001), PWB ratio (r = 0.93, p<0.001), and the PIF (r = 0.92, p<0.001) as well as total distribution volume (VT) in 11 regions across the brain (r = 0.95, p<0.001). IDIF+venous VT had a mean bias of −1.7% and a comparable regional coefficient of variation (arterial: 21.3 ± 2.5%, IDIF+venous: 21.5 ± 2.0%). Simplification of the IDIF+venous method to use only one venous sample provided less accurate VT estimates (mean bias 9.9%; r = 0.71, p<0.001). A version of the method that avoids the need for blood sampling by combining the IDIF with population-based PF and PWB ratio estimates systematically underestimated VT (mean bias −20.9%), and produced VT estimates with a poor correlation to those obtained using arterial data (r = 0.45, p<0.001). Arterial and venous blood data from 7 TBI patients showed high correlations for PF (r = 0.92, p = 0.003) and PWB ratio (r = 0.93, p = 0.003). 
In conclusion, the IDIF+venous method with five venous samples provides a viable alternative to arterial sampling for quantification of [18F]GE-179 VT.
Journal Article
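The correction described above, turning a whole-blood IDIF into a parent-plasma input function using sparse venous estimates of parent fraction (PF) and plasma-to-whole-blood (PWB) ratio, can be sketched as a multiplicative correction with linear interpolation. This is an assumed simplification; the authors may fit parametric models to the venous samples rather than interpolate:

```python
import numpy as np

def plasma_input_function(t, idif, t_ven, pf_ven, pwb_ven):
    """Correct a whole-blood IDIF to a parent-plasma input function.

    t, idif          : IDIF time grid and whole-blood activity concentration.
    t_ven            : times of the (few) venous samples.
    pf_ven, pwb_ven  : parent-fraction and plasma-to-whole-blood ratio
                       measured at those times.
    """
    pf = np.interp(t, t_ven, pf_ven)     # parent fraction at each frame
    pwb = np.interp(t, t_ven, pwb_ven)   # plasma / whole-blood ratio
    return idif * pwb * pf               # assumed multiplicative correction
```

With only one venous sample the interpolated PF and PWB curves collapse to constants, which is consistent with the abstract's finding that the one-sample variant gives less accurate VT estimates.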
A multimodal vision transformer for interpretable fusion of functional and structural neuroimaging data
2024
Multimodal neuroimaging is an emerging field that leverages multiple sources of information to diagnose specific brain disorders, especially when deep learning-based AI algorithms are applied. The successful combination of different brain imaging modalities using deep learning remains a challenging yet crucial research topic. The integration of structural and functional modalities is particularly important for the diagnosis of various brain disorders, where structural information plays a crucial role in diseases such as Alzheimer's, while functional imaging is more critical for disorders such as schizophrenia. However, the combination of functional and structural imaging modalities can provide a more comprehensive diagnosis. In this work, we present MultiViT, a novel diagnostic deep learning model that utilizes vision transformers and cross-attention mechanisms to effectively fuse information from 3D gray matter maps derived from structural MRI with functional network connectivity matrices obtained from functional MRI using the ICA algorithm. MultiViT achieves an AUC of 0.833, outperforming both our unimodal and multimodal baselines, enabling more accurate classification and diagnosis of schizophrenia. In addition, using the vision transformer's attention maps in combination with cross-attention mechanisms and brain function information, we identify critical brain regions in 3D gray matter space associated with the characteristics of schizophrenia. Our research not only significantly improves the accuracy of AI-based automated imaging diagnostics for schizophrenia, but also pioneers a rational and advanced data fusion approach by replacing complex, high-dimensional fMRI information with functional network connectivity, integrating it with representative structural data from 3D gray matter images, and further providing interpretive biomarker localization in a 3D structural space.
The MultiViT model combines structural and functional neuroimaging data for the prediction of schizophrenia and integrates vision transformers with cross‐attention layers in order to preserve mutual information. The pipeline generates highly interpretable cross‐attention‐based brain saliency maps and emphasizes functional network connectivity patterns related to the disorder.
Journal Article
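The cross-attention fusion at the core of MultiViT can be illustrated with a minimal single-head NumPy sketch in which structural tokens query functional-connectivity tokens. This is the generic transformer building block, not the authors' implementation; shapes, names, and the single-head simplification are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, wq, wk, wv):
    """Single-head cross-attention: one modality's tokens (queries) attend
    over the other modality's tokens (keys/values). Returns the fused
    token representations and the attention map, which is what makes the
    fusion interpretable as a saliency pattern."""
    Q, K, V = q_tokens @ wq, kv_tokens @ wk, kv_tokens @ wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product
    attn = softmax(scores, axis=-1)           # (n_q, n_kv), rows sum to 1
    return attn @ V, attn
```

In the paper's setting the attention map is the basis for the brain saliency maps: each structural token's row shows which functional-connectivity tokens it draws on.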
Minimal specifications for non-human primate MRI: Challenges in standardizing and harmonizing data collection
by Fair, Damien A.; Menon, Ravi S.; Glasser, Matthew F.
in Algorithms; Animal cognition; Animals
2021
•Non-human primate MRI standardization.
•Poor reproducibility in non-human primate resting-state fMRI.
•Guidelines enable improved and more reproducible MRI measures.
•Convergence between non-human primate and human neuroimaging strategies.
Recent methodological advances in MRI have enabled substantial growth in neuroimaging studies of non-human primates (NHPs), while open data-sharing through the PRIME-DE initiative has increased the availability of NHP MRI data and the need for robust multi-subject multi-center analyses. Streamlined acquisition and analysis protocols would accelerate and improve these efforts. However, consensus on minimal standards for data acquisition protocols and analysis pipelines for NHP imaging remains to be established, particularly for multi-center studies. Here, we draw parallels between NHP and human neuroimaging and provide minimal guidelines for harmonizing and standardizing data acquisition. We advocate robust translation of widely used open-access toolkits that are well established for analyzing human data. We also encourage the use of validated, automated pre-processing tools for analyzing NHP data sets. These guidelines aim to refine methodological and analytical strategies for small and large-scale NHP neuroimaging data. This will improve reproducibility of results, and accelerate the convergence between NHP and human neuroimaging strategies which will ultimately benefit fundamental and translational brain science.
Journal Article
Strengths and challenges of longitudinal non-human primate neuroimaging
2021
•Strengths and challenges of longitudinal non-human primate MRI are described.
•Statistical power calculations for longitudinal and cross-sectional designs are provided.
•The impact of template choice on grey matter estimation is demonstrated.
•Recommendations for designing and analysing such studies are provided.
Longitudinal non-human primate neuroimaging has the potential to greatly enhance our understanding of primate brain structure and function. Here we describe its specific strengths, compared to both cross-sectional non-human primate neuroimaging and longitudinal human neuroimaging, but also its associated challenges. We elaborate on factors guiding the use of different analytical tools, subject-specific versus age-specific templates for analyses, and issues related to statistical power.
Journal Article
Robust regression for large-scale neuroimaging studies
by Fritsch, Virgile; Poline, Jean-Baptiste; Nees, Frauke
in Behavior; Computer Simulation; Data Interpretation, Statistical
2015
Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts.
While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, using an imaging genetics study with 392 subjects as an example, we show that robust regression yields more detections than standard algorithms. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain–behavior relationships with over 1500 subjects. Finally, we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies.
•Demonstrate benefits of robust regression for the analysis of large neuroimaging cohorts
•Use of an analytic testing framework
•Embed robust regression in more complex analysis methods
•Application to neuroimaging-genetic studies
Journal Article
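Robust regression of the kind advocated above is typically implemented as an M-estimator fitted by iteratively reweighted least squares (IRLS). A minimal Huber IRLS sketch follows; this is the textbook estimator, not necessarily the exact procedure used in the paper:

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Robust linear regression: Huber M-estimator via IRLS.

    Observations with large standardized residuals get weight delta/|u|
    instead of 1, so gross outliers barely influence the fit. The residual
    scale is re-estimated each iteration with the MAD."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = np.abs(r) / scale
        w = np.where(u <= delta, 1.0, delta / np.maximum(u, 1e-12))
        # Weighted least squares step: solve (X' W X) beta = X' W y
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta
```

On data with a few gross outliers, the Huber fit stays close to the true coefficients while ordinary least squares is pulled toward the outliers, which is the mechanism behind the detection gains reported in the abstract.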
Scanning the horizon: towards transparent and reproducible neuroimaging research
2017
Key Points
There is growing concern about the reproducibility of scientific research, and neuroimaging research suffers from many features that are thought to lead to high levels of false results.
Statistical power of neuroimaging studies has increased over time but remains relatively low, especially for group comparison studies. An analysis of effect sizes in the Human Connectome Project demonstrates that most functional MRI studies are not sufficiently powered to find reasonable effect sizes.
Neuroimaging analysis has a high degree of flexibility in analysis methods, which can lead to inflated false-positive rates unless controlled for. Pre-registration of analysis plans and clear delineation of hypothesis-driven and exploratory research are potential solutions to this problem.
The use of appropriate corrections for multiple tests has increased, but some common methods can have highly inflated false-positive rates. The use of non-parametric methods is encouraged to provide accurate correction for multiple tests.
Software errors have the potential to lead to incorrect or irreproducible results. The adoption of improved software engineering methods and software testing strategies can help to reduce such problems.
Reproducibility will be improved through greater transparency in methods reporting and through increased sharing of data and code.
Neuroimaging techniques are increasingly applied by the wider neuroscience community. However, problems such as low statistical power, flexibility in data analysis and software issues pose challenges to interpreting neuroimaging data in a meaningful and reliable way. Here, Poldrack et al. discuss these and other problems, and suggest solutions.
Functional neuroimaging techniques have transformed our ability to probe the neurobiological basis of behaviour and are increasingly being applied by the wider neuroscience community. However, concerns have recently been raised that the conclusions that are drawn from some human neuroimaging studies are either spurious or not generalizable. Problems such as low statistical power, flexibility in data analysis, software errors and a lack of direct replication apply to many fields, but perhaps particularly to functional MRI. Here, we discuss these problems, outline current and suggested best practices, and describe how we think the field should evolve to produce the most meaningful and reliable answers to neuroscientific questions.
Journal Article
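The statistical-power point in the key points above can be made concrete with the standard normal-approximation power formula for a two-sided two-sample t-test. This is a generic textbook calculation, not the analysis performed in the paper:

```python
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test (normal
    approximation) for standardized effect size d, n subjects per group."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = abs(d) * (n_per_group / 2) ** 0.5   # noncentrality parameter
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

def n_for_power(d, target=0.8, alpha=0.05):
    """Smallest per-group n reaching the target power."""
    n = 2
    while two_sample_power(d, n, alpha) < target:
        n += 1
    return n
```

For a medium effect (d = 0.5) this gives roughly 63 subjects per group for 80% power at alpha = 0.05, well above the group sizes of many historical fMRI studies, which is the underpowering the authors highlight.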
Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank
by Zhang, Hui; Miller, Karla L.; Hernandez-Fernandez, Moises
in Alzheimer's disease; Automation; Big data imaging
2018
UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart and body MRI, carotid ultrasound and low-dose bone/fat x-ray. The brain imaging component covers 6 modalities (T1, T2 FLAIR, susceptibility-weighted MRI, resting fMRI, task fMRI and diffusion MRI). Raw and processed data from the first 10,000 imaged subjects have recently been released for general research access. To help convert these data into useful summary information, we have developed an automated processing and QC (Quality Control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline.
Journal Article