28 result(s) for "Namburete, Ana I. L."
Subcortical segmentation of the fetal brain in 3D ultrasound using deep learning
Highlights:
• Subcortical segmentation is performed in 3D fetal brain US with a 3D CNN.
• High performance can be achieved using only nine manually annotated US volumes.
• Pre-alignment increases segmentation performance but is not essential.
• Subcortical growth curves during the second trimester of gestation are presented.
• The cerebellar volume trajectories are in line with previous publications.
The quantification of subcortical volume development from 3D fetal ultrasound can provide important diagnostic information during pregnancy monitoring. However, manual segmentation of subcortical structures in ultrasound volumes is time-consuming and challenging due to low soft tissue contrast, speckle and shadowing artifacts. For this reason, we developed a convolutional neural network (CNN) for the automated segmentation of the choroid plexus (CP), lateral posterior ventricle horns (LPVH), cavum septum pellucidum et vergae (CSPV), and cerebellum (CB) from 3D ultrasound. As ground-truth labels are scarce and expensive to obtain, we applied few-shot learning, in which only a small number of manual annotations (n = 9) are used to train a CNN. We compared training a CNN with only a few individually annotated volumes versus many weakly labelled volumes obtained from atlas-based segmentations. This showed that segmentation performance close to intra-observer variability can be obtained with only a handful of manual annotations. Finally, the trained models were applied to a large number (n = 278) of ultrasound image volumes of a diverse, healthy population, obtaining novel US-specific growth curves of the respective structures during the second trimester of gestation.
Computational methods for quantifying in vivo muscle fascicle curvature from ultrasound images
Muscle fascicles curve during contraction, and this has been seen using B-mode ultrasound. Curvature can vary along a fascicle, and amongst the fascicles within a muscle. The purpose of this study was to develop an automated method for quantifying curvature across the entirety of an imaged muscle, to test the accuracy of the method against synthetic images of known curvature and noise, and to test the sensitivity of the method to ultrasound probe placement. Both synthetic and ultrasound images were processed using multiscale vessel enhancement filtering to accentuate the muscle fascicles, wavelet-based methods were used to quantify fascicle orientations and curvature distribution grids were produced by quantifying local curvatures for each point within the image. Ultrasound images of ramped isometric contractions of the human medial gastrocnemius were acquired in a test–retest study. The methods enabled distinct curvatures to be determined in different regions of the muscle. The methods were sensitive to kernel sizes during image processing, noise within the image and the variability of probe placements during retesting. Across the physiological range of curvatures and noise, curvatures calculated from validation grids were quantified with a typical standard error of less than 0.026 m⁻¹, and this is about 1% of the maximum curvatures observed in fascicles of contracting muscle.
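The local-curvature quantity reported above (in m⁻¹) can be illustrated directly from sampled fascicle coordinates. This is a minimal sketch of the standard plane-curve curvature formula applied to a synthetic arc, not the authors' vessel-enhancement and wavelet pipeline; the function name and sampling are assumptions.

```python
import numpy as np

def local_curvature(x, y):
    """Local curvature (1/m) of a sampled plane curve.

    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), with derivatives
    approximated by second-order central finite differences.
    """
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Synthetic "fascicle": an arc of a circle of radius 2 m, so the true
# curvature everywhere is 1/2 = 0.5 1/m.
t = np.linspace(0.0, 0.5, 200)
kappa = local_curvature(2.0 * np.cos(t), 2.0 * np.sin(t))
print(round(float(kappa[100]), 3))  # ≈ 0.5 away from the endpoints
```

Because this curvature formula is invariant to how the curve is parametrised, the index spacing used by `np.gradient` drops out, which makes the check against a known-radius circle straightforward.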
BEAN: Brain Extraction and Alignment Network for 3D Fetal Neurosonography
Highlights:
• Automated extraction and alignment of the fetal brain from 3D ultrasound scans.
• Fast, age-independent, and consistent performance for both tasks.
• Extremely flexible, requires minimal pre-processing.
• Modular design allows BEAN to be adapted for different applications.
Brain extraction (masking of extra-cerebral tissues) and alignment are fundamental first steps of most neuroimage analysis pipelines. The lack of automated solutions for 3D ultrasound (US) has therefore limited its potential as a neuroimaging modality for studying fetal brain development using routinely acquired scans. In this work, we propose a convolutional neural network (CNN) that accurately and consistently aligns and extracts the fetal brain from minimally pre-processed 3D US scans. Our multi-task CNN, Brain Extraction and Alignment Network (BEAN), consists of two independent branches: 1) a fully-convolutional encoder-decoder branch for brain extraction of unaligned scans, and 2) a two-step regression-based branch for similarity alignment of the brain to a common coordinate space. BEAN was tested on 356 fetal head 3D scans spanning the gestational range of 14 to 30 weeks, significantly outperforming all current alternatives for fetal brain extraction and alignment. BEAN achieved state-of-the-art performance for both tasks, with a mean Dice Similarity Coefficient (DSC) of 0.94 for the brain extraction masks, and a mean DSC of 0.93 for the alignment of the target brain masks. The presented experimental results show that brain structures such as the thalamus, choroid plexus, cavum septum pellucidum, and Sylvian fissure, are consistently aligned throughout the dataset and remain clearly visible when the scans are averaged together. The BEAN implementation and related code can be found under www.github.com/felipemoser/kelluwen.
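The Dice Similarity Coefficient (DSC) reported above has a compact definition worth making explicit. Below is a minimal sketch of DSC on binary masks, assuming NumPy arrays; the toy masks are illustrative and not BEAN outputs.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks; 1.0 is perfect overlap."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 6x6 squares offset by one voxel: |A| = |B| = 36, |A ∩ B| = 25.
a = np.zeros((10, 10), dtype=int); a[2:8, 2:8] = 1
b = np.zeros((10, 10), dtype=int); b[3:9, 3:9] = 1
print(round(dice_coefficient(a, b), 3))  # 2*25/72 ≈ 0.694
```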
Deep learning-based unlearning of dataset bias for MRI harmonisation and confound removal
Highlights:
• We demonstrate a flexible deep-learning-based harmonisation framework.
• Applied to age prediction and segmentation tasks in a range of datasets.
• Scanner information is removed, maintaining performance and improving generalisability.
• The framework can be used with any feedforward network architecture.
• It successfully removes additional confounds and works with varied distributions.
Increasingly large MRI neuroimaging datasets are becoming available, including many highly multi-site multi-scanner datasets. Combining the data from the different scanners is vital for increased statistical power; however, this leads to an increase in variance due to nonbiological factors such as the differences in acquisition protocols and hardware, which can mask signals of interest. We propose a deep-learning-based training scheme, inspired by domain adaptation techniques, which uses an iterative update approach to aim to create scanner-invariant features while simultaneously maintaining performance on the main task of interest, thus reducing the influence of scanner on network predictions. We demonstrate the framework for regression, classification and segmentation tasks with two different network architectures. We show that not only can the framework harmonise many-site datasets but it can also adapt to many data scenarios, including biased datasets and limited training labels. Finally, we show that the framework can be extended for the removal of other known confounds in addition to scanner. The overall framework is therefore flexible and should be applicable to a wide range of neuroimaging studies.
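The "unlearning" idea above hinges on making a scanner classifier maximally uncertain about which scanner produced the features. One common formulation of this objective, sketched here as an assumption rather than the paper's exact loss, penalises deviation of the classifier's softmax output from a uniform distribution over scanners:

```python
import numpy as np

def confusion_loss(scanner_probs):
    """Cross-entropy between a uniform target over k scanners and the
    classifier's softmax output; it is minimised when predictions are
    uniform, i.e. when the features carry no scanner information."""
    k = scanner_probs.shape[-1]
    return -np.mean(np.sum(np.log(scanner_probs + 1e-12), axis=-1)) / k

uniform = np.full((4, 3), 1.0 / 3.0)            # totally "confused" classifier
confident = np.array([[0.98, 0.01, 0.01]] * 4)  # scanner is easily identified
print(confusion_loss(uniform) < confusion_loss(confident))  # True
```

In an iterative scheme of this kind, updates that minimise a loss like this one alternate with updates for the main task, so the shared features lose scanner identity without sacrificing task performance.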
Regional variations in fascicle curvatures within a muscle belly change during contraction
During muscle contraction, the fascicles curve in response to changes in internal pressures within the muscle. Muscle modelling studies have predicted that fascicles curve to different extents in different regions of the muscle and, as such, curvature is expected to vary along and across the muscle belly. In the present study, the local variations in fascicle curvature within the muscle belly were investigated for a range of contractile conditions. B-mode ultrasound scans of the medial and lateral gastrocnemii muscles were collected at five ankle positions—ranging from dorsiflexion to plantarflexion. An automated algorithm was applied to the images in order to extract the local curvatures from the muscle belly regions. Significant variations in fascicle curvature were seen in the superficial-to-deep direction. Curvatures were positive in the superficial layer, negative in the deep layer, and had intermediate values close to zero in the central muscle region. This is indicative of the fascicles following an S-shaped trajectory across the muscle image. The relation between external pressure and curvature regionalization was also investigated by applying elastic compression bandages on the calf. The application of pressure was associated with greater negative curvatures in the distal and central regions of the middle layer, but appeared to have little effect on the superficial and deep layers. The results from this study showed that (1) fascicle curvature increases with contraction level, (2) there is curvature regionalization within the muscle belly, (3) curvature increases with pressure, and (4) fascicles follow an S-shaped trajectory across the muscle images.
Learning patterns of the ageing brain in MRI using deep convolutional networks
Highlights:
• Brain age is estimated using a 3D CNN from 12,802 full T1-weighted images.
• Regions used to drive predictions are different for linearly and nonlinearly registered data.
• Linear registrations utilise a greater diversity of biologically meaningful areas.
• Correlations with IDPs and non-imaging variables are consistent with other publications.
• Excluding subjects with various health conditions had minimal impact on main correlations.
Both normal ageing and neurodegenerative diseases cause morphological changes to the brain. Age-related brain changes are subtle, nonlinear, and spatially and temporally heterogeneous, both within a subject and across a population. Machine learning models are particularly suited to capture these patterns and can produce a model that is sensitive to changes of interest, despite the large variety in healthy brain appearance. In this paper, the power of convolutional neural networks (CNNs) and the rich UK Biobank dataset, the largest database currently available, are harnessed to address the problem of predicting brain age. We developed a 3D CNN architecture to predict chronological age, using a training dataset of 12,802 T1-weighted MRI images and a further 6,885 images for testing. The proposed method shows competitive performance on age prediction, but, most importantly, the CNN prediction errors ΔBrainAge = Age_Predicted − Age_True correlated significantly with many clinical measurements from the UK Biobank in the female and male groups. In addition, having used images from only one imaging modality in this experiment, we examined the relationship between ΔBrainAge and the image-derived phenotypes (IDPs) from all other imaging modalities in the UK Biobank, showing correlations consistent with known patterns of ageing. Furthermore, we show that the use of nonlinearly registered images to train CNNs can lead to the network being driven by artefacts of the registration process and missing subtle indicators of ageing, limiting the clinical relevance. Due to the longitudinal aspect of the UK Biobank study, in the future it will be possible to explore whether the ΔBrainAge from models such as this network is predictive of any health outcomes.
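The quantity driving the analysis above is simply the signed prediction error. A minimal sketch with simulated ages and a hypothetical phenotype standing in for the UK Biobank measurements (all numbers here are simulated, not study data):

```python
import numpy as np

rng = np.random.default_rng(0)
age_true = rng.uniform(45.0, 80.0, 500)          # chronological ages (years)
age_pred = age_true + rng.normal(0.0, 3.0, 500)  # simulated CNN predictions
delta = age_pred - age_true                      # ΔBrainAge = predicted - true

# A hypothetical measurement partly driven by ΔBrainAge (pure simulation).
measure = 0.5 * delta + rng.normal(0.0, 2.0, 500)
r = np.corrcoef(delta, measure)[0, 1]
print(r > 0.3)  # a clear positive association in this simulation
```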
The impact of transfer learning on 3D deep learning convolutional neural network segmentation of the hippocampus in mild cognitive impairment and Alzheimer disease subjects
Research on segmentation of the hippocampus in magnetic resonance images through deep learning convolutional neural networks (CNNs) shows promising results, suggesting that these methods can identify small structural abnormalities of the hippocampus, which are among the earliest and most frequent brain changes associated with Alzheimer disease (AD). However, CNNs typically achieve the highest accuracy on datasets acquired from the same domain as the training dataset. Transfer learning allows domain adaptation through further training on a limited dataset. In this study, we applied transfer learning on a network called spatial warping network segmentation (SWANS), developed and trained in a previous study. We used MR images of patients with clinical diagnoses of mild cognitive impairment (MCI) and AD, segmented by two different raters. By using transfer learning techniques, we developed four new models, using different training methods. Testing was performed using 26% of the original dataset, which was excluded from training as a hold‐out test set. In addition, 10% of the overall training dataset was used as a hold‐out validation set. Results showed that all the new models achieved better hippocampal segmentation quality than the baseline SWANS model (ps < .001), with high similarity to the manual segmentations (mean Dice [best model] = 0.878 ± 0.003). The best model was chosen based on visual assessment and volume percentage error (VPE). The increased precision in estimating hippocampal volumes allows the detection of small hippocampal abnormalities already present in the MCI phase (SD = [3.9 ± 0.6]%), which may be crucial for early diagnosis. In this study, we used a transfer learning technique for the segmentation of the hippocampus, considering three datasets of patients with a clinical diagnosis of mild cognitive impairment and Alzheimer disease, scanned with different protocols. We started from a previously developed deep learning algorithm, trained with a different dataset, and quantified the benefits of transfer learning, both numerically and visually, using manual segmentations from two raters as a gold standard.
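The volume percentage error (VPE) used for model selection above is straightforward to state. A minimal sketch with hypothetical hippocampal volumes (the numbers are illustrative, not from the study):

```python
def volume_percentage_error(vol_pred, vol_ref):
    """VPE (%) = 100 * |V_predicted - V_reference| / V_reference."""
    return 100.0 * abs(vol_pred - vol_ref) / vol_ref

# Hypothetical hippocampal volumes in mm^3.
print(volume_percentage_error(3120.0, 3000.0))  # 4.0
```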
Cross‐Modality Comparison of Fetal Brain Phenotypes: Insights From Short‐Interval Second‐Trimester MRI and Ultrasound Imaging
Advances in fetal three‐dimensional (3D) ultrasound (US) and magnetic resonance imaging (MRI) have revolutionized the study of fetal brain development, enabling detailed analysis of brain structures and growth. Despite their complementary capabilities, these modalities capture fundamentally different physical signals, potentially leading to systematic differences in image‐derived phenotypes (IDPs). Here, we evaluate the agreement of IDPs between US and MRI by comparing the volumes of eight brain structures from 90 subjects derived using deep‐learning algorithms from majority same‐day imaging (days between scans: mean = 1.2, mode = 0 and max = 4). Excellent agreement (intra‐class correlation coefficient, ICC > 0.75) was observed for the cerebellum, cavum septum pellucidum, thalamus, white matter and deep grey matter volumes, with significant correlations (p < 0.001) for most structures, except the ventricular system. Bland–Altman analysis revealed some systematic biases: intracranial and cortical plate volumes were larger on US than MRI, by an average of 35 cm³ and 4.1 cm³, respectively. Finally, we found the labels of the brainstem and ventricular system were not comparable between the modalities. These findings highlight the necessity of structure‐specific adjustments when interpreting fetal brain IDPs across modalities and underscore the complementary roles of US and MRI in advancing fetal neuroimaging. This study investigates the agreement of image‐derived phenotypes (IDPs) from eight fetal brain structures derived from same‐day MRI and 3D US volumes. Strong agreement was observed for the CSP, Th, CB, WMDGM, whereas systematic biases were revealed for ICV and CoP.
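The Bland–Altman analysis mentioned above reduces to the mean paired difference (the bias) and its 95% limits of agreement. A minimal sketch with hypothetical paired volumes, chosen to mimic US reading systematically larger than MRI (not the study's data):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical intracranial volumes (cm^3) from paired scans.
us  = np.array([400.0, 420.0, 450.0, 470.0, 500.0])
mri = np.array([368.0, 380.0, 418.0, 432.0, 462.0])
bias, lo, hi = bland_altman(us, mri)
print(bias)  # mean US-minus-MRI difference: 36.0
```

A positive bias with limits of agreement excluding zero is the pattern that indicates a systematic offset between the two modalities rather than random disagreement.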
Normative spatiotemporal fetal brain maturation with satisfactory development at 2 years
Maturation of the human fetal brain should follow precisely scheduled structural growth and folding of the cerebral cortex for optimal postnatal function [1]. We present a normative digital atlas of fetal brain maturation based on a prospective international cohort of healthy pregnant women [2], selected using World Health Organization recommendations for growth standards [3]. Their fetuses were accurately dated in the first trimester, with satisfactory growth and neurodevelopment from early pregnancy to 2 years of age [4,5]. The atlas was produced using 1,059 optimal quality, three-dimensional ultrasound brain volumes from 899 of the fetuses and an automated analysis pipeline [6–8]. The atlas corresponds structurally to published magnetic resonance images [9], but with finer anatomical details in deep grey matter. The between-study site variability represented less than 8.0% of the total variance of all brain measures, supporting pooling data from the eight study sites to produce patterns of normative maturation. We have thereby generated an average representation of each cerebral hemisphere between 14 and 31 weeks’ gestation with quantification of intracranial volume variability and growth patterns. Emergent asymmetries were detectable from as early as 14 weeks, with peak asymmetries in regions associated with language development and functional lateralization between 20 and 26 weeks’ gestation. These patterns were validated in 1,487 three-dimensional brain volumes from 1,295 different fetuses in the same cohort. We provide a unique spatiotemporal benchmark of fetal brain maturation from a large cohort with normative postnatal growth and neurodevelopment. A normative digital atlas of fetal brain maturation produced using 1,059 optimal quality, three-dimensional ultrasound brain volumes from 899 fetuses presents a unique spatiotemporal benchmark from a large cohort with normative postnatal growth and neurodevelopment at 2 years of age.
Development of an E-Learning Platform for EMTs in Ghana
Introduction: The continuous development of the knowledge and skills of emergency medical technicians (EMTs) in Ghana is important for the success of the pre-hospital system. Due to distance and time constraints, an online e-learning platform is a good way to educate EMTs in Ghana.
Aim: The study looked at the feasibility of developing a distance learning module for the training and continuing medical education of EMTs.
Methods: EMTs in the Ashanti Region were randomly selected to be part of the study. They received online lectures and notes that were accessible on their mobile phones. They all received a test at the end of each module. The study measured their willingness to participate, the average attendance for each module, and the scores for each module test. The study also measured the overall feasibility of the distance learning program.
Results: The study developed a training course comprising 7 modules: trauma and surgical emergencies, obstetric emergencies, pediatric emergencies, disaster management, medical emergencies, basic ultrasound, and medical research. Tests and quizzes were electronically sent to EMTs over the course of the research period, with an average test score of 70.14% (low: 35%, high: 95%) for the cohort. Feedback from participants showed gains in knowledge and skill delivery. The average attendance across all modules was 56.6%, ranging from 47.37% to 63.16%. Challenges for attendance included internet access, heavy duties, and other personal reasons. The post-training interview showed 100% willingness to participate in future online programs, with the most common reasons being low cost, ease of attendance, and reduced expense.
Discussion: The study concluded that online distance learning modules can be used in Ghana for the training and continuing medical education of EMTs. It is an easy and cost-effective model compared to a face-to-face model.