Search Results

9 results for "Perkonigg, Matthias"
Dynamic memory to alleviate catastrophic forgetting in continual learning with medical imaging
Medical imaging is a central part of clinical diagnosis and treatment guidance. Machine learning has increasingly gained relevance because it captures features of disease and treatment response that are relevant for therapeutic decision-making. In clinical practice, the continuous progress of image acquisition technology or diagnostic procedures, the diversity of scanners, and evolving imaging protocols hamper the utility of machine learning, as prediction accuracy on new data deteriorates, or models become outdated due to these domain shifts. We propose a continual learning approach to deal with such domain shifts occurring at unknown time points. We adapt models to emerging variations in a continuous data stream while counteracting catastrophic forgetting. A dynamic memory enables rehearsal on a subset of diverse training data to mitigate forgetting while enabling models to expand to new domains. The technique balances the memory by detecting pseudo-domains, representing different style clusters within the data stream. Evaluation on two different tasks, cardiac segmentation in magnetic resonance imaging and lung nodule detection in computed tomography, demonstrates a consistent advantage of the method.
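To make the rehearsal idea more tangible, here is a minimal sketch of how a dynamic memory with pseudo-domain balancing could be organized: incoming samples are assigned to the nearest style cluster, the buffer evicts from the most populated cluster to stay balanced, and rehearsal batches are drawn evenly across clusters. The style descriptor (intensity mean and standard deviation), capacity, threshold, and the toy stream are assumptions for illustration, not the published implementation.

```python
import numpy as np
from collections import defaultdict

MEMORY_SIZE = 64             # assumed capacity of the dynamic memory
NEW_DOMAIN_THRESHOLD = 2.0   # assumed style distance that opens a new pseudo-domain

def style_embedding(image):
    """Crude stand-in for a style descriptor (e.g. feature statistics of a network layer)."""
    return np.array([image.mean(), image.std()])

class DynamicMemory:
    def __init__(self):
        self.items = []       # (image, label, embedding, pseudo-domain id)
        self.centroids = []   # one running centroid per pseudo-domain

    def assign_domain(self, emb):
        if not self.centroids:
            self.centroids.append(emb.copy())
            return 0
        dists = [np.linalg.norm(emb - c) for c in self.centroids]
        nearest = int(np.argmin(dists))
        if dists[nearest] > NEW_DOMAIN_THRESHOLD:     # far from every cluster: new pseudo-domain
            self.centroids.append(emb.copy())
            return len(self.centroids) - 1
        self.centroids[nearest] = 0.9 * self.centroids[nearest] + 0.1 * emb
        return nearest

    def insert(self, image, label):
        emb = style_embedding(image)
        self.items.append((image, label, emb, self.assign_domain(emb)))
        if len(self.items) > MEMORY_SIZE:
            # evict the oldest element of the most populated pseudo-domain to keep the memory balanced
            counts = defaultdict(list)
            for i, (_, _, _, d) in enumerate(self.items):
                counts[d].append(i)
            largest = max(counts, key=lambda d: len(counts[d]))
            self.items.pop(counts[largest][0])

    def rehearsal_batch(self, k=8):
        """Draw stored samples, cycling over pseudo-domains so every cluster is represented."""
        pools = defaultdict(list)
        for img, lbl, _, d in self.items:
            pools[d].append((img, lbl))
        pools = list(pools.values())
        batch = []
        while len(batch) < k and any(pools):
            for pool in pools:
                if pool and len(batch) < k:
                    batch.append(pool.pop(np.random.randint(len(pool))))
        return batch

# toy continual stream whose intensity statistics shift halfway through (a simulated domain shift)
memory = DynamicMemory()
for step in range(200):
    shift = 0.0 if step < 100 else 3.0
    image = np.random.randn(32, 32) + shift
    label = float(image.mean() > shift)
    memory.insert(image, label)
    batch = [(image, label)] + memory.rehearsal_batch()
    # a real continual-learning loop would run one training step on `batch` here
print("pseudo-domains detected:", len(memory.centroids))
```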
Correlation of histologic, imaging, and artificial intelligence features in NAFLD patients, derived from Gd-EOB-DTPA-enhanced MRI: a proof-of-concept study
Objective: To compare unsupervised deep clustering (UDC) to fat fraction (FF) and relative liver enhancement (RLE) on Gd-EOB-DTPA-enhanced MRI to distinguish simple steatosis from non-alcoholic steatohepatitis (NASH), using histology as the gold standard.
Materials and methods: A derivation group of 46 non-alcoholic fatty liver disease (NAFLD) patients underwent 3-T MRI. Histology assessed steatosis, inflammation, ballooning, and fibrosis. UDC was trained to group different texture patterns from MR data into 10 distinct clusters per sequence, first on unenhanced T1- and Gd-EOB-DTPA-enhanced T1-weighted hepatobiliary phase (T1-Gd-EOB-DTPA-HBP) images, then on T1 in- and opposed-phase images. RLE and FF were quantified on identical sequences. Differences of these parameters between NASH and simple steatosis were evaluated with χ²- and t-tests, respectively. Linear regression and a Random Forest classifier were used to identify associations between histological NAFLD features, RLE, FF, and UDC patterns, and to determine predictors able to distinguish simple steatosis from NASH. ROC curves assessed the diagnostic performance of UDC, RLE, and FF. Finally, we tested these parameters on a validation cohort of 30 patients.
Results: For the derivation group, UDC-derived features from unenhanced and T1-Gd-EOB-DTPA-HBP images, and from T1 in- and opposed-phase images, distinguished NASH from simple steatosis (p ≤ 0.001 and p = 0.02) with 85% and 80% accuracy, respectively, while RLE and FF distinguished NASH from simple steatosis (p ≤ 0.001 and p = 0.004) with 83% and 78% accuracy, respectively. On multivariate regression analysis, RLE and FF correlated only with fibrosis (p = 0.040) and steatosis (p ≤ 0.001), respectively. Conversely, UDC features, using Random Forest classifier predictors, correlated with all histologic NAFLD components. The validation group confirmed these results for both approaches.
Conclusion: UDC, RLE, and FF could independently separate NASH from simple steatosis. UDC may predict all histologic NAFLD components.
Clinical relevance statement: Using gadoxetic acid–enhanced MR, fat fraction (FF > 5%) can diagnose NAFLD, and relative liver enhancement can distinguish NASH from simple steatosis. Adding AI may let us non-invasively estimate the histologic components, i.e., fat, ballooning, inflammation, and fibrosis, the latter being the main prognosticator.
Key points:
• Unsupervised deep clustering (UDC) and MR-based parameters (FF and RLE) could independently distinguish simple steatosis from NASH in the derivation group.
• On multivariate analysis, RLE could predict only fibrosis, and FF could predict only steatosis; however, UDC could predict all histologic NAFLD components in the derivation group.
• The validation cohort confirmed the findings of the derivation group.
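As a minimal, purely illustrative sketch of the final analysis step, the snippet below fits a Random Forest on hypothetical per-patient cluster-frequency features (the kind of representation UDC produces) and reports a validation ROC AUC. All features, labels, and cohort data are synthetic; only the cohort sizes (46 derivation, 30 validation) follow the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_cohort(n):
    """Synthetic cohort: per-patient frequencies of 10 texture clusters plus a binary label."""
    y = rng.integers(0, 2, n)                   # 0 = simple steatosis, 1 = NASH
    X = rng.dirichlet(np.ones(10), size=n)      # hypothetical UDC cluster frequencies
    X[:, 0] += 0.15 * y                         # make one cluster weakly NASH-associated
    return X / X.sum(axis=1, keepdims=True), y

X_train, y_train = make_cohort(46)              # derivation group
X_valid, y_valid = make_cohort(30)              # validation cohort

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_valid, clf.predict_proba(X_valid)[:, 1])
print(f"validation ROC AUC: {auc:.2f}")
```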
Continual Active Learning for Efficient Adaptation of Machine Learning Models to Changing Image Acquisition
Imaging in clinical routine is subject to changing scanner protocols, hardware, or policies, resulting in a typically heterogeneous set of acquisition settings. The accuracy and reliability of deep learning models suffer from these changes as data and targets become inconsistent with their initial static training set. Continual learning can adapt to a continuous data stream of a changing imaging environment. Here, we propose a method for continual active learning on a data stream of medical images. It recognizes shifts or additions of new imaging sources (domains), adapts training accordingly, and selects optimal examples for labelling. Model training has to cope with a limited labelling budget, resembling typical real-world scenarios. We demonstrate our method on T1-weighted magnetic resonance images from three different scanners with the task of brain age estimation. Results demonstrate that the proposed method outperforms naive active learning while requiring less manual labelling.
Continual Active Learning Using Pseudo-Domains for Limited Labelling Resources and Changing Acquisition Characteristics
Machine learning in medical imaging during clinical routine is impaired by changes in scanner protocols, hardware, or policies, resulting in a heterogeneous set of acquisition settings. When training a deep learning model on an initial static training set, model performance and reliability suffer from changes of acquisition characteristics as data and targets may become inconsistent. Continual learning can help to adapt models to the changing environment by training on a continuous data stream. However, continual manual expert labelling of medical imaging requires substantial effort. Thus, ways to use labelling resources efficiently on a well-chosen subset of new examples are necessary to render this strategy feasible. Here, we propose a method for continual active learning operating on a stream of medical images in a multi-scanner setting. The approach automatically recognizes shifts in image acquisition characteristics (new domains), selects optimal examples for labelling, and adapts training accordingly. Labelling is subject to a limited budget, resembling typical real-world scenarios. To demonstrate generalizability, we evaluate the effectiveness of our method on three tasks: cardiac segmentation, lung nodule detection, and brain age estimation. Results show that the proposed approach outperforms other active learning methods while effectively counteracting catastrophic forgetting.
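A highly simplified sketch of the selection logic such a pipeline might use is shown below: every streamed image receives a cheap appearance descriptor, images that do not fit any known pseudo-domain trigger a labelling request, and requests stop once a fixed annotation budget is spent. The descriptor, threshold, and budget are illustrative assumptions rather than the published method.

```python
import numpy as np

LABEL_BUDGET = 25            # assumed number of expert annotations that may be requested
NEW_DOMAIN_THRESHOLD = 1.5   # assumed appearance distance that signals a new acquisition domain

def appearance_descriptor(image):
    """Cheap surrogate for an appearance embedding (intensity mean and standard deviation)."""
    return np.array([image.mean(), image.std()])

centroids = []               # running centroids of the pseudo-domains seen so far

def should_label(desc):
    """Request a label when the image does not fit any known pseudo-domain."""
    if not centroids:
        centroids.append(desc.copy())
        return True
    dists = [np.linalg.norm(desc - c) for c in centroids]
    nearest = int(np.argmin(dists))
    if dists[nearest] > NEW_DOMAIN_THRESHOLD:
        centroids.append(desc.copy())        # open a new pseudo-domain
        return True
    centroids[nearest] = 0.9 * centroids[nearest] + 0.1 * desc
    return False

# toy stream: a scanner change after 150 images alters the intensity statistics
labelled, budget_used = [], 0
for step in range(300):
    scale, offset = (1.0, 0.0) if step < 150 else (1.8, 2.0)
    image = np.random.randn(64, 64) * scale + offset
    if budget_used < LABEL_BUDGET and should_label(appearance_descriptor(image)):
        labelled.append(step)                # in practice: query an expert label, then fine-tune
        budget_used += 1

print(f"{budget_used} labels requested, at stream positions {labelled}")
```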
Unsupervised deep clustering for predictive texture pattern discovery in medical images
Predictive marker patterns in imaging data are a means to quantify disease and progression, but their identification is challenging if the underlying biology is poorly understood. Here, we present a method to identify predictive texture patterns in medical images in an unsupervised way. Based on deep clustering networks, we simultaneously encode and cluster medical image patches in a low-dimensional latent space. The resulting clusters serve as features for disease staging, linking them to the underlying disease. We evaluate the method on 70 T1-weighted magnetic resonance images of patients with different stages of liver steatosis. The deep clustering approach is able to find predictive clusters with a stable ranking, differentiating between low and high steatosis with an F1-score of 0.78.
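To make the patch-to-cluster-to-feature pipeline concrete, the sketch below replaces the deep clustering network with a plain PCA encoder plus k-means on synthetic data, then turns each patient's cluster occurrence histogram into a feature vector for staging. The patch size, latent dimensionality, cluster count, and data are all assumptions; the point is the overall pipeline, not the published model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

N_CLUSTERS = 10          # assumed number of texture clusters
PATCH, DIM = 16, 8       # assumed patch size and latent dimensionality

def extract_patches(volume, n=200):
    """Randomly sample square patches from a 2D slice (toy stand-in for 3D patch sampling)."""
    h, w = volume.shape
    ys = np.random.randint(0, h - PATCH, n)
    xs = np.random.randint(0, w - PATCH, n)
    return np.stack([volume[y:y + PATCH, x:x + PATCH].ravel() for y, x in zip(ys, xs)])

# synthetic "patients": texture roughness loosely encodes disease stage
rng = np.random.default_rng(0)
patients = [rng.normal(0, 0.5 + 0.1 * (i % 5), size=(128, 128)) for i in range(20)]

# 1) embed all patches in a low-dimensional latent space (PCA as a stand-in for the encoder)
all_patches = np.concatenate([extract_patches(p) for p in patients])
latent = PCA(n_components=DIM).fit(all_patches)

# 2) cluster the latent codes into texture prototypes
kmeans = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit(latent.transform(all_patches))

# 3) per patient: histogram of cluster assignments = feature vector for disease staging
features = []
for p in patients:
    codes = kmeans.predict(latent.transform(extract_patches(p)))
    features.append(np.bincount(codes, minlength=N_CLUSTERS) / len(codes))
features = np.array(features)
print("per-patient cluster histograms:", features.shape)  # (20, 10), ready for a downstream classifier
```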
Pseudo-domains in imaging data improve prediction of future disease status in multi-center studies
In multi-center randomized clinical trials, imaging data can be diverse due to acquisition technology or scanning protocols. Models predicting the future outcome of patients are impaired by this data heterogeneity. Here, we propose a prediction method that can cope with a high number of different scanning sites and a low number of samples per site. We cluster sites into pseudo-domains based on the visual appearance of scans and train pseudo-domain-specific models. Results show that they improve the prediction accuracy for steatosis after 48 weeks from imaging data acquired at an initial visit and a 12-week follow-up in liver disease.
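Under simple assumptions, the general pattern could look like the sketch below: sites are described by coarse appearance statistics of their scans, k-means groups the many sites into a few pseudo-domains, and a lightweight predictor is fitted per pseudo-domain rather than per site. The features, the logistic regression, and the synthetic data are placeholders, not the study's actual model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N_SITES, PER_SITE, N_PSEUDO = 30, 5, 3   # many sites, few samples per site, few pseudo-domains

# synthetic multi-center data: every site has its own intensity offset (scanner appearance)
site_offset = rng.normal(0.0, 2.0, N_SITES)
scans, outcome, site_id = [], [], []
for s in range(N_SITES):
    for _ in range(PER_SITE):
        scan = rng.normal(site_offset[s], 1.0, size=(32, 32))
        scans.append(scan)
        outcome.append(int(scan.std() > 1.0))   # toy "future disease status", loosely tied to texture
        site_id.append(s)
scans, outcome, site_id = np.array(scans), np.array(outcome), np.array(site_id)

# 1) describe every site by the mean appearance statistics of its scans
appearance = np.array([[scans[site_id == s].mean(), scans[site_id == s].std()] for s in range(N_SITES)])

# 2) cluster sites into pseudo-domains, so each pseudo-domain pools enough training samples
site_to_domain = KMeans(n_clusters=N_PSEUDO, n_init=10, random_state=0).fit_predict(appearance)
scan_domain = site_to_domain[site_id]

# 3) fit one simple predictor per pseudo-domain on per-scan intensity features
flat = scans.reshape(len(scans), -1)
features = np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)
models = {d: LogisticRegression().fit(features[scan_domain == d], outcome[scan_domain == d])
          for d in range(N_PSEUDO)}
print({d: round(m.score(features[scan_domain == d], outcome[scan_domain == d]), 2)
       for d, m in models.items()})
```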
CHAOS Challenge - Combined (CT-MR) Healthy Abdominal Organ Segmentation
Segmentation of abdominal organs has been a comprehensive, yet unresolved, research field for many years. In the last decade, intensive developments in deep learning (DL) have introduced new state-of-the-art segmentation systems. In order to expand the knowledge on these topics, the CHAOS - Combined (CT-MR) Healthy Abdominal Organ Segmentation challenge was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2019 in Venice, Italy. CHAOS provides both abdominal CT and MR data from healthy subjects for single and multiple abdominal organ segmentation. Five different but complementary tasks were designed to analyze the capabilities of current approaches from multiple perspectives. The results are investigated thoroughly and compared with manual annotations and interactive methods. The analysis shows that DL models for a single modality (CT / MR) can achieve reliable volumetric analysis performance (DICE: 0.98 ± 0.00 / 0.95 ± 0.01), but the best MSSD performance remains limited (21.89 ± 13.94 / 20.85 ± 10.63 mm). The performance of participating models decreases significantly for cross-modality tasks, both for the liver (DICE: 0.88 ± 0.15, MSSD: 36.33 ± 21.97 mm) and for all organs (DICE: 0.85 ± 0.21, MSSD: 33.17 ± 38.93 mm). Despite contrary examples in other applications, multi-tasking DL models designed to segment all organs seem to perform worse than organ-specific ones (a performance drop of around 5%). Further research in these directions of cross-modality segmentation would significantly support real-world clinical applications. Moreover, with more than 1500 participants, another important contribution of the paper is the analysis of shortcomings of challenge organizations, such as the effects of multiple submissions and the peeking phenomenon.
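For readers less familiar with the reported metrics, the snippet below computes the Dice coefficient and a naive maximum symmetric surface distance between two binary masks on a 2D toy example. The challenge evaluates MSSD on 3D surfaces with voxel spacing, so this is only a didactic approximation, assuming NumPy and SciPy.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A|+|B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def max_surface_distance(a, b, spacing=1.0):
    """Naive maximum symmetric surface distance between the borders of two masks."""
    border_a = a & ~binary_erosion(a)
    border_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~border_b) * spacing   # distance of every pixel to border B
    dist_to_a = distance_transform_edt(~border_a) * spacing   # distance of every pixel to border A
    return max(dist_to_b[border_a].max(), dist_to_a[border_b].max())

# toy example: a ground-truth disk versus a slightly shifted prediction
yy, xx = np.mgrid[0:64, 0:64]
gt = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
pred = (yy - 34) ** 2 + (xx - 30) ** 2 < 15 ** 2
print(f"DICE = {dice(gt, pred):.3f}, max surface distance = {max_surface_distance(gt, pred):.1f} px")
```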
Dynamic memory to alleviate catastrophic forgetting in continuous learning settings
In medical imaging, technical progress or changes in diagnostic procedures lead to a continuous change in image appearance. Scanner manufacturer, reconstruction kernel, dose, other protocol-specific settings, or the administration of contrast agents are examples that influence image content independently of the scanned biology. Such domain and task shifts limit the applicability of machine learning algorithms in the clinical routine by rendering models obsolete over time. Here, we address the problem of data shifts in a continuous learning scenario by adapting a model to unseen variations in the source domain while counteracting catastrophic forgetting effects. Our method uses a dynamic memory to facilitate rehearsal of a diverse training data subset to mitigate forgetting. We evaluated our approach on routine clinical CT data obtained with two different scanner protocols and synthetic classification tasks. Experiments show that dynamic memory counters catastrophic forgetting in a setting with multiple data shifts without requiring explicit knowledge of when these shifts occur.
Asymmetric Cascade Networks for Focal Bone Lesion Prediction in Multiple Myeloma
The reliable and timely stratification of bone lesion evolution risk in smoldering multiple myeloma plays an important role in identifying prime markers of the disease's advance and in improving patient outcomes. In this work, we provide an asymmetric cascade network for the longitudinal prediction of future bone lesions in T1-weighted whole-body MR images. The proposed cascaded architecture, consisting of two distinctly configured U-Nets, first detects the bone regions and subsequently predicts lesions within bones in a patch-based way. The algorithm provides a full volumetric risk score map for the identification of early signatures of emerging lesions and for visualising high-risk locations. The prediction accuracy is evaluated on a longitudinal dataset of 63 multiple myeloma patients.
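The sketch below shows how such a two-stage cascade could be wired at inference time under simple assumptions: a first network produces a bone mask, patches are scored by a second network only where they lie inside bone, and the scores are accumulated into a volumetric risk map. Both "networks" are trivial placeholders, and patch size, stride, and aggregation are illustrative choices, not the paper's architecture.

```python
import numpy as np

PATCH, STRIDE = 16, 8    # assumed patch size and sampling stride

def bone_segmentation_net(volume):
    """Placeholder for the first U-Net: returns a binary bone mask."""
    return volume > np.percentile(volume, 90)

def lesion_risk_net(patch):
    """Placeholder for the second U-Net: returns a scalar lesion risk for a patch."""
    return float(patch.std() / (patch.mean() + 1e-6))

def cascade_risk_map(volume):
    bone = bone_segmentation_net(volume)
    risk = np.zeros_like(volume, dtype=float)
    counts = np.zeros_like(volume, dtype=float)
    h, w = volume.shape
    for y in range(0, h - PATCH + 1, STRIDE):
        for x in range(0, w - PATCH + 1, STRIDE):
            if bone[y:y + PATCH, x:x + PATCH].mean() < 0.5:   # stage 2 only scores patches mostly inside bone
                continue
            score = lesion_risk_net(volume[y:y + PATCH, x:x + PATCH])
            risk[y:y + PATCH, x:x + PATCH] += score
            counts[y:y + PATCH, x:x + PATCH] += 1.0
    # average overlapping patch scores into a volumetric risk map
    return np.divide(risk, counts, out=np.zeros_like(risk), where=counts > 0)

# toy 2D "whole-body slice" with a bright band standing in for bone
volume = np.random.rand(128, 128)
volume[40:60, :] += 2.0
risk_map = cascade_risk_map(volume)
print("voxels with a non-zero risk score:", int((risk_map > 0).sum()))
```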