16 results for "Wetzer, Elisabeth"
Cross-modality sub-image retrieval using contrastive multimodal image representations
In tissue characterization and cancer diagnostics, multimodal imaging has emerged as a powerful technique. Thanks to computational advances, large datasets can be exploited to discover patterns in pathologies and improve diagnosis. However, this requires efficient and scalable image retrieval methods. Cross-modality image retrieval is particularly challenging, since images of similar (or even the same) content captured by different modalities might share few common structures. We propose a new application-independent content-based image retrieval (CBIR) system for reverse (sub-)image search across modalities, which combines deep learning to generate representations (embedding the different modalities in a common space) with robust feature extraction and bag-of-words models for efficient and reliable retrieval. We illustrate its advantages through a replacement study, exploring a number of feature extractors and learned representations, as well as through comparison to recent (cross-modality) CBIR methods. For the task of (sub-)image retrieval on a (publicly available) dataset of brightfield and second harmonic generation microscopy images, the results show that our approach is superior to all tested alternatives. We discuss the shortcomings of the compared methods and observe the importance of equivariance and invariance properties of the learned representations and feature extractors in the CBIR pipeline. Code is available at: https://github.com/MIDA-group/CrossModal_ImgRetrieval .
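As a rough illustration of the retrieval recipe outlined in this abstract (common-space representations, local feature extraction, and bag-of-words matching), the sketch below builds a visual vocabulary over patch descriptors of already-embedded images and ranks a database by cosine similarity. The toy data, patch size, vocabulary size, and similarity measure are assumptions for illustration only; the authors' actual pipeline is in the linked repository.

    # Minimal bag-of-words retrieval sketch over learned common-space representations.
    # The random arrays below stand in for encoder outputs of the two modalities;
    # patch size, vocabulary size, and similarity measure are illustrative choices,
    # not the authors' exact pipeline.
    import numpy as np
    from sklearn.cluster import KMeans

    def dense_patch_descriptors(rep, patch=8, stride=8):
        """Cut an (H, W, C) representation into flattened patch descriptors."""
        H, W, C = rep.shape
        descs = [rep[y:y+patch, x:x+patch].ravel()
                 for y in range(0, H - patch + 1, stride)
                 for x in range(0, W - patch + 1, stride)]
        return np.asarray(descs)

    def bow_histogram(descs, vocab):
        """Assign each descriptor to its nearest visual word and L1-normalise counts."""
        words = vocab.predict(descs)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
        return hist / max(hist.sum(), 1e-8)

    # toy "embedded" images standing in for encoder outputs of the two modalities
    rng = np.random.default_rng(0)
    database = [rng.normal(size=(64, 64, 4)) for _ in range(20)]   # modality A, embedded
    query    = rng.normal(size=(64, 64, 4))                        # modality B, embedded

    vocab = KMeans(n_clusters=32, n_init=4, random_state=0).fit(
        np.vstack([dense_patch_descriptors(r) for r in database]))

    db_hists = np.stack([bow_histogram(dense_patch_descriptors(r), vocab) for r in database])
    q_hist = bow_histogram(dense_patch_descriptors(query), vocab)

    # cosine-similarity ranking: most similar database image first
    scores = db_hists @ q_hist / (np.linalg.norm(db_hists, axis=1) * np.linalg.norm(q_hist) + 1e-8)
    print("ranked database indices:", np.argsort(-scores)[:5])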
Knowledge and attitudes regarding pressure injuries among assistant nurses in a clinical context
This study aimed to evaluate assistant nurses' knowledge of and attitudes towards pressure injuries in a clinical setting. It employed a cross‐sectional design, using two validated surveys: PUKAT 2.0 and APUP, alongside open‐ended questions. A convenience sample of 88 assistant nurses from five wards across two departments at a 600‐bed university hospital in Sweden participated. Participants answered the questionnaire and open‐ended questions, followed by a learning seminar led by the study leader covering PUKAT 2.0 knowledge questions. The seminar ended with an evaluation of this training approach. Results revealed a significant knowledge gap in pressure injury prevention among assistant nurses, with a mean PUKAT 2.0 knowledge score of 33.8 and a standard deviation of ±11.7 (a score of 60 is deemed satisfactory). Only 3.4% (n = 3) of participants achieved a satisfactory knowledge score. However, attitudes towards pressure injury prevention, assessed by the APUP tool, were generally positive among the majority of the participants. Open‐ended questions and evaluations of the seminar showed assistant nurses' desire for pressure injury prevention training and their appreciation for the seminar format. Further studies need to evaluate recurrent training procedures and departmental strategies aimed at reducing the knowledge gap among healthcare staff.
Representation Learning and Information Fusion: Applications in Biomedical Image Processing
In recent years, Machine Learning and in particular Deep Learning have excelled at object recognition and classification tasks in computer vision. Because these methods learn, directly from the data, the features that are relevant for a particular task, a key ingredient of this success is the amount of data on which they are trained. Biomedical applications face the problem that training data are limited; in particular, labels and annotations are usually scarce and expensive to obtain, as they require biological or medical expertise. One way to overcome this issue is to use additional knowledge about the data at hand. Such guidance can come from expert knowledge, which directs attention to specific, relevant characteristics in the images, or from geometric priors, which exploit spatial relationships in the images. This thesis presents machine learning methods for visual data that exploit such additional information and build upon classic image processing techniques, combining the strengths of model- and learning-based approaches. The thesis comprises five papers with applications in digital pathology. Two of them study the use and fusion of texture features within convolutional neural networks for image classification tasks. The other three study rotation-equivariant representation learning and show that learned, shared representations of multimodal images can be used for multimodal image registration and cross-modality image retrieval.
A robust and versatile deep learning model for prediction of the arterial input function in dynamic small animal [18F]FDG PET imaging
Background: Dynamic positron emission tomography (PET) and kinetic modeling are pivotal in advancing tracer development research in small animal studies. Accurate kinetic modeling requires precise input function estimation, traditionally achieved through arterial blood sampling. However, arterial cannulation in small animals, such as mice, involves intricate, time-consuming, and terminal procedures, precluding longitudinal studies. This work proposes a non-invasive, fully convolutional deep learning-based approach (FC-DLIF) to predict input functions directly from PET imaging data, which may eliminate the need for arterial blood sampling in dynamic small-animal PET imaging. The proposed FC-DLIF model consists of a spatial feature extractor that acts on the volumetric time frames of the dynamic PET imaging sequence, extracting spatial features. These are subsequently processed further by a temporal feature extractor that predicts the arterial input function. The proposed approach is trained and evaluated with cross validation on images and arterial blood curves from [18F]FDG data. Further, the model's applicability is evaluated on imaging data and arterial blood curves collected using two additional radiotracers ([18F]FDOPA and [68Ga]PSMA). The model was also evaluated on data truncated and shifted in time, to simulate shorter and shifted PET scans. Results: The proposed FC-DLIF model reliably predicts the arterial input function with respect to mean squared error and correlation. Furthermore, the FC-DLIF model is able to predict the arterial input function even from truncated and shifted samples. The model fails to predict the AIF from samples collected using different radiotracers, as these are not represented in the training data. Conclusion: Our deep learning-based input function offers a non-invasive and reliable alternative to arterial blood sampling, proving robust and flexible to temporal shifts and different scan durations.
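A minimal sketch of the two-stage idea described in this abstract (a convolutional spatial encoder applied to each time frame, followed by a temporal network that outputs one input-function value per frame), assuming illustrative layer sizes and channel counts rather than the published FC-DLIF configuration:

    # Two-stage, fully convolutional AIF predictor in the spirit of the abstract.
    # All architectural choices below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class SpatialEncoder(nn.Module):
        """Reduces one volumetric PET frame to a feature vector."""
        def __init__(self, feat=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, feat, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),               # -> (B, feat, 1, 1, 1)
            )

        def forward(self, x):                          # x: (B, 1, D, H, W)
            return self.net(x).flatten(1)              # (B, feat)

    class TemporalExtractor(nn.Module):
        """Maps the sequence of per-frame features to one AIF value per frame."""
        def __init__(self, feat=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(feat, 64, 5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 1, 1),
            )

        def forward(self, f):                          # f: (B, T, feat)
            return self.net(f.transpose(1, 2)).squeeze(1)   # (B, T)

    class AIFPredictor(nn.Module):
        def __init__(self, feat=32):
            super().__init__()
            self.spatial, self.temporal = SpatialEncoder(feat), TemporalExtractor(feat)

        def forward(self, frames):                     # frames: (B, T, D, H, W)
            B, T = frames.shape[:2]
            f = self.spatial(frames.reshape(B * T, 1, *frames.shape[2:]))
            return self.temporal(f.reshape(B, T, -1))

    aif = AIFPredictor()(torch.randn(2, 24, 32, 32, 32))   # 24 time frames per scan
    print(aif.shape)                                        # torch.Size([2, 24])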
Facilitating Ultrastructural Pathology through Automated Imaging and Analysis
Transmission electron microscopy (TEM) is an important diagnostic tool for analyzing human tissue at the nm scale. It is the only option, or gold standard, for diagnosing several disorders, e.g. ciliary and renal diseases and rare cancers. However, conventional TEM microscopes are highly manual and technically complex, and a special environment is required to house the bulky, sensitive machines. Interpretation is subjective, time-consuming, and relies on a high level of expertise which, unfortunately, is rare for this specialty within pathology. Here, we present methods and results from an ongoing project whose goal is to develop a smart, easy-to-use platform for ultrastructural pathologic diagnosis. The platform is based on the recently developed MiniTEM instrument, a highly automated table-top TEM. In the project we develop image analysis methods for guided as well as fully automated search for, and analysis of, structures of interest. In addition, we enrich MiniTEM with an integrated database for convenient image handling and traceability. These points were identified by user representatives as crucial for a cost-effective diagnostic platform. We show strategies and results for using image analysis and machine learning to automatically search for objects/regions of interest at low magnification, and for combining multiple object instances acquired at high magnification to enhance the nm-scale details necessary for a correct diagnosis. This is exemplified for the diagnosis of primary ciliary dyskinesia and renal disorders. The automation in imaging and analysis within the platform is a big step towards digital ultrapathology.
Physics-Informed Deep Learning for Improved Input Function Estimation in Motion-Blurred Dynamic [18F]FDG PET Images
Kinetic modeling enables in vivo quantification of tracer uptake and glucose metabolism in [18F]Fluorodeoxyglucose ([18F]FDG) dynamic positron emission tomography (dPET) imaging of mice. However, kinetic modeling requires the accurate determination of the arterial input function (AIF) during imaging, which is time-consuming and invasive. Recent studies have shown the efficacy of using deep learning to directly predict the input function, surpassing established methods such as the image-derived input function (IDIF). In this work, we trained a physics-informed deep learning-based input function prediction model (PIDLIF) to estimate the AIF directly from the PET images, incorporating a kinetic modeling loss during training. The proposed method uses a two-tissue compartment model over two regions, the myocardium and brain of the mice, and is trained on a dataset of 70 [18F]FDG dPET images of mice accompanied by the measured AIF during imaging. The proposed method had comparable performance to the network without a physics-informed loss, and when sudden movement causing blurring in the images was simulated, the PIDLIF model maintained high performance in severe cases of image degradation. The proposed physics-informed method exhibits an improved robustness that is promoted by physically constraining the problem, enforcing consistency for out-of-distribution samples. In conclusion, the PIDLIF model offers insight into the effects of leveraging physiological distribution mechanics in mice to guide a deep learning-based AIF prediction network in images with severe degradation as a result of blurring due to movement during imaging.
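A minimal sketch of how a two-tissue compartment model can act as a physics-informed consistency term on a predicted AIF: the predicted curve is pushed through the kinetic model and the resulting tissue curve is compared with the measured regional PET curve. The rate constants, the simple Euler integration, and the single-region loss are illustrative assumptions, not the published PIDLIF training objective.

    # Physics-informed consistency term built on the two-tissue compartment model.
    # Rate constants and the Euler integration are assumed values for illustration.
    import torch

    def two_tissue_tac(aif, k1, k2, k3, k4, dt):
        """Integrate dC1/dt = K1*Cp - (k2+k3)*C1 + k4*C2 and dC2/dt = k3*C1 - k4*C2."""
        c1 = torch.zeros_like(aif[..., :1])
        c2 = torch.zeros_like(aif[..., :1])
        tac = []
        for t in range(aif.shape[-1]):
            cp = aif[..., t:t+1]
            c1 = c1 + dt * (k1 * cp - (k2 + k3) * c1 + k4 * c2)
            c2 = c2 + dt * (k3 * c1 - k4 * c2)
            tac.append(c1 + c2)
        return torch.cat(tac, dim=-1)

    def physics_loss(predicted_aif, measured_tissue_tac, dt=0.1):
        # Example rate constants for one tissue region (assumed, not fitted).
        k1, k2, k3, k4 = 0.1, 0.15, 0.05, 0.01
        model_tac = two_tissue_tac(predicted_aif, k1, k2, k3, k4, dt)
        return torch.mean((model_tac - measured_tissue_tac) ** 2)

    pred_aif = torch.rand(4, 60, requires_grad=True)   # batch of predicted input functions
    tissue   = torch.rand(4, 60)                        # measured regional time-activity curves
    loss = physics_loss(pred_aif, tissue)
    loss.backward()                                     # gradients flow back into the AIF predictor
    print(float(loss))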
Can representation learning for multimodal image registration be improved by supervision of intermediate layers?
Multimodal imaging and correlative analysis typically require image alignment. Contrastive learning can generate representations of multimodal images, reducing the challenging task of multimodal image registration to a monomodal one. Previously, additional supervision on intermediate layers in contrastive learning has improved biomedical image classification. We evaluate whether a similar approach improves the representations learned for registration and thereby boosts registration performance. We explore three approaches to adding contrastive supervision to the latent features of the bottleneck layer in the U-Nets encoding the multimodal images, and evaluate three different critic functions. Our results show that representations learned without additional supervision on latent features perform best in the downstream task of registration on two public biomedical datasets. We investigate the performance drop by drawing on recent insights into contrastive learning for classification and self-supervised learning. We visualize the spatial relations of the learned representations by means of multidimensional scaling, and show that additional supervision on the bottleneck layer can lead to partial dimensional collapse of the intermediate embedding space.
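A minimal sketch of the kind of intermediate-layer supervision studied here, assuming an InfoNCE-style critic on pooled bottleneck features of two modality-specific encoders; the critic choice, pooling, temperature, and loss weighting are illustrative assumptions, not the paper's exact setup.

    # Auxiliary contrastive (InfoNCE-style) loss on bottleneck features of two
    # modality-specific encoders. Corresponding images form the positive pairs.
    import torch
    import torch.nn.functional as F

    def info_nce(z_a, z_b, temperature=0.1):
        """Matching positions across modalities are positives; all others are negatives."""
        z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / temperature          # (N, N) similarity matrix
        targets = torch.arange(z_a.size(0))
        return F.cross_entropy(logits, targets)

    # toy bottleneck feature maps from two U-Net encoders, shape (B, C, H, W)
    feat_a = torch.randn(8, 64, 16, 16, requires_grad=True)
    feat_b = torch.randn(8, 64, 16, 16, requires_grad=True)

    # pool each bottleneck map to one vector per image before applying the critic
    z_a = feat_a.mean(dim=(2, 3))
    z_b = feat_b.mean(dim=(2, 3))

    aux_loss = info_nce(z_a, z_b)
    # total = main_representation_loss + lambda_aux * aux_loss   (weighting is an assumption)
    aux_loss.backward()
    print(float(aux_loss))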
Keypoint Counting Classifiers: Turning Vision Transformers into Self-Explainable Models Without Training
Current approaches for designing self-explainable models (SEMs) require complicated training procedures and specific architectures, which makes them impractical. With the advance of general-purpose foundation models based on Vision Transformers (ViTs), this impracticability becomes even more problematic. Therefore, new methods are necessary to provide transparency and reliability to ViT-based foundation models. In this work, we present a new method for turning any well-trained ViT-based model into a SEM without retraining, which we call Keypoint Counting Classifiers (KCCs). Recent works have shown that ViTs can automatically identify matching keypoints between images with high precision, and we build on these results to create an easily interpretable decision process that is inherently visualizable in the input. We perform an extensive evaluation which shows that KCCs improve human-machine communication compared to recent baselines. We believe that KCCs constitute an important step towards making ViT-based foundation models more transparent and reliable.
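A minimal sketch of a keypoint-counting decision rule in the spirit of this abstract: match a query image's patch tokens against those of labelled exemplars by mutual nearest neighbours, count matches per class, and predict the class with the most matches. The ViT patch tokens are stubbed with random tensors, and the paper's exact matching criterion may differ.

    # Keypoint-counting classification over (stubbed) ViT patch tokens.
    import torch
    import torch.nn.functional as F

    def mutual_nn_matches(q_tokens, e_tokens):
        """Count mutual nearest-neighbour pairs between two sets of patch tokens."""
        q = F.normalize(q_tokens, dim=1)
        e = F.normalize(e_tokens, dim=1)
        sim = q @ e.t()                                  # (Nq, Ne) cosine similarities
        q_to_e = sim.argmax(dim=1)                       # best exemplar token per query token
        e_to_q = sim.argmax(dim=0)                       # best query token per exemplar token
        mutual = e_to_q[q_to_e] == torch.arange(q.size(0))
        return int(mutual.sum())

    # stand-ins for ViT patch tokens (e.g. 196 tokens of dimension 384 per image)
    query = torch.randn(196, 384)
    exemplars = {"class_a": [torch.randn(196, 384) for _ in range(3)],
                 "class_b": [torch.randn(196, 384) for _ in range(3)]}

    counts = {cls: sum(mutual_nn_matches(query, ex) for ex in toks)
              for cls, toks in exemplars.items()}
    print(counts, "->", max(counts, key=counts.get))

Because the counting happens over spatial patch tokens, each counted match points to a specific image location, which is what makes the decision directly visualizable in the input.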
Fast Voxel-Wise Kinetic Modeling in Dynamic PET using a Physics-Informed CycleGAN
Tracer kinetic modeling serves a vital role in diagnosis, treatment planning, tracer development, and oncology, but burdens practitioners with complex and invasive arterial input function (AIF) estimation. We adapt a physics-informed CycleGAN, which has shown promise in DCE-MRI quantification, to dynamic PET quantification. Our experiments demonstrate sound AIF predictions and parameter maps that closely resemble the reference.