44 result(s) for "Datoriserad bildbehandling" (Swedish: "computerized image processing")
Automated Training of Deep Convolutional Neural Networks for Cell Segmentation
Deep Convolutional Neural Networks (DCNN) have recently emerged as superior for many image segmentation tasks. DCNN performance is, however, heavily dependent on the availability of large amounts of problem-specific training samples. Here we show that DCNNs trained on ground truth created automatically using fluorescently labeled cells perform similarly to DCNNs trained on manual annotations.
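Comparing networks trained on automatic versus manual ground truth comes down to a segmentation overlap metric; a minimal sketch of the Dice coefficient in NumPy (our illustration with hypothetical masks, not the authors' pipeline):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Hypothetical masks: a manual annotation vs. a network prediction
manual = np.zeros((8, 8), dtype=bool)
manual[2:6, 2:6] = True        # 16 foreground pixels
predicted = np.zeros((8, 8), dtype=bool)
predicted[3:6, 2:6] = True     # 12 pixels, all inside the manual mask

score = dice(manual, predicted)  # 2*12 / (16 + 12) ≈ 0.857
```

Identical masks score 1.0; a Dice close to that between predictions and held-out manual annotations is the usual evidence that the automatic ground truth is "good enough".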
Finish line distinctness and accuracy in 7 intraoral scanners versus conventional impression: an in vitro descriptive comparison
Background: Several studies have evaluated the accuracy of intraoral scanners (IOS), but data are lacking on variations between IOS systems in the depiction of the critical finish line and in finish line accuracy. The aim of this study was to analyze the level of finish line distinctness (FLD) and finish line accuracy (FLA) in 7 intraoral scanners (IOS) and one conventional impression (IMPR), and further to assess parameters of resolution, tessellation, topography, and color. Methods: A dental model with a crown preparation including supra- and subgingival finish lines was reference-scanned with an industrial scanner (ATOS) and scanned with seven IOS: 3M, CS3500, CS3600, DWIO, Omnicam, Planscan and Trios. An IMPR was taken and poured, and the model was scanned with a laboratory scanner. The ATOS scan was cropped at the finish line and best-fit aligned for 3D Compare Analysis (Geomagic). Accuracy was visualized, and descriptive analysis was performed. Results: All IOS except Planscan had comparable overall accuracy; however, FLD and FLA varied substantially. Trios presented the highest FLD and, with CS3600, the highest FLA. 3M and DWIO had low overall FLD and low FLA in subgingival areas, whilst Planscan had overall low FLD and FLA, as well as lower general accuracy. IMPR presented high FLD, except in subgingival areas, and high FLA. Trios had the highest resolution, by a factor of 1.6 to 3.1 among IOS, followed by IMPR, DWIO, Omnicam, CS3500, 3M, CS3600 and Planscan. Tessellation was found to be non-uniform except in 3M and DWIO. Topographic variation was found for 3M and Trios, with deviations below +/− 25 μm for Trios. Inclusion of color enhanced identification of the finish line in Trios, Omnicam and CS3600, but not in Planscan. Conclusions: There were sizeable variations between IOS, with both higher and lower FLD and FLA than IMPR. High FLD was related more to high localized finish line resolution and non-uniform tessellation than to high overall resolution. Topography variations were low. Color improved finish line identification in some IOS. It is imperative that clinicians critically evaluate the digital impression, being aware of varying technical limitations among IOS, particularly when challenging subgingival conditions apply.
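The "best-fit aligned" step underlying such a 3D Compare analysis is a rigid least-squares fit of one point set onto a reference. A commercial tool like Geomagic likely uses an ICP-style fit on dense surfaces; assuming known point correspondences, the core can be sketched with the Kabsch algorithm (our simplified stand-in, not the actual software):

```python
import numpy as np

def kabsch_align(moving: np.ndarray, fixed: np.ndarray):
    """Rigid (rotation + translation) least-squares fit of one point
    cloud onto another; returns aligned points and per-point deviation."""
    mu_m, mu_f = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mu_m).T @ (fixed - mu_f)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    aligned = (moving - mu_m) @ R.T + mu_f
    deviation = np.linalg.norm(aligned - fixed, axis=1)
    return aligned, deviation

# Hypothetical scan: reference points rotated 90° about z and shifted
rng = np.random.default_rng(0)
fixed = rng.random((50, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
moving = fixed @ Rz.T + np.array([5.0, -2.0, 1.0])
_, dev = kabsch_align(moving, fixed)
# deviations are ~0 for an exact rigid transform; on real scans the
# residual deviation map is what gets color-coded as "accuracy"
```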
An image registration method for voxel-wise analysis of whole-body oncological PET-CT
Whole-body positron emission tomography-computed tomography (PET-CT) imaging in oncology provides comprehensive information on each patient's disease status. However, image interpretation of volumetric data is a complex and time-consuming task. In this work, an image registration method targeted towards computer-aided voxel-wise analysis of whole-body PET-CT data was developed. The method uses both CT images and tissue segmentation masks in parallel to spatially align images step by step. To evaluate its performance, a set of baseline PET-CT images of 131 classical Hodgkin lymphoma (cHL) patients and longitudinal image series of 135 head and neck cancer (HNC) patients were registered between and within subjects according to the proposed method. Results showed that major organs and anatomical structures were generally registered correctly. Whole-body inverse consistency vector and intensity magnitude errors were on average less than 5 mm and 45 Hounsfield units, respectively, in both registration tasks. Registration times were practical, and the nearly automatic pipeline enabled efficient image processing. Metabolic tumor volumes of the cHL patients and registration-derived therapy-related tissue volume changes of the HNC patients, mapped to template spaces, served as proof of concept. In conclusion, the method established a robust point correspondence and enabled quantitative visualization of group-wise image features at the voxel level.
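The inverse consistency vector error reported here measures how far composing the forward and backward transformations lands from the identity. A toy 1D illustration of the metric (the paper works with dense 3D deformation fields; the analytic warp below is our own assumption for demonstration):

```python
import numpy as np

def inverse_consistency_error(forward, backward, points):
    """Distance between backward(forward(x)) and x: zero for a
    perfectly inverse-consistent pair of transformations."""
    return np.abs(backward(forward(points)) - points)

# Hypothetical 1D deformation and a naive "inverse" obtained by
# negating the displacement, which is only approximately correct
forward = lambda x: x + 0.1 * np.sin(x)
naive_backward = lambda x: x - 0.1 * np.sin(x)

x = np.linspace(0.0, 2.0 * np.pi, 100)
err = inverse_consistency_error(forward, naive_backward, x)
# err is small but nonzero: negating a displacement field does not
# invert it exactly, which is why the error is worth reporting
```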
Human Immunodeficiency Virus-Infected Women Have High Numbers of CD103−CD8+ T Cells Residing Close to the Basal Membrane of the Ectocervical Epithelium
Genital mucosa is the main portal of entry for various incoming pathogens, including human immunodeficiency virus (HIV), hence it is an important site for host immune defenses. Tissue-resident memory T (TRM) cells defend tissue barriers against infections and are characterized by expression of CD103 and CD69. In this study, we describe the composition of CD8+ TRM cells in the ectocervix of healthy and HIV-infected women. Study samples were collected from healthy Swedish and Kenyan HIV-infected and uninfected women. Customized computerized image-based in situ analysis was developed to assess the ectocervical biopsies. Genital mucosa and blood samples were assessed by flow cytometry. Although the ectocervical epithelium of healthy women was populated with bona fide CD8+ TRM cells (CD103+CD69+), women infected with HIV displayed a high frequency of CD103−CD8+ cells residing close to their epithelial basal membrane. Accumulation of CD103−CD8+ cells was associated with chemokine expression in the ectocervix and HIV viral load. CD103+CD8+ and CD103−CD8+ T cells expressed cytotoxic effector molecules in the ectocervical epithelium of healthy and HIV-infected women. In addition, women infected with HIV had decreased frequencies of circulating CD103+CD8+ T cells. Our data provide insight into the distribution of CD8+ TRM cells in human genital mucosa, a critically important location for immune defense against pathogens, including HIV.
Facilitating Ultrastructural Pathology through Automated Imaging and Analysis
Transmission electron microscopy (TEM) is an important diagnostic tool for analyzing human tissue at the nanometer scale. It is the only option, or gold standard, for diagnosing several disorders, e.g., ciliary and renal diseases and rare cancers. However, conventional TEM microscopes are highly manual, technically complex, and require a special environment to house the bulky and sensitive machines. Interpretation of the information is subjective, time-consuming, and relies on a high level of expertise which, unfortunately, is rare for this specialty within pathology. Here, we present methods and results from an ongoing project whose goal is to develop a smart and easy-to-use platform for ultrastructural pathologic diagnoses. The platform is based on the recently developed MiniTEM instrument, a highly automated table-top TEM. In the project, we develop image analysis methods for guided as well as fully automated search for, and analysis of, structures of interest. In addition, we enrich MiniTEM with an integrated database for convenient image handling and traceability. These points were identified by user representatives as crucial for creating a cost-effective diagnostic platform. We show strategies and results for using image analysis and machine learning to automatically search for objects/regions of interest at low magnification, as well as for combining multiple object instances acquired at high magnification to enhance the nanometer-scale details necessary for correct diagnosis. This is exemplified for diagnosing primary ciliary dyskinesia and renal disorders. The automation in imaging and analysis within the platform is a significant step towards digital ultrastructural pathology.
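Combining multiple object instances to enhance detail works because the structure adds coherently while independent noise averages out, shrinking roughly as 1/√N. A toy demonstration of this principle (our own sketch with a synthetic 1D profile, not MiniTEM code):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0, 4 * np.pi, 256))  # stand-in for a structure profile
noise_sigma = 1.0
n_frames = 64

# Each acquired instance is the same structure plus independent noise
frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, signal.size))
averaged = frames.mean(axis=0)

single_frame_rmse = np.sqrt(np.mean((frames[0] - signal) ** 2))
averaged_rmse = np.sqrt(np.mean((averaged - signal) ** 2))
# averaging 64 aligned instances cuts the noise roughly 8-fold (sqrt(64))
```

In practice the instances must first be registered to a common frame, which is exactly where the automated search-and-alignment machinery earns its keep.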
Label-free deep learning-based species classification of bacteria imaged by phase-contrast microscopy
Reliable detection and classification of bacteria and other pathogens in the human body, animals, food, and water is crucial for improving and safeguarding public health. For instance, identifying the species and its antibiotic susceptibility is vital for effective bacterial infection treatment. Here we show that phase-contrast time-lapse microscopy combined with deep learning is sufficient to classify four species of bacteria relevant to human health. The classification is performed on living bacteria and does not require fixation or staining, meaning that the bacterial species can be determined as the bacteria reproduce in a microfluidic device, enabling parallel determination of susceptibility to antibiotics. We assess the performance of convolutional neural networks and vision transformers, where the best model attained a class-average accuracy exceeding 98%. Our successful proof-of-principle results suggest that the methods should be challenged with data covering more species and clinically relevant isolates for future clinical use.
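Class-average accuracy, the headline metric here, weights each species equally regardless of how many images it contributes, which matters when classes are imbalanced. A minimal computation from a confusion matrix (illustrative numbers, not the paper's results):

```python
import numpy as np

def class_average_accuracy(confusion: np.ndarray) -> float:
    """Mean of per-class recalls from a confusion matrix whose rows
    are true classes and columns are predicted classes."""
    per_class = confusion.diagonal() / confusion.sum(axis=1)
    return float(per_class.mean())

# Hypothetical 4-species confusion matrix (rows: true, cols: predicted)
cm = np.array([
    [98,  1,  1,  0],
    [ 0, 99,  0,  1],
    [ 2,  0, 97,  1],
    [ 0,  1,  0, 99],
])
acc = class_average_accuracy(cm)  # (0.98 + 0.99 + 0.97 + 0.99) / 4
```

With equal class counts this coincides with plain accuracy; the two diverge as soon as one species dominates the dataset.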
Deep multiple instance learning versus conventional deep single instance learning for interpretable oral cancer detection
The current medical standard for setting an oral cancer (OC) diagnosis is histological examination of a tissue sample taken from the oral cavity. This process is time-consuming and more invasive than the alternative approach of acquiring a brush sample followed by cytological analysis. Using a microscope, skilled cytotechnologists are able to detect changes due to malignancy; however, introducing this approach into clinical routine is associated with challenges such as a lack of resources and experts. To design a trustworthy OC detection system that can assist cytotechnologists, we are interested in deep learning based methods that can reliably detect cancer given only per-patient labels (thereby minimizing annotation bias) and also provide information regarding which cells are most relevant for the diagnosis (thereby enabling supervision and understanding). In this study, we compare two approaches suitable for OC detection and interpretation: (i) a conventional single instance learning (SIL) approach and (ii) a modern multiple instance learning (MIL) method. To facilitate systematic evaluation of the considered approaches, we, in addition to a real OC dataset with patient-level ground truth annotations, also introduce a synthetic dataset, PAP-QMNIST. This dataset shares several properties of OC data, such as image size and a large and varied number of instances per bag, and may therefore act as a proxy model of a real OC dataset, while, in contrast to OC data, it offers reliable per-instance ground truth, as defined by design. PAP-QMNIST has the additional advantage of being visually interpretable for non-experts, which simplifies analysis of the behavior of the methods. For both OC and PAP-QMNIST data, we evaluate the performance of the methods using three different neural network architectures. Our study indicates, somewhat surprisingly, that on both synthetic and real data, the performance of the SIL approach is better than or equal to that of the MIL approach. Visual examination by a cytotechnologist indicates that the methods manage to identify cells which deviate from normality, including malignant cells as well as cells suspicious for dysplasia. We share the code as open source.
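The SIL/MIL distinction boils down to how a per-patient (bag) score is formed from per-cell (instance) scores. A toy comparison of max-pooling MIL aggregation with a SIL-style average (our deliberate simplification; the study compares full neural architectures, not these pooling rules alone):

```python
import numpy as np

def mil_bag_score(instance_scores: np.ndarray) -> float:
    """Max-pooling MIL: a bag is positive if any instance looks malignant."""
    return float(instance_scores.max())

def sil_bag_score(instance_scores: np.ndarray) -> float:
    """SIL-style aggregation: average independent per-instance scores."""
    return float(instance_scores.mean())

# Hypothetical bag: 99 normal-looking cells and one suspicious cell
scores = np.full(100, 0.05)
scores[42] = 0.95

mil = mil_bag_score(scores)  # 0.95 -> flags the patient
sil = sil_bag_score(scores)  # 0.059 -> diluted by the many normal cells
```

The max also points at which instance drove the decision, which is one route to the per-cell interpretability the abstract asks for.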
An image analysis toolbox for high-throughput C. elegans assays
We present the WormToolbox, a freely available toolbox for high-throughput analysis of image-based Caenorhabditis elegans phenotypes in liquid culture. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. The WormToolbox is available through the open-source CellProfiler project and enables objective scoring of whole-worm high-throughput image-based assays of C. elegans for the study of diverse biological pathways relevant to human disease, and should prove useful for image-based screens.
Improved geometric accuracy of whole body diffusion-weighted imaging at 1.5T and 3T using reverse polarity gradients
Whole body diffusion-weighted imaging (WB-DWI) is increasingly used in oncological applications, but suffers from misalignments due to susceptibility-induced geometric distortion. As such, DWI and structural images acquired in the same scan session are not geometrically aligned, leading to difficulties in e.g. lesion detection and segmentation. In this work we assess the performance of the reverse polarity gradient (RPG) method for correction of WB-DWI geometric distortion. Multi-station DWI and structural magnetic resonance imaging (MRI) data of healthy controls were acquired at 1.5T (n = 20) and 3T (n = 20). DWI data were distortion corrected using the RPG method based on b = 0 s/mm² (b0) and b = 50 s/mm² (b50) DWI acquisitions. Mutual information (MI) between low b-value DWI and structural data increased with distortion correction (P < 0.05), while improvements in region of interest (ROI) based similarity metrics, comparing the position of incidental findings on DWI and structural data, were location dependent. Small numerical differences between non-corrected and distortion-corrected apparent diffusion coefficient (ADC) values were measured. Visually, the distortion correction improved spine alignment at station borders, but introduced registration-based artefacts mainly for the spleen and kidneys. Overall, the RPG distortion correction gave improved geometric accuracy for WB-DWI data acquired at 1.5T and 3T. The b0- and b50-based distortion corrections had very similar performance.
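Mutual information, the alignment metric used above, can be estimated from a joint intensity histogram of the two images; better alignment concentrates the joint histogram and raises MI. A minimal sketch (the study may well use a more elaborate estimator; the images here are synthetic):

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """MI in nats from a joint histogram of two equally shaped images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image b
    nz = p_ab > 0                           # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(7)
img = rng.random((64, 64))
shifted = np.roll(img, 5, axis=1)  # misaligned copy

# MI of an image with itself exceeds MI with a shifted copy, which is
# why reducing geometric distortion shows up as an MI increase
mi_aligned = mutual_information(img, img)
mi_shifted = mutual_information(img, shifted)
```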
INSPIRE: Intensity and spatial information-based deformable image registration
We present INSPIRE, a top-performing general-purpose method for deformable image registration. INSPIRE brings distance measures which combine intensity and spatial information into an elastic B-splines-based transformation model, and incorporates an inverse inconsistency penalization supporting symmetric registration performance. We introduce several theoretical and algorithmic solutions which provide high computational efficiency, and thereby applicability of the proposed framework in a wide range of real scenarios. We show that INSPIRE delivers highly accurate, as well as stable and robust, registration results. We evaluate the method on a 2D dataset created from retinal images, characterized by the presence of networks of thin structures; here INSPIRE exhibits excellent performance, substantially outperforming the widely used reference methods. We also evaluate INSPIRE on the Fundus Image Registration Dataset (FIRE), which consists of 134 pairs of separately acquired retinal images; again, INSPIRE substantially outperforms several domain-specific methods. Finally, we evaluate the method on four benchmark datasets of 3D magnetic resonance images of brains, for a total of 2088 pairwise registrations, where a comparison with 17 other state-of-the-art methods shows that INSPIRE provides the best overall performance. Code is available at github.com/MIDA-group/inspire.
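In a B-splines-based transformation model such as the one named above, the deformation is a displacement field formed as a weighted sum of cubic B-spline basis functions centred on a coarse control-point grid. A 1D sketch of that construction (a generic free-form deformation, not the INSPIRE code):

```python
import numpy as np

def bspline3(t: np.ndarray) -> np.ndarray:
    """Cubic B-spline kernel with support on [-2, 2]."""
    t = np.abs(t)
    out = np.zeros_like(t)
    near = t < 1
    far = (t >= 1) & (t < 2)
    out[near] = 2.0 / 3.0 - t[near] ** 2 + 0.5 * t[near] ** 3
    out[far] = (2.0 - t[far]) ** 3 / 6.0
    return out

def ffd_displacement(x, control_x, coeffs, spacing):
    """1D free-form deformation: displacement as a weighted sum of
    cubic B-spline basis functions on a control-point grid."""
    u = np.zeros_like(x)
    for cx, c in zip(control_x, coeffs):
        u += c * bspline3((x - cx) / spacing)
    return u

# Hypothetical control grid with a single active control point
spacing = 1.0
control_x = np.arange(-2.0, 8.0, spacing)
coeffs = np.zeros_like(control_x)
coeffs[4] = 1.0                    # control point at x = 2.0
x = np.linspace(0.0, 4.0, 81)
u = ffd_displacement(x, control_x, coeffs, spacing)
# peak displacement sits at the control point and equals B3(0) = 2/3
```

Registration then amounts to optimizing the coefficients of this smooth, locally supported model against the chosen distance measure, which keeps the number of parameters far below one per voxel.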